Rootclaim relies on the power of the crowd to make sure every analysis includes all relevant information, and each input is supported by solid reasoning and reliable sources.
This page will guide you through the process of challenging any inputs that you think you can improve, and adding new information if you feel the analysis is incomplete.
Rootclaim analyzes complex issues very differently than most people do. It is based on a proven mathematical model, which means that the conclusion is a necessary outcome of the inputs of the analysis.
If a Rootclaim conclusion is wrong, there must be a mistaken input element. This page will help you identify it. If you are still unable to do so, it means your intuitive feelings, however strong, are most likely due to one of the many flaws of human reasoning that we all share. Indeed, Rootclaim consistently publishes analyses with conclusions that many people initially find controversial, but that are later vindicated as new discoveries emerge.
Since most people are not experienced in probabilistic reasoning, objections to a Rootclaim conclusion are often based on logical arguments. Attacking flawed reasoning with logic is the common way humans argue, leaving them feeling they have a “winning argument” that just can’t be ignored. In reality, such magical arguments do not exist (understand why), and real-world complex problems require probabilistic reasoning.
The rest of the page will help you find the mistake by walking through the types of inputs in a Rootclaim analysis, and the possible inaccuracy in each.
The Rootclaim engine carries out the final calculation once all the inputs are in place, but it can’t invent new hypotheses by itself. This requires the kind of intelligence that only humans possess.
If you think you have a hypothesis that is better than any of the existing ones (i.e. it is more likely and fits more of the evidence), simply add a comment to the analysis describing the new hypothesis and why you think it will receive a higher likelihood once analyzed.
One way Rootclaim might have gone wrong is if some relevant piece of information is missing from the analysis, or if the evaluation of the likelihood of an existing piece of evidence is inaccurate.
If you have important evidence that is not currently covered, simply add a comment on the analysis page, provide a source to the evidence, and explain how you think it affects the hypotheses.
The analysis contains an estimate of how each group of evidence affects the likelihood of each hypothesis. People don't like thinking in numbers, but in practice, when they reach a conclusion based on uncertain data, they are implicitly assuming many numbers. Often, they would disagree with these numbers if they actually knew the values they were assuming. In Rootclaim analyses, by contrast, every number is brought to light and is subject to scrutiny. Ideally these numbers are based on quantitative studies of similar cases, but when those are not available, guesstimates must be used.
If you have a better estimate for one of the numbers, add a comment with your reasoning, preferably with sources to relevant studies or statistics.
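To see how these likelihood estimates drive the conclusion, here is a minimal sketch of the underlying Bayesian update. The hypotheses, priors, and likelihoods below are invented placeholders for illustration, not values from any Rootclaim analysis:

```python
# Minimal Bayesian update: two hypotheses, one piece of evidence.
# All numbers are illustrative placeholders, not real analysis inputs.

priors = {"H1": 0.5, "H2": 0.5}        # initial probability of each hypothesis
likelihoods = {"H1": 0.8, "H2": 0.1}   # P(evidence | hypothesis): the estimated numbers

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # H1 ≈ 0.889, H2 ≈ 0.111
```

Changing any one of these inputs changes the posterior, which is why every estimate in an analysis is open to challenge.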
Sometimes several pieces of evidence together have a weaker effect on the overall results than might be expected. This is usually because those pieces of evidence share a dependency, or common cause, and therefore the occurrence of one event makes the others more predictable.
These dependencies will usually be addressed by gathering each set of interdependent evidence in one evidence group. In this manner, the likelihood of the common cause will be accounted for and will appear only once in the calculation.
The evidence groups are ordered, and the effect of each one is estimated given all of the evidence contained in the previous groups. In most cases evidence groups are assessed independently of other groups. The analysis should strive to make them independent, but if that’s impossible, the evidence group is assessed taking into account only the preceding evidence groups, not the subsequent ones. For example, assume a first evidence group shows that a suspect was seen in a certain location, and a second evidence group shows that the suspect’s fingerprints were found on the murder weapon, which was found at that location. In the first evidence group we would compare the likelihood of the suspect being in that location if he’s guilty with the likelihood he was there if he’s innocent. But in the second evidence group we would compare the likelihoods after already assuming he was at the scene: the likelihoods he would touch the weapon given that we already know he was there. This ensures that dependent events are assessed only once.
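The suspect example above can be sketched numerically. All probabilities here are made up for illustration; the point is only the structure: the second evidence group is assessed conditional on the first, so the shared cause (being at the scene) is counted once.

```python
# Illustrative conditional-likelihood calculation for the suspect example.
# Every probability is a made-up placeholder demonstrating the structure.

prior_odds = 1.0  # guilty : innocent, before any evidence

# Evidence group 1: suspect seen at the location.
p_at_location_given_guilty = 0.9
p_at_location_given_innocent = 0.1

# Evidence group 2: fingerprints on the weapon, assessed GIVEN group 1,
# i.e. already assuming the suspect was at the location.
p_prints_given_guilty_and_there = 0.8
p_prints_given_innocent_and_there = 0.2

# Each group multiplies the odds by its likelihood ratio.
odds = prior_odds
odds *= p_at_location_given_guilty / p_at_location_given_innocent            # group 1
odds *= p_prints_given_guilty_and_there / p_prints_given_innocent_and_there  # group 2

p_guilty = odds / (1 + odds)
print(round(p_guilty, 3))  # 0.973
```

Had the fingerprint likelihoods been estimated without conditioning on group 1, the fact that the suspect was at the scene would have inflated the odds twice.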
Evidence groups occasionally contain lower level groups of evidence, which themselves may contain more evidence and so forth. This creates an easy-to-read analysis where you can drill down whenever some part of the analysis is unclear. It also allows the analysis to start with the top-level view that can be published very quickly after the event occurs, and then expand as more evidence and feedback from the crowd comes in.
If you think there is a dependency that hasn’t been addressed yet for any of the hypotheses, simply add a comment on the analysis page and explain which pieces of evidence should be grouped together in order to accurately assess a common cause.
Still confident the conclusion is wrong? You could be the first to win The Rootclaim $100,000 Challenge.
Rootclaim Launches Open Analysis Platform That Surpasses Human Reasoning
A substantial body of research has shown that the human brain is unreliable when it comes to accurately assessing complex problems. This means the only way to navigate a sea of half-truths is to complement humanity's fallible intuition with objective probabilistic analysis.
Anti-Fraud Experts Launch News-Accuracy Site, Find U.S. Probably Blamed Wrong Side for Syria Chemical Attack
In applying the fraud-detection approach, Rootclaim seeks to break news events into similar bite-sized pieces and assign values to the individual pieces of evidence, factoring for uncertainty and source reliability. The individual pieces are then loaded into an algorithm that draws big-picture conclusions.