Part Two: Inference & Evaluation
After defining what is actually being claimed using the tools in Part One, we need to use inference and evaluation to build the bridge from evidence to conclusion.
Inference is the process of drawing a conclusion from evidence.
Evaluation is how you judge the quality of that inference.
There are three main types of inference, along with causal reasoning, which builds on them:
Deductive: If the premises are true and the reasoning is valid, then the conclusion must be true.
Validity: The form of the argument guarantees that if the premises are true, the conclusion cannot be false. (This is why we have to define the claim first. See Part One.)
Soundness: The argument is valid and its premises are actually true.
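To make validity concrete, here is a minimal Python sketch (my own illustration, not from the article) that brute-forces every truth assignment: an argument form is valid exactly when no assignment makes all the premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # Valid iff the conclusion holds in every truth assignment
    # that makes all of the premises true.
    return all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if all(prem(p, q) for prem in premises)
    )

# Modus ponens: "If P then Q; P; therefore Q" -- valid.
print(is_valid([implies, lambda p, q: p], lambda p, q: q))  # True

# Affirming the consequent: "If P then Q; Q; therefore P" -- invalid.
print(is_valid([implies, lambda p, q: q], lambda p, q: p))  # False
```

Note that the checker tests only the form. Soundness, whether the premises are actually true, is a separate, empirical question.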
Inductive: Generalises from samples and patterns to conclusions that are probable rather than guaranteed.
A classic example: Since the Sun has risen every morning so far, it can be inferred that it will rise again tomorrow.
Inductive arguments are judged by their strength and cogency.
Strength: Assuming the premises are true, how strongly do they support the conclusion?
Cogency: The argument is strong and all of its premises are actually true.
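To see why sample size matters for strength, here is a toy Python simulation (with illustrative numbers of my own, assuming a population in which exactly 70% of items have some property): larger samples land near the true rate far more reliably, so the inductive conclusions they support are stronger.

```python
import random

TRUE_RATE = 0.7  # assumed population rate, for illustration only

def sample_estimate(n):
    """Estimate the rate from a random sample of size n."""
    hits = sum(random.random() < TRUE_RATE for _ in range(n))
    return hits / n

random.seed(1)  # reproducible runs
for n in (10, 100, 1000):
    estimates = [sample_estimate(n) for _ in range(1000)]
    # How often does the estimate land within 5 points of the truth?
    close = sum(abs(e - TRUE_RATE) <= 0.05 for e in estimates) / len(estimates)
    print(f"n={n:>5}: within 0.05 of the truth in {close:.0%} of samples")
```

With these numbers the hit rate climbs from roughly a quarter of samples at n=10 to nearly all of them at n=1000, which is the intuition behind judging inductive strength.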
Abductive: Infers the hypothesis that best accounts for the observed facts.
These are assessed by:
Explanatory Power: The hypothesis accounts for all the facts in a simple manner.
Plausibility: It fits what we already know to be true.
Parsimony: It introduces the fewest new assumptions consistent with the data.
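One crude way to picture inference to the best explanation is to score each hypothesis by its prior plausibility multiplied by how well it predicts the observed facts. The Python sketch below uses the classic wet-grass example; the hypotheses and probabilities are made-up assumptions, purely for illustration.

```python
# Observed facts we want explained.
observations = {"wet_grass": True, "wet_street": True}

hypotheses = {
    # name: (prior plausibility, predicted probability of each fact)
    "it rained":     (0.30, {"wet_grass": 0.95, "wet_street": 0.95}),
    "sprinkler ran": (0.20, {"wet_grass": 0.90, "wet_street": 0.10}),
}

def score(prior, predictions):
    # Plausibility (prior) weighted by explanatory power (likelihood).
    s = prior
    for fact, happened in observations.items():
        p = predictions[fact]
        s *= p if happened else (1 - p)
    return s

for name, (prior, preds) in hypotheses.items():
    print(f"{name}: {score(prior, preds):.3f}")
# "it rained" scores higher because it accounts for both facts
# without needing a second, separate cause for the wet street.
```

Parsimony shows up implicitly: a hypothesis that needs extra assumptions to cover a fact pays for them in a lower score.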
Causal: The claim that X causes Y is credible when Y follows X, Y varies with X, and no better explanation fits.
A causal claim can be examined in a number of ways:
Time: The cause must precede the effect; check that the proposed cause actually happens first.
Size: The effect changes in a meaningful way as the proposed cause changes; a dose-response relationship strengthens the case.
How: There is a plausible mechanism linking cause to effect. A strong association counts in favour of causality, but does not prove it.
Counterfactual: Holding everything else fixed, if the proposed cause had not occurred, would the effect still have occurred?
Other Causes: Rule out confounders, factors that influence both the proposed cause and the outcome.
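The toy Python simulation below shows why this last check matters. A hidden factor Z drives both X and Y, and X itself has no effect, yet a naive comparison makes X look strongly associated with Y; comparing like with like within each level of Z makes the association vanish. All numbers are invented for illustration.

```python
import random

random.seed(0)

# Hidden confounder Z drives both X and Y; X has no effect on Y.
data = []
for _ in range(100_000):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = random.random() < (0.8 if z else 0.2)
    data.append((z, x, y))

def rate_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

# Naive comparison: Y looks far more common when X is present (~0.68 vs ~0.32).
print(rate_y([r for r in data if r[1]]),
      rate_y([r for r in data if not r[1]]))

# Stratify by Z: within each level, the X/no-X difference disappears.
for z in (True, False):
    with_x = [r for r in data if r[0] == z and r[1]]
    without_x = [r for r in data if r[0] == z and not r[1]]
    print(z, round(rate_y(with_x), 3), round(rate_y(without_x), 3))
```

Randomised trials, discussed below, achieve the same thing by design: random assignment breaks the link between Z and X.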
Evidence quality
Systematic reviews & meta-analyses: These can provide an estimate of the overall effect and determine the consistency of the evidence base across various studies. However, conclusions may depend on study quality and publication bias.
Randomised controlled trials: Random assignment balances confounders, known and unknown, across groups, making these the strongest single-study design for isolating cause and effect.
Well-designed observational studies: These can reveal strong associations when trials are impractical, but care must be taken to avoid bias in design and analysis.
Expert analysis with data: Well-documented case evidence and analysis may suggest a real signal, but they often lack controls, so care must be taken before inferring causation.
Anecdotes, testimonials, intuition: Personal stories are highly prone to bias. They are useful for generating hypotheses but not for supporting general conclusions.
It should be noted that, in principle, quality outweighs quantity: many weak studies do not beat one well-designed study.
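A fixed-effect meta-analysis makes this point numerically: studies are pooled with weights equal to the inverse of their variance, so one precise study can outweigh several noisy ones. The Python sketch below uses invented effect sizes and standard errors, purely for illustration.

```python
# (effect estimate, standard error) for each study -- invented numbers.
studies = [
    (0.10, 0.40),  # three small, noisy studies...
    (0.35, 0.45),
    (0.20, 0.50),
    (0.02, 0.05),  # ...and one large, precise study
]

# Inverse-variance weighting: precision, not headcount, decides the answer.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
# The precise study's weight (1/0.05**2 = 400) dwarfs the noisy ones'
# (about 4-6 each), so the pooled estimate sits near its 0.02.
```

Three noisy studies hinting at an effect barely move a pooled estimate anchored by one well-powered study near zero.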
Also check:
Primary vs Secondary Sources: It is preferable to use primary sources for original evidence. High-quality secondary sources can then be used for synthesis, context and developing broader conclusions.
Replication: Results should be independently replicated. A single study is usually not enough.
Data Availability: Are the raw data and code available so others can audit, rerun, and extend the analysis?
These are links to the other parts of this critical thinking series:
Part One: Analysing the Claim
Part Two: Inference and Evaluation
Part Three: Biases and Fallacies
Part Four: Challenging a Claim
Part Five: Practical Tools
I have also written a primer covering the basics of critical thinking; you can download a free PDF copy at my personal website, tommurphy.me.