I can’t tell you how many times I’ve heard (or seen) someone post something along the lines of “I’ve done my research,” or “do your own research.” Sadly, most people have no idea what this means, and instead rely on the AI-generated paragraph at the top of a Google search. So, I went long and deep this week. In this week’s video, I talk about HOW to do your own research: parts of a study, the hierarchy of evidence, key terms, all that fun stuff. You CAN do your own research and inoculate yourself against the many peddlers of bullshit that are pervasive throughout the interwebz. It’s a long video, but you can also read the blog post/summary below it.
Chapters
00:00 Intro & why “do your own research” fails
00:29 Headlines vs primary sources (the CrossFit/NSCA example)
04:45 What “do your own research” really means
05:20 What we’ll cover today
06:00 Scientific method (and why replication rules)
10:16 “Trust the science” & what certainty really means
12:12 Key terms: correlation, causation, certainty
18:12 Stats basics: R, R², p-values, absolute vs relative risk
21:32 Spurious correlations (and why they seduce)
25:41 Confounders & backwards conclusions
26:10 Parts of a study & the bias-proof reading order
33:08 What to do when methods are dense (stay in your lane)
37:04 The reading order recap
38:06 Hierarchy of evidence (what counts most)
43:23 Why RCTs (and when they’re not ethical/feasible)
50:02 Mechanisms vs outcomes (don’t over-extrapolate)
53:46 Language tells you the truth: red flags vs green flags
56:38 Put it to the test: runner age-grading example
1:06:00 Data massaging & averages—what to watch for
1:06:52 When to lean on meta-analyses/systematic reviews
1:08:55 Wrap-up & next steps

TL;DR
Most “do your own research” isn’t research—it’s headline surfing. Real DIY research means: 1) get the primary source, 2) read data → methods → discussion, 3) understand basic statistics (correlation ≠ causation), 4) rank the quality of evidence (RCTs > observational > anecdotes), 5) look for replication and systematic reviews/meta-analyses, and 6) watch out for red-flag claims (absolutes, conspiracy vibes, “secret knowledge,” selling the one true fix).
Why “Do Your Own Research” Usually Goes Wrong
- Telephone effect: Media often quotes media, not the paper. Click through until you hit the actual study PDF/DOI.
- Abstract-only trap: Abstracts summarize and spin. They’re not the data.
- Cherry-picking: One study ≠ consensus. Strong claims need multiple studies pointing the same way.
Latin you’ll love: De omnibus dubitandum — “Doubt everything.” Doubt smartly, not cynically.
The Scientific Method, in Plain English
- Observe → Ask → Research → Hypothesize
- Test by trying to DISPROVE the hypothesis (not prove it).
- Replicate across populations, methods, and funding sources.
- Share, so others can test and challenge you.
- Over time, evidence piles up into consensus (never 100% certainty—just overwhelmingly likely).
Terms You’ll See (and How to Read Them) as You Do Your Own Research
- Linked/Associated/Correlated: The lines move together. Not proof of cause.
- Causation: X makes Y happen. Needs experiments or tight causal inference.
- R (correlation coefficient, −1 to +1): Strength and direction of a linear relationship.
- R² (0–1): The share of variation in the outcome that the model explains.
- p-value (< .05 is the usual cutoff): How surprising the data would be if there were no real effect. It does not measure effect size or practical importance.
- Relative vs Absolute Risk: A “25% increase” might mean 4% → 5% (+1 percentage point absolute). Always ask for both; the quick sketch below walks through the arithmetic.
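If it helps to see that arithmetic spelled out, here’s a minimal sketch in Python. The numbers are made up to mirror the 4% → 5% example above; they don’t come from any real study.

```python
# Hypothetical risks mirroring the 4% -> 5% example above (made-up numbers).
baseline_risk = 0.04   # 4 events per 100 people without the exposure
exposed_risk = 0.05    # 5 events per 100 people with the exposure

# Absolute change: how many extra events per 100 people.
absolute_increase = exposed_risk - baseline_risk       # 0.01 -> +1 percentage point

# Relative change: that same difference expressed as a fraction of baseline.
relative_increase = absolute_increase / baseline_risk  # 0.25 -> "25% higher risk"

print(f"Absolute increase: {absolute_increase:.1%}")   # 1.0%
print(f"Relative increase: {relative_increase:.1%}")   # 25.0%
```

Same data, two very different-sounding headlines. Whenever you see a relative number, go hunt down the absolute one.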
Hierarchy of Evidence (a Quick Pyramid) for Doing Your Own Research
Top: Systematic reviews & meta-analyses
Next: Randomized controlled trials (double-blind, placebo-controlled when ethical)
Then: Prospective cohort studies (longitudinal observational)
Then: Cross-sectional & case-control studies (observational “snapshots”)
Bottom: Expert opinion, editorials, anecdotes
Anecdote can inspire hypotheses (especially in coaching), but it isn’t high-quality evidence.

The Order to Read a Study (so you don’t get spun)
1. Abstract (skim only): Confirm relevance. Don’t let it bias you.
2. Results/Data (tables/figures): What was actually measured, and by how much? Units? Effect size?
3. Methods: Who/what/when/how? Randomization? Blinding? Controls? Any confounders? Is the method appropriate?
4. Discussion/Conclusion: Do the authors overreach beyond their data?
5. Intro/Background (optional): For context and references to prior work.
Mechanisms vs Outcomes (don’t confuse the two)
A mechanism (e.g., gluconeogenesis) can exist without meaningfully changing outcomes in real life. Outcomes depend on dose, context, and the whole system. Mechanistic studies are hypothesis fuel, not final answers.
Ethical Limits (why RCTs aren’t always possible)
Placebo-controlled trials can be unethical (e.g., withholding lifesaving treatments). That’s why some questions rely more on strong observational designs plus triangulation from multiple lines of evidence.
Red Flags for Pseudoscience
- Absolutes (“100% safe,” “zero side effects,” “settled forever”).
- Secret knowledge or conspiracy framing (“they don’t want you to know”).
- One study used to nuke a mountain of contrary data.
- Mechanism inflation (“this pathway proves X will transform Y”).
- They’re selling the only fix.
Green flags for credible experts: “It depends.” “I don’t know.” Nuance. Probabilities. Replication.
A Mini Case Study: Runners & “Age Grading”
- Longitudinal data on 8 runners (5K, 10K, Half, Marathon), pre- and post-intervention.
- Age grading lets you compare performance across ages (like Sinclair/Wilks in strength sports).
- Raw race tables showed mixed results (about 50/50 improvements in most distances; marathon skewed worse).
- Averaging across all runners and distances, the paper concluded there was no effect.
- Takeaway: Averages can mask within-distance patterns. Always inspect the raw tables, not just the grand mean (the quick sketch below shows how this happens).
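Here’s a quick illustration of that takeaway, as a minimal Python sketch. The numbers below are invented for illustration (they are not the study’s actual data); the point is only to show how a grand mean can flatten per-distance patterns.

```python
# Invented (not the study's) age-graded percentage changes for 8 runners.
# Positive = improvement, negative = decline.
changes = {
    "5K":       [ 1.2,  0.8, -0.5,  1.0,  0.6, -0.3,  0.9,  0.4],
    "10K":      [ 0.7,  1.1, -0.2,  0.5,  0.9, -0.6,  0.8,  0.3],
    "Half":     [ 0.4, -0.1,  0.6,  0.2, -0.4,  0.5,  0.1,  0.3],
    "Marathon": [-1.5, -2.0, -0.8, -1.2, -2.4, -0.9, -1.8, -1.1],
}

# Per-distance averages: shorter races trend up, the marathon trends down.
for distance, values in changes.items():
    print(f"{distance:8s} mean change: {sum(values) / len(values):+.2f}")

# The grand mean across every runner and distance lands near zero,
# which is how a real pattern can get reported as "no effect."
all_values = [v for values in changes.values() for v in values]
print(f"Grand mean across everything: {sum(all_values) / len(all_values):+.2f}")
```

The per-distance means tell one story; the grand mean tells another. That’s exactly why you read the tables yourself.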
A Practical Checklist for Your Next “Study”
- Find the primary source. Click through until you reach the full paper.
- Scan data first. What changed, by how much, over what time, in whom?
- Interrogate methods. Randomized? Blinded? Adequate controls? Adequate sample size?
- Look for confounders. Lifestyle, baseline differences, measurement bias.
- Separate mechanism from outcome. Does it actually change performance/health?
- Rank the evidence. Where does this land on the pyramid?
- Seek replication. Systematic reviews/meta-analyses trump single trials.
- Translate to practice. What’s the effect size and is it worth the trade-offs?
FAQs
Is correlation ever useful?
Yes—it’s a starting point for hypotheses, not a finish line.
Do p-values prove truth?
No. They flag whether the data would be surprising if there were no real effect. Pair them with effect sizes, CIs, and study quality. (The little simulation after these FAQs shows how often “significant” results pop up by pure chance.)
Are anecdotes worthless?
They’re great for generating ideas in coaching—but test them before you scale them.
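To see why a lone p < .05 doesn’t prove much, here’s a small, purely illustrative simulation sketch (Python, assuming numpy and scipy are installed). Both groups are drawn from the exact same distribution, so there is no real effect, yet roughly 5% of comparisons still come out “significant” by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_experiments = 2000
significant = 0

for _ in range(n_experiments):
    # Both "groups" come from the same distribution: no real effect exists.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)

    # Standard two-sample t-test comparing the groups.
    result = stats.ttest_ind(group_a, group_b)
    if result.pvalue < 0.05:
        significant += 1

# Expect roughly 5% "significant" findings purely by chance.
print(f"'Significant' results with no real effect: {significant / n_experiments:.1%}")
```

That’s the whole point: a single p < .05 is one clue, not a verdict. Weigh it alongside effect sizes, confidence intervals, and the quality of the study.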
Want help turning research into results? Book a Free “No-Sweat” Intro at Viking Athletics to build a program backed by evidence and experience.