Wednesday 24 July 2013

Critical thinking

I think that most of us are savvy enough to realise that we shouldn't take stories in the media at face value, but it occasionally strikes me that not enough people critically evaluate the things that they read. In the running world there are many areas where people hold quite fervent views on what is the "best" way to do something: be it relating to diet (e.g. high carb vs. high fat), running style (e.g. forefoot vs. heel striking), shoe choice (barefoot, minimalist, Hokas, et al.), training methods (speed work or not for ultras?), training aids (does compression gear work?), etc. But one of the big problems that I see cropping up time and again, particularly with social media like Facebook and Twitter, is confirmation bias - the tendency for people to take more notice of evidence that supports a view that they already hold. How often have you seen an advocate of barefoot running Tweet about the latest paper showing that barefoot running is more efficient? But how often do they Tweet about the latest research showing that no difference was found, or even that heel striking was shown to be preferable?

Let me just be clear: I am in no way commenting on the views themselves. I have my own opinions on all of these facets of running, borne out of experience (which is limited in terms of time, but extensive in other ways), as well as from reading the limited scientific literature that exists. But I in no way claim that these opinions are correct. If I see compelling evidence that convinces me otherwise, I try things out, use what works for me, and change my opinions (as everybody should). To be honest, my view on a lot of these questions in sports science is that there is no right answer. You do what works for you.


No; what I am commenting on in this post is the use of dodgy evidence to support these views. 


In most of the examples given above, there really is no clear answer one way or the other about which is "best", but you can pretty much always find a paper that leans one way or the other. But if the forefoot runners are waving around Dan Lieberman's pivotal 2004 paper showing that we evolved to run barefoot with a forefoot strike (Bramble & Lieberman (2004), 'Endurance running and the evolution of Homo', Nature, 432:345-352), what are we then to make of the more recent study from Brian Richmond showing that other habitually barefoot populations actually prefer to run with a heel strike pattern, particularly when running at endurance speeds (Hatala et al (2013), 'Variation in Foot Strike Patterns during Running among Habitually Barefoot Populations', PLoS ONE, 8(1): e52548)? Particularly in a field such as sports science, where sample sizes are typically small and results are relatively open to interpretation, you can usually find research to support any particular view (or at least research that can be interpreted in such a way as to support it - an important distinction). But taking these studies in isolation, and not considering other research which may show conflicting results, is a process known as cherry picking, and is one of the classic statistical fallacies.


So here's a question: when somebody posts the latest research showing that "such and such is bad for you", or "such and such is the best way to run" (I'm as guilty as anyone of doing this, by the way), do you ever go away and read the original research? Or do you just read the newspaper article that reported it? Chances are it's the latter. Which is absolutely fine of course - not everybody cares enough to read the boring sciencey stuff, and it has already been summarised for you so that you can understand what is going on. The problem is:
  1. You're relying on the fact that the reporter understands the research. Most news agencies will have a science writer (or a team of writers) who can understand the literature, but not all do. Also, "science" is a pretty big field, so even if you do have a <finger quotes>scientist</finger quotes> helping with the reporting, the research might be in a completely different field from the one where their expertise lies. So the person reporting the information really might not be the best person to disseminate it to the general public. There's a lot of skill involved in making science easy to understand for people who haven't spent the last 10 years studying it (I'm rubbish at it!), and it's easy to get the story wrong, or to miss important facts out. And let's be honest, The Daily Mail aren't realistically going to have the brightest scientific minds available to them for this, which maybe explains why their list of reported potential cancer cures is quite so extensive...
  2. You're relying on the fact that the reporter hasn't twisted the research for their own agenda. Unfortunately there is no such thing as unbiased media. Of any kind. Whether the author means to do it or not, their outlook and interpretation will be biased by their own beliefs and experiences. Even this blog is biased. I like to think that I make a concerted effort to approach things in an open and objective fashion, but inevitably my own beliefs on certain things will bleed through. Bias can be something small and innocuous (e.g. I notice that the reviews on my blog are biased towards equipment that I actually like, as I usually don't use something that I don't like for long enough to be bothered reviewing it). Or it can be more insidious and intentional. As the old saying (usually attributed to Disraeli and popularised by Mark Twain) goes, "there are lies, damned lies, and statistics" - it is frighteningly easy to twist numbers to make a particular point. Somebody with a background in statistics can spot some of the dodgy ways that the media fudge the numbers, but most people will happily take the banner headline at face value. Numbers and statistics (even if they're a crock) just make it look more truthy. "Scientists say"...
  3. You're relying on the fact that the research itself is good. Not all research is created equal. Some published work should really be taken with a large pinch of salt.
There are various ways in which research can be deemed to be poor science, or certainly not worth a "Cure for Cancer" headline just yet. Or the research might be absolutely fine, but if it flies in the face of all other research to the contrary then perhaps it's worth being a little more critical. In fact, my advice is to follow the ABC of science: Always Be Critical. Ask yourself:
  1. Where was the paper published? Journals have a measure called an Impact Factor attached to them, which is basically a measure of how often the papers they publish are cited by other researchers - in other words, how much the rest of the field trusts and builds on their work. Journals like Nature and Science have incredibly high Impact Factors, and only the best research (which has to pass very tough criteria to even be considered for publication) is published in them. The decision to publish follows a process known as peer review - a manuscript is submitted and sent out to a number of experts in the field who decide whether it is suitable for publication (and whether it requires any additional work to meet the requirements). There are potential issues with this process, and there can be a lot of politics involved, but generally it is a fair way to ensure that published work is of a high quality. Nature rejects about 95% of submitted manuscripts (I'm very proud of the fact that I have several Nature papers to my name), but unfortunately you can probably find a journal somewhere willing to publish your work, even if it is rubbish! So be aware that just because a piece of work is in a journal, it doesn't mean it is accurate.
  2. When was the paper published? Science changes. That's, like, its main thing. As new research comes to light, theories change and adapt. A paper published in the 60s is likely to have been superseded by more recent data (research from 10 years ago is often already outdated in the field of genetics, for instance), so it shouldn't carry too much weight when compared with more recent work.
  3. Does the data actually say what the authors say it does? I published a post about some work from Dr James O'Keefe claiming that running marathons is bad for your heart. However, I fundamentally disagree with his interpretation of the data, insofar as I believe that some of the papers that he references show the complete opposite of what he claims. Often data can be open to differing interpretations, or can be fundamentally flawed (see my point in the above link about the way that they normalised the data in the European Heart Journal paper, which basically says "if we ignore all of the benefits of running, running has no benefit"). Another big one is falling into the old logical fallacy of assuming that "correlation implies causation". Just because two things are correlated, it does not mean that one causes the other. There is a pretty good correlation (better than I see in some research papers) between the number of Nobel Prize winners and the amount of chocolate eaten per capita - does this mean that eating chocolate makes you clever? Of course not. It's more likely that the amount of chocolate eaten by the population of a nation is related to their average wealth, which also affects the schooling system and access to universities (there's a short sketch of this confounding effect just after this list).
  4. What was the sample size? One of the Nature papers that I worked on used a sample size of 19,000 individuals - 2,000 people for each of 8 common diseases, and a group of 3,000 control individuals. Even with that huge number, our statistical power was not strong enough to detect everything. In contrast, the La Gerche paper (2011) that was touted by the media as showing that marathon running is bad for your heart used 40 individuals (of which only 7 were actually marathon runners). Just sayin'. Obviously there is more funding available for disease research than for sports research, so we can't expect such high sample sizes, and smaller sample sizes may be all that is needed to detect the kind of changes that we want to see. But these tiny studies still cover only a fraction of the population and should not automatically be taken as applicable to the population as a whole. And of course the worst case is to take anecdotal evidence (a sample size of one) as being anything other than an interesting insight. (There's a quick power simulation just after this list showing why small studies so often miss real effects.)
  5. Was the experiment well designed to avoid placebo effects? The placebo effect is incredibly strong, probably more so than you think. Did you know that 2 sugar pills are better for you than 1? Or that an injection of saline is better for you than a sugar pill? If your brain thinks it's gaining a benefit, it will gain a benefit. So it is important not to mix up a real effect of, say, compression gear with a placebo effect. That's not to say that there isn't a real effect, but thus far no experiments have been successfully designed to really separate the two (it's very difficult to design a non-compression control scenario as it's pretty obvious whether or not you're getting compression gear). The closest I could find was from Hamlin et al (2012), "Effect of compression garments on short-term recovery of repeated sprint and 3-km running performance in rugby union players", J Strength Cond Res. 26(11):2975-82, which compared the performance of rugby players in repeated sprint intervals wearing compression tights versus a set of non-compression control tights, to see if the compression gear prevented fatigue. The rugby players weren't told which was which, although I suspect it was pretty obvious. But if you look at the results, there really wasn't a significant difference between the two. Their conclusion is that it is "likely to be worthwhile, and unlikely to be harmful", which is a bit weak if I'm honest.
  6. Was the study performed in a model organism? Ethical questions aside, work performed in animal models can be incredibly useful at directing research in humans. The mouse genome shares many of the same genes as the human genome, and many advances in disease treatment for humans have come about as the direct result of studies on animals. That being said, a mouse is not a human - what is true in a mouse model may not be true in a human.
  7. Are there any conflicts of interest? Most journals require you to declare any potential conflicts of interest which might bias your opinions, or potentially even dictate the direction of the project. These are usually related to funding bodies, or to an author having some kind of monetary incentive for the research (if they are a shareholder in the company selling the proposed cure for cancer, for instance). It is hardly surprising that such conflicts of interest exist - the money has to come from somewhere: it is in the interest of Lucozade to study the effects of sports drinks, for instance, and it's unsurprising that researchers studying the negative effects of eating meat are often vegetarian. But you should certainly be aware of them.
  8. Has the research been independently verified? One important aspect of good science is to make all of your data available to allow other groups to replicate your analyses. Open access to data is one of the most important aspects of science in the internet age, in my opinion. Also, all protocols that were followed should be carefully described so that other groups can try them out for themselves. If a completely independent group is able to reproduce the same results, that lends huge credence to the original findings.
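To make the correlation/causation point (number 3 above) a little more concrete, here is a minimal Python sketch using entirely made-up numbers - nothing from any real dataset, the variable names just echo the chocolate/Nobel example. A hidden "wealth" variable drives both chocolate consumption and Nobel prizes, so the two come out strongly correlated even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 imaginary countries; all numbers below are invented purely to
# illustrate the mechanism, not taken from any real study
n = 50
wealth = rng.normal(size=n)                    # hidden confounder
chocolate = 2.0 * wealth + rng.normal(size=n)  # driven by wealth, not by prizes
nobels = 1.5 * wealth + rng.normal(size=n)     # also driven by wealth

print(f"Correlation between chocolate and Nobels: {np.corrcoef(chocolate, nobels)[0, 1]:+.2f}")

# Remove the wealth effect (here we cheat and use the true coefficients;
# in a real analysis you would estimate them, e.g. with a regression)
choc_resid = chocolate - 2.0 * wealth
nobel_resid = nobels - 1.5 * wealth
print(f"After accounting for wealth: {np.corrcoef(choc_resid, nobel_resid)[0, 1]:+.2f}")
```

The raw correlation typically comes out around 0.7 or higher; once the shared wealth effect is removed it drops to roughly zero. The correlation alone can't tell you which of those two worlds you're in.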
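And to put the sample-size point (number 4 above) in perspective, here's a rough simulation sketch - again with an invented effect size, not numbers from any of the studies mentioned - showing how often a study of a given size detects a genuine but modest effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def detection_rate(n_per_group, effect_size=0.3, n_trials=2000, alpha=0.05):
    """Fraction of simulated studies that report p < alpha for a real effect.

    effect_size is the true group difference in standard-deviation units
    (Cohen's d); 0.3 is just an illustrative "modest" effect.
    """
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:
            hits += 1
    return hits / n_trials

for n in (20, 100, 1000):
    print(f"n = {n:4d} per group -> power ~ {detection_rate(n):.2f}")
```

With 20 people per group, a real effect of this size is missed far more often than it is found; with 1,000 it is picked up almost every time. So a small study reporting "no significant difference" is weak evidence that there is no effect at all.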
For instance, this figure did the rounds recently and was touted by many people as evidence of a non-refined plant-based diet being better for you (eat more unrefined plant products and you're less likely to die of cancer and heart disease). Largely I approve of the sentiment here (whilst I am not a vegetarian, I do believe that many people could do with improving their diet - myself included), but:
  1. I doubt that the effect is this large. A quick look at the Centers for Disease Control and Prevention website indicates that the proportion of deaths attributed to heart disease or cancer is probably less than 50%, not 80% as shown here, so we can already see that these numbers are off. Also, most disease prevalence is demonstrably a combination of genetic factors and environmental factors, with genetic factors making up a large proportion of that.
  2. It's too perfect. Real numbers rarely work like that so it instantly raises alarm bells in my mind.
  3. There are some seriously dodgy stats at play here. One problem is that this falls into the trap of assuming that because there is a correlation, there is a causal relationship between the two. There could be some other effect at play - for instance there is a clear financial axis, with the countries on the right typically poorer than those on the left. But the main problem is this: where the hell are the other ~180 countries?! This is a pretty horrible case of cherry picking the data. You could probably pick 12 countries in such a way as to make almost any two measurements appear to be correlated (there's a quick demonstration of this in the sketch below), and in fact this was done here to show that the forest area of a country is inversely proportional to the number of maternal deaths. As the blogger points out, this means that "we must plant trees in order to save the poor mothers!"
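To show just how easy that kind of cherry picking is, here's a throwaway Python sketch on completely synthetic data (the variable names just echo the forest/maternal-deaths example): generate two unrelated measurements for 180 imaginary countries, then hunt for a subset of 12 that looks impressively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two completely unrelated measurements for 180 imaginary countries
n_countries = 180
forest_area = rng.normal(size=n_countries)
maternal_deaths = rng.normal(size=n_countries)

r_all = np.corrcoef(forest_area, maternal_deaths)[0, 1]
print(f"Correlation across all {n_countries} countries: {r_all:+.2f}")

# "Cherry pick": try lots of random subsets of 12 and keep the most convincing one
best_r = 0.0
for _ in range(20000):
    subset = rng.choice(n_countries, size=12, replace=False)
    r = np.corrcoef(forest_area[subset], maternal_deaths[subset])[0, 1]
    if abs(r) > abs(best_r):
        best_r = r

print(f"Best correlation in a hand-picked subset of 12: {best_r:+.2f}")
```

The full dataset shows essentially no relationship, but the hand-picked dozen typically shows a correlation well above 0.8. Twelve carefully chosen countries can "demonstrate" almost anything.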
It's interesting to understand how science actually works, and what distinguishes it from so-called pseudo-sciences. The important distinction is the idea of falsifiability - you should be able to design an experiment to prove it wrong. An example of a pseudo-scientific idea is Bertrand Russell's Celestial Teapot. He hypothesised that there was a china teapot in orbit around the sun, but one too small to be observed by any telescope. This is an example of a hypothesis that cannot be refuted - how can you design an experiment to show that it isn't there? Can't see it on a telescope? Then your telescope isn't powerful enough. Can't find it by sending a probe out to look for it? Then you're looking in the wrong place. It is unfalsifiable. It is also, obviously, ridiculous (as Russell was well aware). The point here is that this hypothesis adds no additional benefit to any predictive models, and isn't necessary to explain the observed data. The burden of proof lies with the person making such unfalsifiable claims.

And it's important to be clear on what a theory actually is; it is not just a guess, as the colloquial use of the term may suggest. A scientific theory is a very structured framework which not only explains the observed data, but has also been repeatedly confirmed by numerous rigorous tests, and has successfully made predictions about what you should expect to see. The Theory of Evolution is very robust, has stood up to many tests to disprove it, and has made numerous accurate predictions. Don't let the word "theory" fool you into thinking that it is somehow soft. If you believe that, go and jump off a cliff and see how soft the Theory of Gravity is. Also, don't let the fact that science changes and adapts fool you into thinking that this makes science somehow weak. "Don't believe scientists, they used to believe that the earth was flat". Yes, but now they don't. We also used to believe that Pluto was a planet, but have since updated our views having discovered numerous similarly sized objects in the outer reaches of the solar system. Yet astrologers still assign significance to Pluto and not to the hundreds of other planetoids that have since been discovered.

It's not a perfect system, and of course it makes mistakes, but it's the best system we know of for gradually improving our understanding of the universe. Can you think of a better one? The scientific method is how scientific theories are formulated and tested. It goes something like this:
  1. Make observations and gather up all of the evidence that exists
  2. Formulate the most parsimonious model possible that explains the observations (a hypothesis)
  3. Use the hypothesis to make predictions
  4. Test the predictions in an attempt to falsify the claims of the hypothesis
  5. If the hypothesis is falsified, reformulate it to include the latest data
  6. If the hypothesis stands up to rigorous testing, and is able to make accurate predictions, it can be incorporated into a scientific theory
The important thing here is that a model based on observations will not (and in fact cannot) be proved. It simply has not (yet) been disproved. Science doesn't go about proving things. It's impossible to do. What it does is gradually lead us closer and closer to the truth (TM). But we will never know if we have reached the truth (TM) - our attempts to falsify it will just keep failing. If anybody ever points you towards a paper that claims to prove something, immediately be highly skeptical. Also important is the fact that the hypothesis or theory can make predictions about the world. Otherwise what's the point? 

One concept to be mindful of is the idea of parsimony; a hypothesis need only be complicated enough to explain all of the available evidence - no more. The old phrase is "when you hear hooves, think horses not zebras". That is; why add in the extra assumption that the sound is coming from a pretty rare hooved animal rather than a pretty common one, until there is evidence to suggest otherwise? 

I guess that my point is that people need to be a bit more critical about the sort of things that they read and believe, and not just take things at face value. Of course you should read newspapers and magazines and blogs and posts on Facebook to learn about new research or see other people's opinions on science that interests you. But go away and check the primary source to see if it does actually say what they claim. Take a look at where the research came from - are there any sources of bias that might influence the results? Have a look at the research on the other side of the argument - is it equally believable? How was the study designed? Can you see any flaws in the methodology? Be aware of the common ways that stats can be used to dupe you. And don't be taken in just because a post uses a lot of references (my own posts included). It's very easy to pretend that you know what you're talking about by sticking references to lots of papers everywhere (it makes it look all "sciencey"), but if those references don't actually back up the points being made then this gives an entirely false sense of authority. Also, don't take their word for it that the papers even say what they claim - go away and read them for yourself (or at least take a look at the abstract).

And don't believe things just because "scientists say". Scientists say a lot of shit, believe me!

And most importantly, never believe things that you read on the internet.

Except for this. This is golden. You can trust me. I'm a scientist...
