Life is not a bowl of cherries. All too frequently, when someone directs your attention to a vape story on a newspaper website, eyes roll to the heavens and angry words are uttered. People acted as though they’d never seen alternative facts before the recent US Presidential election, but we’ve been reading them for years. Are we paranoid? Do papers really keep reporting nonsense? A recent scientific study has looked into this very issue.
The authors, from the Université de Bordeaux, have released a paper titled Poor replication validity of biomedical association studies reported by newspapers. In other words, of the research that newspapers gave coverage to, how much actually stacked up in the long term?
Short answer even shorter: no.
The team “used a database of 4723 primary studies included in 306 meta-analysis articles. These studies associated a risk factor with a disease in three biomedical domains, psychiatry, neurology and four somatic diseases. They were classified into a lifestyle category (e.g. smoking) and a non-lifestyle category (e.g. genetic risk). Using the database Dow Jones Factiva, we investigated the newspaper coverage of each study. Their replication validity was assessed using a comparison with their corresponding meta-analyses.”
The team discovered that newspapers (the study didn’t include television news, but the authors believe TV coverage is led by print) never covered studies reporting null findings. Unless the research offered a juicy, attention-grabbing ‘these things make this happen to you!’ angle, journalists simply weren’t interested. Moreover, if a follow-up study failed to reproduce the same shock results (thereby casting doubt on the initial findings), journalists almost never mentioned it in follow-up pieces. The team write: “This is correlated to an even larger coverage of initial studies in psychiatry. Whereas 234 newspaper articles covered 35 initial studies that were later disconfirmed, only 4 press articles covered a subsequent null finding and mentioned refutation of an initial claim”.
The paper goes on to highlight the bias within the bias: “Newspapers preferentially reported lifestyle association studies linking a pathology to a risk factor on which each reader can act. Non-lifestyle studies related to brain imaging, genetic factor or other inescapable risk factors were less often echoed.”
What does this mean? Well, as we all suspected, coverage of a scientific study (including vape-related ones) only gets into newspapers if there is a strong enough commercial pull. The strength of the story, the value of the science itself, is discounted when weighed against the advertising revenue it stands to generate.
The team state as part of the conclusion: “journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed.”
It is all about shock value, or stories that beg the question “What can I do to avoid this awful thing happening to me?” People have attacked journalists such as The Telegraph’s science editor Sarah Knapton on social media, justifiably, for some woeful coverage. The Bordeaux study adds credence to the concerns held by vaping advocates, even if it doesn’t offer much hope for the future.