Ada Ao, a cancer and stem cell biologist and aspiring science communicator writing for Nature Education's Scitable blog, has put up an interesting post today. She cautions that it is a tirade (according to her, of course; pffft!) against a recently published PLoS Medicine article by Amélie Yavchitz and associates, titled "Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study" (Yavchitz et al., PLoS Med., 9(9):e1001308, 2012).
In explaining the motivation behind the study, the PLoS Medicine Editor's Summary indicates:
Findings of randomized controlled trials (RCTs — studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”...
... which the authors have defined in the PLoS Medicine paper as "specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment". The Editor's Summary continues,
For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment.
“Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”.
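The statistical example in the summary deserves a quick numeric illustration. Here is a minimal sketch (with hypothetical numbers, not data from the study or any trial it examined) of why a non-significant difference is not the same as demonstrated equivalence:

```python
# Hypothetical two-arm RCT (numbers invented for illustration only).
from math import sqrt

n1, x1 = 60, 30   # control arm: 30/60 respond (50%)
n2, x2 = 60, 36   # experimental arm: 36/60 respond (60%)

p1, p2 = x1 / n1, x2 / n2
diff = p2 - p1                                # observed difference: 0.10
se = sqrt(p1*(1 - p1)/n1 + p2*(1 - p2)/n2)    # standard error of the difference
lo, hi = diff - 1.96*se, diff + 1.96*se       # 95% CI, normal approximation

print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# -> difference = 0.10, 95% CI = (-0.08, 0.28)
# The interval crosses zero, so superiority has not been shown (p > 0.05);
# but it also extends well beyond any sensible equivalence margin
# (say +/-0.05), so equivalence has not been shown either. "Not
# significantly different" here only means "not enough evidence".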
To investigate where spin occurs and what effects it has, the researchers led by Yavchitz used two databases, EurekAlert! and Lexis Nexis, to evaluate the presence of spin in 70 press releases and 41 corresponding news reports, associated with two-arm, parallel-group RCTs published over a four-month period. They sought to analyze whether the media coverage misinterpreted and/or misrepresented the RCT results.
The article concluded that about 47-51% of the press releases and news reports covering these RCTs contained spin, and that these occurrences of spin correlated positively with spin in the corresponding article abstracts.
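To make "correlated positively" concrete, here is a minimal sketch of the kind of bivariate association test one might run on such data. The 2x2 counts below are invented for illustration and are not the study's numbers; the authors' actual analysis also included multivariable models that this toy example does not reproduce:

```python
# Hypothetical 2x2 table: does spin in the abstract go together with
# spin in the press release? (Counts invented for illustration only.)
from scipy.stats import fisher_exact

#                 press release: spin | no spin
table = [[28, 7],    # abstract with spin
         [8, 27]]    # abstract without spin

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
# A large odds ratio with a small p-value indicates that press releases
# tend to contain spin when the abstract does -- the kind of positive
# association the paper reports.
```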
Bummer? Does it cast a shadow over half the clinical trials the authors looked at? Does this indicate that these clinical trials are inherently unreliable? Not really, as Ada explains, expressing her indignation at the implications:

Managing the "spin" factor in scientific publishing requires a certain type of finesse. On the one hand, scientists are expected to present their data dispassionately and objectively; at the same time, they are also expected to make their research sound "sexy," or at least relevant and orderly. Scientists must appear to know what they're doing, even though research is a messy, disorganized affair as researchers grope around in the dark in uncharted territory. The unexpected always happens and Murphy's law holds sway. Yet, scientists must appear to be in control and to have an agenda - to understand disease X, or explain phenomenon Y - all to justify public funding, get a paper published, or prop up an image of competence.
So, is there a lot of spin in the published literature? You bet. Does the spin cross from self-promotion to outright fraud? That's a grey area, and like pornography, you know that line's been crossed only when you see it.
Rather eloquently put, I thought, in that last paragraph; Ada seems to have captured the spirit of the matter very well in her 'tirade'. Still, as a professional scientist myself, I wanted to add a bit of clarification, specifically on two points.
First, Yavchitz's study focused on spin in "press releases and news coverage", not on spin in the actual scientific papers. This is an important distinction. Yavchitz and her associates performed bivariate and multivariate analyses to trace the source of the spin in media coverage, and implicated the article abstract (an author-written summary that publishers require and send to indexing services such as PubMed). I submit that the severely abridged nature of the article abstract (often constrained to 250 words or fewer) precludes most mentions of the complexities of the research findings. The abstract, therefore, provides an essentially incomplete picture, and Yavchitz's observations, if anything, highlight the inherent danger of trying to assess the merit of a scientific paper from its abstract alone.
In addition, press releases and news coverage don't necessarily have to serve the truth - though ideally, they should; they serve different masters (such as, say, commerce, popularity, or the attention of funding agencies). In contrast, the only allegiance a scientific paper has (or should have) is to the empirical evidence, and in that format there is not much room left for spin.
Every paper tries to tell a story coherently; the introduction and discussion sections are used to lay out the available evidence and explain the observations. While it is true that conscious or unconscious bias on the part of the authors may creep into the interpretation of the observed data, the beauty of a scientific paper is that it still contains the results section with raw and/or derived data; with Open Access publishing, more and more publishers are also enabling authors to make supplementary data available to others. This allows independent scrutiny and evaluation of the observations.
Therefore, when we assume the role of scientists and read a paper, we must delve into the actual results and judge the authors' interpretations for ourselves. If we find a contradiction, or an unsatisfactory point, we must question the author(s) - another process made considerably easier by Open Access publishing.
All this is to say that the chances of spin influencing scientific papers are minimal, given the intense scrutiny they receive before and after publication. Ada brings out this fundamental point about peer-reviewed, published scientific papers when she writes:
There's... an unspoken expectation for the readers to look at the data presented and draw their own conclusions. It's like every paper comes with a presumption of guilt, and the reader's job is to prosecute the hell out of it...
That means applying the same level of common sense and skepticism that we may apply to other aspects of our lives. A science paper isn't meant to tell you what to think, it's meant to be prosecuted vigorously based on the evidence presented.
It's not just the sundry readers. As I have written elsewhere, the scientific process demands independent verification and/or replication of the results by, say, other groups. This iterative process distils out scientifically tenable propositions, which, eventually, no amount of spin can influence.
The readers of scientific papers (among which I don't, of course, include press releases and news coverage) may be of two types: (a) scientists, or others with some expertise in science and the scientific process (such as veteran science journalists), and (b) the general public. There is an important distinction between the two. Ada understands this; she writes:
... The public was never told, point blank, to read between the lines and seriously critique a paper...
However, the reason for that - to her mind - which is...
... because that would contradict the dispassionate persona science has maintained in the public consciousness — science is supposed to be the distiller of truth.
... is probably not the right way to put it. To my mind, the reason the general public isn't expected to seriously critique a scientific paper is essentially one of expertise and specialization, in the same way the general public isn't expected to argue about the finer points of law, the intricacies of economics, or procedures in medicine. This is why the general public relies largely on press releases and news coverage to become aware of scientific undertakings and facts.
And this, right there, adds to the responsibility of scientists, who must engage enthusiastically in science communication and education - beyond what they normally do, i.e. the investigation of natural processes. Scientists must be in a position to speak to the general public, explaining processes, interpreting scientific data, and correcting misrepresentations - in other words, generally counteracting spin. Public engagement and science education have never been more crucial, and scientists must take the lead.