
PUBLISH OR PERISH

The Credibility Vacuum in Scientific Research

One afternoon in Tübingen, neuroscientist and university professor Martin Druffler* had a novel idea: talking to paralysed individuals with severe brain injury. The patients of interest were in a so-called complete locked-in state - conscious, yet unable even to move their eyes or otherwise communicate. Then, in 2017, Druffler showed through a computer-based analysis of blood flow (functional near-infrared spectroscopy) that this near-impossible phenomenon - communication - seemed to be possible.

These were exciting times - revolutionary findings, even - but there was a problem: another researcher was unable to reproduce the results and called Druffler's research into question. The fallout was dramatic. The case raised legal and constitutional issues rooted in accusations that he had mishandled data: cherry-picking, missing data, and incorrect data analysis. In the end, severe restrictions were placed on Druffler's work at the university.

This ‘replication crisis’ is not an exception in scientific research. Incongruences are not, as we like to tell ourselves, the work of a few bad apples. When the Open Science Collaboration attempted to replicate 100 existing psychology studies, 61 failed to match the original results, and more than a dozen were not similar at all. Similarly striking results were found in a separate study by Garcia-Berthou & Alcaraz, who investigated the consistency of reported statistics in articles from two leading science journals: Nature and the British Medical Journal (BMJ). At least one incongruence was found in 38% of papers, implying that poor practice had probably been present.

"At least one incongruence was found in 38% of papers implying that poor practice had probably been present."

To be clear, the inability to replicate is not usually a matter of fabricated data. Although most incongruences arise innocently, they are symptoms of a more systemic problem within scientific research. Researchers usually have significant freedom in the way they prepare, analyse and report their data, and this freedom can manufacture differences between data groupings where none exist. Sometimes a very unusual or unique finding is simply the product of a small sample size: the smaller the sample, the larger the role played by chance, so an extreme, apparently significant result is more likely to be a fluke than a genuine effect. Worse, some authors may also cherry-pick, excluding data that contradicts their hypothesis and undermining the integrity of the research.
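To make the sample-size point concrete, here is a minimal simulation sketch. The scenario and numbers are illustrative assumptions of my own, not drawn from the studies above: it supposes that only 10% of tested hypotheses describe a real effect, runs many simulated studies, and asks what fraction of 'significant' results are false alarms at different sample sizes.

```python
# A toy model of hypothesis testing. Assumptions (mine): 10% of tested
# hypotheses are truly non-null, the real effect size is d = 0.5, and
# every study runs a two-sample t-test at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def share_of_false_positives(n_per_group, n_studies=5000,
                             prevalence=0.10, effect=0.5):
    """Among studies reaching p < 0.05, what fraction tested a null effect?"""
    false_hits = true_hits = 0
    for _ in range(n_studies):
        real = rng.random() < prevalence            # is there a true effect?
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect if real else 0.0, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < 0.05:     # "significant" finding
            true_hits += real
            false_hits += not real
    return false_hits / max(false_hits + true_hits, 1)

for n in (10, 50, 200):
    print(f"n = {n:>3} per group: ~{share_of_false_positives(n):.0%} "
          "of significant results are false alarms")
```

In this toy world, around 70% of the 'significant' findings at 10 participants per group are false alarms, falling to roughly 30% at 200 per group. The significance threshold never changes, but small, under-powered studies detect so few real effects that flukes dominate the results.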

Results can also be misrepresented by the media. For instance, headlines claiming that smokers are less likely to contract COVID-19 have been widely publicised alongside articles claiming the contrary. Both continue to receive widespread attention despite neither aligning with the underlying research results or the articles' own conclusions: in each case the title fails to reflect the true content of the study being reviewed. Headlines routinely misconstrue science - they are often inaccurate and cast research inauthentically (abstracts of papers often commit the same offence).

By distorting research, the media strains the relationship between researchers and the public. On the one hand, researchers want to make science easy to understand whilst keeping it interesting and exciting. On the other, time is limited, and pre-tenure scientists in particular experience enormous pressure to perform and to discover something spectacular. Most researchers are on temporary contracts, and only a few ever secure permanent positions in an incredibly competitive academic environment. Scientists must work hard to win grants and prove that their research is worth supporting. Non-significant results are usually not deemed worthy of publishing and typically earn little to no reward. More often than not, universities have steep hierarchies, classifying PhD students as lesser researchers with little to no power or influence over their pedestalled professors.

"...investors profit from the prestige of researchers associated with their pharmaceutical products..."

A similar order of precedence shapes the toxic relationship between sponsors and researchers. Typically, the professor wants their own research agenda to be the sole focus of their junior colleagues - and that is before we discuss the sources of private or public funding. The problem is especially rife in clinical research on prescription drugs, conducted for drug companies. Such sponsors often have huge, and growing, influence over how research is designed, conducted, analysed and published.

In turn, investors profit from the prestige of researchers associated with their pharmaceutical products, which lends them more credibility and their product greater popularity. This clear conflict of interest embeds a self-fulfilling prophecy into research. A pharmaceutical company could, for example, offer funding support on the condition that the sample size stays small, skewing research results by raising the chance that any statistically significant finding is spurious and its effect exaggerated.

Acknowledging these factors, several clear ways to fight the credibility crisis emerge. While it is important to resist shaming science, we need to make researchers more sensitive to methodological mistakes, to the structures that invite data manipulation, and to the consequences of poor reproducibility. There are concrete steps to be taken. Methodological credibility can be earned when scientists pre-register their research: by specifying goals, hypotheses, data acquisition and analysis beforehand, they leave less room for changes after the data have been acquired. This rules out post-hoc selection of data and shifting hypotheses, and it incentivises researchers to explain unexpected outcomes rather than bury them. Researchers can also use platforms which require them to publish their raw data and the methods used to analyse it, recording the exact syntax, so that research is not only more transparent but also far easier to cross-examine and to attempt to replicate - a sketch of what such a published analysis might look like follows below.
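As an illustration, the snippet below sketches what a pre-registered, fully recorded analysis might look like. The plan fields, exclusion rule and synthetic data are hypothetical stand-ins of my own, not a real registry format; a real script would load the exact raw data file deposited alongside the paper.

```python
# A hypothetical sketch of a pre-registered, fully recorded analysis.
# Everything here is illustrative; a real script would read the deposited
# raw data (e.g. with pandas.read_csv) rather than generate a stand-in.
import numpy as np
from scipy import stats

PLAN = {
    "hypothesis": "treatment group scores higher than control",
    "test": "two-sample t-test, two-sided",
    "alpha": 0.05,
    "n_per_group": 60,          # fixed in advance: no optional stopping
    "exclusion_rule": "drop responses faster than 200 ms",  # declared up front
}

# Stand-in for the deposited raw data.
rng = np.random.default_rng(42)
treatment = rng.normal(0.3, 1.0, PLAN["n_per_group"])
control = rng.normal(0.0, 1.0, PLAN["n_per_group"])

# The analysis is exactly the pre-registered test: no peeking, no swapping.
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f} (pre-registered alpha = {PLAN['alpha']})")
```

Because the plan, the data and the exact analysis syntax are published together, anyone can rerun the script verbatim and check whether the reported result follows from the deposited data.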

Readers have responsibilities too. A correlation does not automatically mean that one outcome causes another. When reading articles, people should take into account all the possible results and analyses and examine the author's interpretation in depth. It is important to seek out the original articles rather than rely on abstracts and sensationalised headlines, especially in this troubling era of fake news.
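To see why correlation alone proves nothing about causation, consider this minimal simulation - a toy example of my own, not from any study cited here - in which two outcomes that share a hidden common cause correlate strongly even though neither influences the other.

```python
# A toy example (numbers are mine): two outcomes driven by the same hidden
# cause correlate strongly, yet neither causally influences the other.
import numpy as np

rng = np.random.default_rng(1)
hidden_cause = rng.normal(size=5000)                    # e.g. overall health
outcome_a = hidden_cause + rng.normal(scale=0.5, size=5000)
outcome_b = hidden_cause + rng.normal(scale=0.5, size=5000)

# Prints a correlation of roughly 0.8 despite zero causal link.
print(f"correlation = {np.corrcoef(outcome_a, outcome_b)[0, 1]:.2f}")
```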

"incorrectly computed data, intentionally or otherwise must be treated as a systemic problem, not an exceptional one."

Scientists and scientific journals should give more credit to replications and become platforms for expressing and discussing divergent outcomes, instead of fixating on allegedly significant results. Most degree courses at European universities require a thesis at the end of the programme, intended to give students an insight into research.

Unfortunately, and especially at undergraduate (Bachelor's) level, these research questions usually consist of the professor's own ideas and serve no purpose beyond fulfilling a degree requirement. Universities should give students the chance to reproduce or follow up on pre-existing research. This would be a good way for students to think critically about findings, data acquisition and analysis. Additionally, established scientists should be given more incentives to conduct replication research. Older findings are often treated as gospel when, in reality, replicating those experiments with new technology could produce different and no less interesting results. The purpose of research should be an inexhaustible attempt to depict reality, rather than a competition to redefine it.

Obviously, we should not diminish the credibility of research or incentivise black-and-white thinking. However, incorrectly computed data, intentionally or otherwise, cannot be taken lightly and must be treated as a systemic problem, not an exceptional one. Science already faces enough stigma and mistrust from external sources. We cannot afford acts of self-harm as well.

*Name changed
