Bad science methods

Bad science is a term for practices within the scientific publication system that amount to intentional or unintentional fraud and misconduct.
Some scientists, and others, use these methods to maintain their position in the scientific community or to obtain grants for their research. This leads to contradictory findings and, in some cases, for example in medicine, can result in the introduction of dangerous drugs or harmful therapies. Journalists and popular media channels, such as science blogs and webpages, also often misinterpret or exaggerate the results of studies to make them more attractive to the public. The peer review system does not prevent misleading publications. Publication bias is one of the most common and most important factors distorting research outcomes.
Prevalence
As research suggests, bad science practices are common and widespread, and in specific situations they become a source of serious problems; misleading data can, for example, lead to the approval of ineffective or dangerous medicines. In 2005 John Ioannidis examined the forty-nine most cited, high-impact studies published in several major medical journals between 1990 and 2003. Of the forty-five studies claiming that a medical intervention was effective, he found that "7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged". Only twenty of all the studies he checked were later replicated, which means that more than half of published findings about medical interventions are untrustworthy, even among randomized trials.
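A quick check of the arithmetic behind that last conclusion (our own calculation, not a figure from the paper):

    49 − 20 replicated = 29 not confirmed, and 29/49 ≈ 59%

so roughly three of every five of these highly cited findings were never confirmed by subsequent research.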
In 2009 Daniele Fanelli reviewed surveys on falsification and fabrication in science. Nearly 2% of scientists admitted to having committed a serious form of misconduct at least once, and up to 34% admitted to other questionable research practices, a finding that confirmed previous surveys. However, when asked about the behavior of colleagues, respondents reported falsification by about 14% of them and other questionable research practices by up to 72%. Most misconduct is probably committed by male scientists.
The situation is similar in forensic science, for example in the USA, where much of the work performed by crime laboratories suffers from flawed analysis, contamination, or outright lying.
Some research argues that 79% of published articles in neuroscience, and 92% of those in the brain imaging field, are misleading because of small samples of participants.
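A minimal sketch of why small samples are so damaging, assuming a two-sample t-test and a "medium" true effect of Cohen's d = 0.5 (the effect size and group sizes below are illustrative, not taken from the studies above):

    # Statistical power of a two-sample t-test at small sample sizes;
    # the effect size is an assumed value for illustration.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n in (10, 20, 50, 100):  # participants per group (hypothetical)
        power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
        print(f"n = {n:3d} per group -> power = {power:.2f}")

With 20 participants per group the power is only about 0.33, so two out of three real effects of that size are missed, and the studies that do reach significance tend to overestimate the true effect.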
Nearly all papers in economics could be wrong because of publication bias and other misconduct.
In 2013 Randall and Gibson examined ninety-four studies of the ethical beliefs and behavior of organizational members and found that the "majority of empirical research articles expressed no concern for the reliability or validity of measures, were characterized by low response rates, used convenience samples, and did not offer a theoretic framework, hypotheses, or a definition of ethics". Fewer than half of the examined articles provided full methodological detail.
More than half of biostatisticians know of fraudulent projects in medical research.
Another bad science practice is hiding raw data. Scientists often do not share their data when the evidence is weak; about 30% of all articles published in high-impact journals do not share raw data.
Surrogates in research
In many studies scientists use surrogates and generalize the observed effect to the whole population, even though this practice is questionable. In social science and psychology students are a popular surrogate, and some research finds that the outcomes of studies with student participants differ from those with non-students. Students can be considered a good surrogate only in specific situations.
In some trials the participants do not form a representative sample of the whole population; many of them are homeless and poor people, or people addicted to alcohol or drugs. In medical research, especially in pharmaceutics, a variety of mammals are used as surrogates for humans. Mammalian physiology differs across species, and sometimes a drug that looks safe in animals proves dangerous for humans. The best-known example is the drug TGN1412, which almost killed several volunteers in 2006 even though a much larger dose had been safe in animals. Around three out of four drugs that looked safe in animal trials are rejected in phase I human trials. Research indicates that animal models can fail to mimic human clinical diseases. One systematic review revealed that only about one third of influential animal drug studies later translated into approved treatments for people, partly because the animal trials had been performed with less care. Most animal trials lack methodological quality, for example in the case of testing stroke drugs.
Poor statistical standards
In the social sciences about 17-25% of all findings are probably false because of poor statistical standards and publication bias. In psychology 15% of published papers may contain a statistical error that changes the conclusion of the paper.
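One common way to reason about such figures (in the spirit of Ioannidis's positive-predictive-value argument, though not necessarily the method behind these particular estimates) is to compute the share of statistically significant results that reflect true effects. A sketch with illustrative numbers:

    # P(hypothesis true | significant result), ignoring outright bias.
    def positive_predictive_value(prior, power, alpha=0.05):
        true_positives = prior * power          # true effects detected
        false_positives = (1 - prior) * alpha   # nulls passing p < alpha
        return true_positives / (true_positives + false_positives)

    # Assumed values: 1 in 10 tested hypotheses is true.
    print(positive_predictive_value(prior=0.1, power=0.5))    # ~0.53
    print(positive_predictive_value(prior=0.1, power=0.2))    # ~0.31
    # With the very low priors of candidate-gene studies (see below):
    print(positive_predictive_value(prior=0.001, power=0.5))  # ~0.01

Under these assumptions even honest, bias-free research yields false positives among half or more of its significant findings, and with a very low prior nearly every significant result is false.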
In genetics many studies try to link specific genes with diseases or with other biological and even psychological factors such as personality traits. Such findings should be treated with caution, however, because of the complexity of possible factors. Mathematical analyses have shown that small genetic studies suffer from low statistical quality and are less replicable. In 2008 Nicole Allen and colleagues calculated that in the case of schizophrenia only one study in a hundred could be right in pointing to an association between a gene combination and the disease. Similar odds have been calculated for Alzheimer's disease, Parkinson's disease, several other diseases, and associations with violence and aggression or race. Even such calculations, however, can suffer from inconsistent results and interpretations.
Genomic methods fail in the case of predicting human height: research has shown that genes correlated with height can explain only 4-6% of the variance, compared with 40% for the 125-year-old technique of averaging the heights of both parents.
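For reference, the old technique in its common clinical form (Galton's mid-parental height); the ±6.5 cm sex adjustment is the usual clinical convention and an assumption here, not something stated above:

    # Mid-parental target height; the 6.5 cm adjustment is the
    # conventional clinical value, assumed for illustration.
    def midparental_height_cm(father_cm, mother_cm, child_is_male):
        midpoint = (father_cm + mother_cm) / 2
        return midpoint + 6.5 if child_is_male else midpoint - 6.5

    print(midparental_height_cm(180, 165, child_is_male=True))   # 179.0
    print(midparental_height_cm(180, 165, child_is_male=False))  # 166.0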
The peer review system does not prevent the publication of articles containing statistical errors.
Mismeasurement
Measurement is essential for science, but mismeasurement sometimes leads to problems and misleading findings. In 2008 scientists found a temperature bias when they noticed that records of oceanic temperatures had been made with different methodologies in different periods of the twentieth century. In medicine measurement bias can distort the final results of trials, for example in measurements of blood pressure, and in individual cases can lead to misdiagnosis, for example of children's weight and height.
Epidemiological studies are very often interpreted badly because of the large number of possible interactions between confounding factors; despite this, such studies are the basic reference for health and diet recommendations. For example, one big study found that red meat consumption is associated with an increased risk of all-cause mortality in humans, while another recent big study suggests that processed meat rather than red meat is the cause, and yet another implicates both. For years overweight and obesity were considered a main cause of chronic diseases, but some recent studies suggest that overweight is only incidentally correlated with such events, and that the main causes are rather lack of physical activity and chronic inflammation.
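How easily a confounder can manufacture an association is shown by a toy simulation (all variables invented for illustration):

    # "Exposure" and "outcome" share a common cause but have no direct
    # link; their observed correlation is nevertheless far from zero.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    confounder = rng.normal(size=n)             # e.g. overall lifestyle
    exposure = confounder + rng.normal(size=n)  # driven by the confounder
    outcome = confounder + rng.normal(size=n)   # driven by it as well

    print(np.corrcoef(exposure, outcome)[0, 1])  # ~0.5, entirely spurious
    # Removing the confounder's contribution (known here by construction)
    # makes the association vanish:
    print(np.corrcoef(exposure - confounder, outcome - confounder)[0, 1])  # ~0.0

In real epidemiology the confounder is unknown or imperfectly measured, which is why adjustment is much harder than in this toy case.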
Conflict of interest
Research shows strong correlations between positive findings in medicine and industry sponsorship, and studies rarely report such conflicts of interest. Some companies pay university researchers merely to sign research that the company itself conducted, so-called ghostwriting, which blurs the real scale of the problem. In biomedical research the rate of conflict of interest is about 30%. Every one of the 170 psychiatric experts who contributed to the DSM-IV had financial ties to manufacturers of psychiatric drugs.
Replication of findings
The replication of findings is at the core of modern science. However, some studies have found that most published findings are not replicated, and even when they are, it is hard to publish a replication study in a high-impact journal. Wrong findings persist in the literature for a long time, and the retraction rate in prestigious journals is lower than should be expected. Because so many studies are published, scientists have problems keeping up with recent findings.
Real-life consequences
The poor quality of published studies and bad science methods can have real-life consequences. In psychology, for example, many single studies have shown that males and females differ substantially, yet more reliable studies, meta-analyses, and reviews clearly show that gender differences are minor and exaggerated. Many single neuroimaging studies have likewise suggested that gender differences in human psychology can be observed at the brain level, and again systematic reviews show that such differences in brain physiology are rather artifacts and simply do not exist. One study found that reported sex differences in the relative size of the human corpus callosum are greater in research with smaller samples, and another observed that most studies of any group differences in corpus callosum size are of poor quality, with conflicting findings. Nevertheless, such untrustworthy findings can support discriminatory beliefs such as sexism.
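The small-sample pattern in the corpus callosum literature is exactly what significance filtering predicts; a toy simulation (with an invented true effect size) shows the mechanism:

    # Only runs whose sample difference happens to be large clear the
    # p < .05 bar, so significant small studies overstate the effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_d = 0.2            # assumed small true group difference
    for n in (15, 500):     # participants per group (hypothetical)
        significant = []
        for _ in range(5_000):
            a = rng.normal(true_d, 1, n)
            b = rng.normal(0, 1, n)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                significant.append(a.mean() - b.mean())
        print(f"n = {n:3d}: mean significant effect = "
              f"{np.mean(significant):.2f} (true = {true_d})")

With 15 participants per group the average significant difference comes out around 0.8, four times the truth; with 500 per group it sits close to the true 0.2.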
In psychology, questionable surrogates, surrogate measures, and small participant samples often lead to false positive findings. Even big reviews and meta-analyses can draw contradictory conclusions from such findings because of publication bias and the lack of statistical standards. For example, one recent meta-analytic review showed that women clearly change their mate preferences across the menstrual cycle, while another meta-analysis observed no such effect.
In the agricultural sciences one big study showed that organic food contains more nutrients and fewer pesticides than conventional food, another showed that this is not true, and yet another concluded that the evidence is too weak to draw any strong conclusion.
Some research has found that 50% of drug trials reporting treatment efficacy and 65% of drug trials reporting harm outcomes were incompletely reported, and that scientists have a tendency to exclude inconvenient data. About 31% of antidepressant studies are not published because of publication bias, most of them studies showing the drug to be ineffective, and the overall efficacy of antidepressants is questionable because of the poor quality of the evidence. The effects of such practices can be harmful: some physicians have reported that in Europe 50% of available and approved drugs are useless, 20% are poorly tolerated, and 5% are potentially very dangerous, causing 20,000 deaths yearly in France. Even the efficacy of major drugs cannot be generalized to the whole population; they work in only 25-60% of patients.
