In an increasingly competitive science talent market, publication counts are valued far more highly than moral fiber. Who reads your work and which journals print your manuscripts really matters when you're fighting for an important or influential job [read: a position ineligible for a student discount]. Unfortunately, the only thing that's truly publishable is a study boasting significant differences or relationships among variables. The standard used to guard the term significant, a p-value less than 0.05, is an arbitrary cutoff, and it doesn't even mean what most people think it means: it says that if there were no real effect at all, data this extreme would still turn up less than 5% of the time by chance alone. Having the designation of being !STATISTICALLY SIGNIFICANT! is also usually the only way to get your findings in front of your peers whatsoever.
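Don't take my word on what that cutoff actually buys you. Here's a quick back-of-the-envelope simulation (Python with numpy/scipy; the group sizes and seed are made-up numbers, purely for illustration) showing that even when there is no effect whatsoever, the magic asterisk shows up about 5% of the time:

```python
# What p < .05 actually promises: if there is NO real effect,
# you still cross the threshold about 5% of the time by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000
n_per_group = 30          # illustrative sample size
false_positives = 0

for _ in range(n_experiments):
    # Two groups drawn from the SAME population: the null is true.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: "
      f"{false_positives / n_experiments:.1%}")  # ~5%
```

Note what the 5% is: it's the false-alarm rate when nothing is going on, not a 95% guarantee that any given starred finding is true.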
This system sucks balls for several reasons:
1) You might find the cure for cancer or AIDS, but unless your study makes the statistical cut, it's probably going to become a story never told. That treatment that dramatically improved symptoms for a large proportion of individuals in a study? Not valid unless you've got that coveted asterisk (*) hovering over some graphs.
2) This case-by-case system does not account for data and studies in the aggregate. A small effect that emerges consistently across a number of studies with variable statistical power (possibly due to methodological limitations) might reflect a real relationship, but current statistical standards can keep that connection between meaningful variables from ever being recognized (see the simulation after this list).
3) Studies with nonsignificant findings are just as informative as studies that demonstrate significant differences. In a culture where only select information (i.e., significant findings) is disseminated to the scientific community, the same dead-end study is bound to be run time after time by labs that have no way of knowing it already failed. It's a waste of time and money to let studies with null findings quietly die rather than be shared with the community.
4) This system sometimes forces scientists to make hard choices between success (publishing statistically significant findings) and all-out truthfulness. Statistics can easily be misleading or twisted to suit the purposes of the author, in part because graduate training emphasizes area-specific knowledge and methodology rather than an in-depth understanding of the appropriate use, implementation, and interpretation of statistics.
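To make point 2 concrete, here's a little sketch (again Python/numpy/scipy; the effect size, study count, and sample sizes are numbers I invented purely for illustration): a small but real effect, twelve underpowered studies. On most runs, only a couple of the individual studies clear p < .05, while crudely pooling the raw data across all of them finds the effect easily.

```python
# Point 2 in action: a small but real effect (d = 0.3), twelve
# underpowered studies of n = 30 per group. Individually, most
# studies miss p < .05; pooled together, the effect usually shows up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
d = 0.3     # small, real group difference (in SD units)
n = 30      # per group, per study -- badly underpowered for d = 0.3
studies = [(rng.normal(d, 1, n), rng.normal(0, 1, n)) for _ in range(12)]

for i, (treat, ctrl) in enumerate(studies, 1):
    _, p = stats.ttest_ind(treat, ctrl)
    print(f"Study {i:2d}: p = {p:.3f} {'*' if p < 0.05 else '(n.s.)'}")

# Crude "meta-analysis": pool the raw data across all twelve studies.
all_treat = np.concatenate([t for t, _ in studies])
all_ctrl = np.concatenate([c for _, c in studies])
_, p_pooled = stats.ttest_ind(all_treat, all_ctrl)
print(f"Pooled across studies: p = {p_pooled:.3f}")
```

Under the current system, most of those twelve studies never see the light of day, and nobody ever gets to run the pooled analysis.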
You see, when the scientific and statistical culture requires a "threshold," an arbitrary and difficult statistical standard to fulfill [i.e., p < .05], those in the culture are incentivized to twist data any which way to publicize their work [i.e., get published]. Until the accepted guidelines and statistical methods are questioned at a basic and fundamental level, science will keep seeing a fucked up, sad, grainy black-and-white picture of what the (probably) messy and vivid technicolor "truth" actually is.
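And here's how cheap the twisting can be. A sketch of one classic move (outcome counts and sample sizes below are made up for illustration): measure 20 outcomes, then report whichever one happens to cross p < .05. Under a completely true null, basic probability says at least one outcome comes up "significant" in roughly 1 - 0.95^20, or about 64%, of experiments:

```python
# One flavor of data-twisting: measure 20 outcomes and report
# whichever crosses p < .05. Even with NO real effects anywhere,
# at least one "significant" result turns up in ~64% of experiments
# (1 - 0.95**20, assuming independent outcomes).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2_000
n_outcomes = 20       # e.g., 20 questionnaire subscales
n_per_group = 30
hacked = 0

for _ in range(n_experiments):
    got_one = False
    for _ in range(n_outcomes):
        a = rng.normal(0, 1, n_per_group)   # null is true for every outcome
        b = rng.normal(0, 1, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            got_one = True
            break   # found something "publishable" -- stop looking
    hacked += got_one

print(f"Experiments yielding at least one *: {hacked / n_experiments:.0%}")
```

No fraud required, just selective reporting, and the 5% false-alarm rate quietly becomes something closer to a coin flip.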
I've been in science just long enough that my desire to succeed (read: publish) tests my desire to tell the unabashed and naked truth. I've run numerous behavioral studies designed around an educated guess, drawn from previously published work, about what the interplay between variables should look like, only to find after analysis that the differences or relationships were not statistically significant. To someday secure the "ultimate" goal of becoming a successful scientist and/or professor, I need to publish my work.
If only statistically significant work is publishable, rather than rigorous work regardless of outcome, it is reasonable that one would be tempted to fudge findings. Science is intensely competitive, and a career in it often demands a slavish commitment to work that forces many people to choose between personal happiness and even the possibility of success. It also seems reasonable that talented individuals might simply opt out of a career path with such a bleak outlook.
They don't tell you this shit in graduate school. I wish it were talked about more.