
Sunday, August 18, 2013

It's more fun to compute

If you're not currently addicted to Evernote, I pity you.

Yes, this is meant to sound inflammatory. But I am not overselling how much more in control I feel of all the crazy things that go on in my life, thanks to this technology.

What is Evernote? 
Evernote is an application that lets you create "notes": you can type information, take and attach images, record and save audio, add reminders, draw or annotate, and tag notes with labels to categorize them. It does about a billion other things too, but that's the gist. It's available for iPhone, iPad, PC, Mac, and online.

Here are some reasons it's amazing for scientists:
1. All those notes/tips/protocols that I lose in my haphazardly organized, bulky lab notebook? Yeah, I took pictures of those motherfuckers and now they're stored electronically forever and I can find them whenever.

2. It offers incredible functionality within a web browser and allows me to "clip" things I select online directly to my collection in a snap.

3. It can scan handwritten notes and make that shit searchable.*

4. If you import PDF files, their content is searchable. No more creative file names, no more organizational mishaps or printed stacks of articles.*

5. I've been using it to put together my thoughts/ideas on data analysis, manuscript design, and graphs.

6. You can share the notes with people through links or by email.

7. A note is almost instantly synced to every device on which you have the software, including the web version.

Beware: You will start using Evernote for all of teh things, all of teh time. 
It's also awesome for recipes, music/restaurant recommendations, prescriptions, remembering where you've parked, and just about anything for which you'd like to stash some information for rapid retrieval later.

Want to learn more about Evernote? Try this Beginner's Guide from Mashable.

*These features are available through the paid version of Evernote (called Premium), which is $45 for the year.


Tuesday, June 4, 2013

I can't get out of what I'm into

In an increasingly competitive science talent market, publication counts are valued much more highly than moral fiber. Who reads your work, and which journals print your manuscripts, really matters when you're fighting for an important or influential job [read: a position ineligible for a student discount]. Unfortunately, the only thing that's truly publishable is a study boasting significant differences or relationships among variables. The standard used to guard the term significant - a p-value less than 0.05 - is an arbitrary cutoff. And despite how it's often described, it does not mean a 'significant' effect is 95% likely to be a true and valid thing; it means that if there were actually no effect, data this extreme would turn up less than 5% of the time by chance alone. Earning the designation !STATISTICALLY SIGNIFICANT! is also usually the only way to get your findings in front of your peers at all.
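To make that cutoff concrete, here's a minimal sketch in Python (assuming numpy and scipy are available): when there is no real effect at all, about 5% of experiments will still clear the p < .05 bar purely by chance.

```python
import numpy as np
from scipy import stats

# Simulate many experiments in which the null is TRUE: both groups
# are drawn from the exact same distribution, so every "significant"
# result is a false positive by construction.
rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 20

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)  # same population as a
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# Expect roughly 0.05: the threshold controls the rate of flukes when
# there is nothing to find, NOT the probability that a significant
# result is real.
print(false_positives / n_experiments)
```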

This system sucks balls for several reasons:

1) You might find the cure for cancer or the cure for AIDS, but unless your study makes the statistical cut, it's probably going to become a story never told. That treatment that dramatically improved symptomatology for a large proportion of individuals in a study? Not valid unless you've got that coveted asterisk (*) over some graphs.

2) This case-by-case system does not account for data/studies in the aggregate. A small effect that emerges consistently across a number of studies with variable statistical power (possibly due to methodological limitations) might actually be a real relationship, but current statistical standards can keep that connection between meaningful variables from ever being recognized (see the sketch after this list).

3) Studies with nonsignificant findings are just as informative as studies that demonstrate significant differences. In a culture where only select information (i.e., significant findings) is disseminated to the scientific community, the same dead-end experiments are bound to be run time after time. It's a waste of time and money to let studies with null findings quietly die rather than be shared with the community.

4) This system sometimes forces scientists to make hard choices between success (publishing statistically significant findings) and all-out truthfulness. Statistics can easily be misleading, or twisted to suit the purposes of the author. This happens because the emphasis in training is on area-specific knowledge and methodology, rather than on an in-depth understanding of the appropriate use, implementation, and interpretation of statistics.
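Here's that aggregation point (#2) in code - a minimal sketch, with invented numbers, of how a small but real effect can slip past every individual underpowered study and still show up clearly when the data are pooled:

```python
import numpy as np
from scipy import stats

# Invented numbers: a small but real effect (Cohen's d = 0.2) studied
# in 20 separate underpowered experiments (n = 50 per group, power ~17%).
rng = np.random.default_rng(1)
n_studies, n, effect = 20, 50, 0.2

controls, treated, hits = [], [], 0
for _ in range(n_studies):
    c = rng.normal(0.0, 1.0, n)
    t = rng.normal(effect, 1.0, n)
    controls.append(c)
    treated.append(t)
    if stats.ttest_ind(c, t).pvalue < 0.05:
        hits += 1

# Individually, most studies miss the effect...
print(f"{hits}/{n_studies} studies reached p < .05")

# ...but pooling the raw data (a crude stand-in for a proper
# meta-analysis) recovers it easily.
p = stats.ttest_ind(np.concatenate(controls), np.concatenate(treated)).pvalue
print(f"pooled analysis: p = {p:.4f}")
```

A real meta-analysis weights studies properly rather than just concatenating raw data, but the intuition is the same: the evidence was there all along, scattered across studies too small to see it.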


You see, when the scientific and statistical culture requires a "threshold" - an arbitrary and difficult statistical standard to fulfill [i.e., p<.05] - those in the culture are incentivized to twist data any which way to publicize their work [read: get published]. Until the accepted guidelines and statistical methods are questioned at a basic and fundamental level, science will continue to see a fucked up, sad, grainy black-and-white picture of what the (probably) messy and vivid technicolor "truth" actually is.
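And the twisting is trivially easy. A minimal sketch (same assumed numpy/scipy setup as above): if a study with no real effects anywhere measures ten unrelated outcomes and reports whichever one happens to cross p < .05, roughly 40% of such null studies end up with something "publishable."

```python
import numpy as np
from scipy import stats

# No real effects at all, but each simulated "study" measures ten
# unrelated outcomes and keeps whichever (if any) crosses p < .05.
rng = np.random.default_rng(2)
n_sims, n_outcomes, n = 5_000, 10, 20

lucky = 0
for _ in range(n_sims):
    p_values = [
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
        for _ in range(n_outcomes)
    ]
    if min(p_values) < 0.05:
        lucky += 1

# Roughly 1 - 0.95**10, i.e. about 40% of these null studies end up
# with at least one "publishable" result.
print(lucky / n_sims)
```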

I've been in science just long enough that my desire to succeed (read: publish) tests my desire to tell the unabashed and naked truth. I've run numerous behavioral studies designed around an educated guess, drawn from previously published work, of what the interplay between variables should look like, only to find upon analysis that the differences or relationships were not statistically significant. To someday secure the "ultimate" goal of becoming a successful scientist and/or professor, I need to publish my work.

If only statistically significant work is publishable, rather than rigorous work regardless of outcome, it is reasonable that one would be tempted to fudge findings. Science is intensely competitive, and a scientific career often demands a slavish commitment to work that forces many people into a hard choice between personal happiness and even the possibility of success. It seems just as reasonable that talented individuals might opt out of a career path with such a bleak outlook.

They don't tell you this shit in graduate school. I wish we talked about it more.

Thursday, May 2, 2013

I believe in a thing called love

As a behavioral neuroscientist, the bulk of my scientific data comes directly from the actions of animals. My experience thus far is predominantly from my graduate training in preclinical traumatic brain injury with rats. The tasks I perform with the animals can be as hands-off as activity monitoring (a motor and anxiety measure where the subject is simply placed into a box) or as intimate as forelimb placing (significantly restraining the animal and moving it along a plexiglass plate to stimulate its whiskers and examine reflexes). Whatever the intimacy of the task, handling and becoming familiar with the animals is standard practice before using them in a study, although its implementation varies wildly across researchers and is seldom mentioned in Methods sections.

The primary goal of handling is to make the animals familiar enough with the handler/researcher that they can satisfactorily perform the behavioral tasks that will be asked of them. At first, this means acquainting the animal with simply being touched, as many animals sit in tiny boxes without social stimulation of any kind as they await use in a study. Later, handling can mean manipulating the hands and feet of the animal, or even getting them used to being held in one's arms. 

But when I first started working with rodents, I was told not to get too emotionally close to the animals. "Don't get attached," was uttered on dozens of occasions by lab members at every level of training. Indeed, the practice of endearingly naming one's subjects is usually taken as a sign that one isn't long for a career in science.

Sure enough, I named one of my first subjects, a female retired breeder, because of her irresistibly sweet yet voraciously intelligent behavior. Sure, "Number Seven" sung in a baby voice isn't a name by the strictest definition, since her official label was AS-7, but I loved this animal. Just as everyone had told me not to do. When the day came to sacrifice the animal, my first on my own, I tried to hide how difficult it was for me. I shut the door after I was finally able to administer the euthanasia agent. I shut the door and I cried as I watched the animal die.

It would be almost three more years before I would bestow nicknames on animals again. But this time, I'm much more conscious of what that endearment means. Since then, I have been passionate - nay, outspoken - about how important it is to treat these precious animals with care, rather than as data-producing meat sacks. When given the opportunity in my postdoctoral position, I was quick to insist that basic animal care include more enrichment items (for example, we now use nestlets and paper mouse houses). I insist that animals are never singly housed unless necessitated by health concerns.

I fear that in the dire need to produce research littered with p-values below .05, we forget that science should be the pursuit of truth. If our animals are truly modeling human phenomena, we need to commit to that by treating them as the precious models they are. Anything less is a conscious dissociation from sufficiently rigorous research. Why do some researchers forgo enrichment items for their animal subjects? Because improving a reprehensibly impoverished environment can be enough to wipe out the significant differences among experimental groups they're hoping for.

Scientists shouldn't be so afraid to not find significant differences that they allow subpar care of their animal subjects. I dream about a science that embraces and supports good, rigorous, and creative work. I also dream about manuscripts that write themselves. Sigh... another day, another day.

Saturday, September 10, 2011

Statistics and Neuroscience

Following my undergraduate training, I took a job with a financial planner as their office manager. At orientation, my boss told me: "There's the way things are supposed to be done, the way things are actually done, and a wide grey bridge in between." In short, there are rules but we don't always follow them perfectly.

In my experience, there has been no instance where this has been more true than in the implementation of statistics in research. 

I've always been keen on math, briefly considering a career as a high school math teacher. I really understood math, and appreciated that there was rarely any ambiguity about what was correct. When I received the assignment to be a teaching assistant for research methods and statistics in psychology in my second year of graduate school, I was absolutely thrilled.

My enthusiasm about using statistics in my professional field was much more subdued, however. During my first year of graduate school, I took the required statistics courses and was actually delighted at how difficult they were. The following year, when I began to write up my thesis project, I was chastised for implementing statistics the way I had learned them in my statistician-led class. In short, my mentor reminded me of the grey bridge, and insisted that I adopt the statistical methods accepted within our subdiscipline of neuroscience.

In fact, an article passed around Twitter yesterday illustrated how statistics can be - and often are - erroneously used in research (http://www.sandernieuwenhuis.nl/pdfs/NieuwenhuisEtAl_NN_Perspective.pdf). And this details only one way in which statistics are incorrectly employed! In truth, the interests of most (neuro)scientists lie in the scientific variables relevant to their field, and not in the statistics that ultimately assess their success. As a (neuro)scientist myself, I both completely understand this and find it reprehensible.
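The error that paper documents is worth seeing in miniature. A minimal sketch (Python with numpy/scipy, toy numbers invented for illustration): an effect comes out significant in one group and not in the other, and it's tempting to conclude the groups differ - but the honest move is to test the difference between the effects directly.

```python
import numpy as np
from scipy import stats

# Toy numbers: a treatment effect that is moderate in group 1
# and weak in group 2.
rng = np.random.default_rng(3)
n = 50
pre1, post1 = rng.normal(0.0, 1.0, n), rng.normal(0.6, 1.0, n)
pre2, post2 = rng.normal(0.0, 1.0, n), rng.normal(0.3, 1.0, n)

# The tempting (and wrong) comparison: significant in group 1,
# not significant in group 2, therefore "the groups differ."
print(stats.ttest_ind(pre1, post1).pvalue)  # often < .05
print(stats.ttest_ind(pre2, post2).pvalue)  # often > .05

# The honest question: do the two effects differ from EACH OTHER?
# Comparing the pre/post differences is a crude stand-in for a proper
# interaction test, and it frequently fails to reach significance.
diff1, diff2 = post1 - pre1, post2 - pre2
print(stats.ttest_ind(diff1, diff2).pvalue)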

This is a topic of incredible breadth and great interest to me, so expect to hear plenty more in posts to come.