A brief rhyming interlude concerning responsible conduct of research.

I recently became aware, by way of the Tweet-o-sphere, that I am regarded by some as “the Dr. Seuss of science policy.”

I mentioned this compliment (I think) to the reliably hilarious SciCurious (who also blogs here), and she promptly produced this educational tale, which I post with her kind permission:

The sun did not shine
In the laboratory that day
as we sat with our gels
watching bands slide away

I sat there with Sally
We stared at data, we two
And I said “How I wish
there was something we could do!”

We have tried all our primers
and tweaked all our bands
but this data’s not replicable
at least in our hands

So all we could do was to
try, try, try, try
using all our free hours
for our hypothesis ran dry

And then something went BUMP!
How that bump made us jump!

We looked,
we saw him dart out from the -80 freezer
and we saw him!
Our old PI, that hairless old geezer!
And he said to us,
“why do you sit there and stare?”

“I know the data’s bad,
Funding’s not forthcoming,
But there’s things we can do:
We can do data plumbing!”

“I know some good games we could play”
said the guy.
“I know some old tricks”
said that geezer PI.
“A lot of old tricks,
to make your data like new.
Other scientists
will not mind at all if I do”

Then Sally and I
Did not know what to say
For no other PIs were in lab that day.

But our tech said “no no!”
“Make him go away! 
Tell that geezer PI
you do NOT want to play!
I’ve seen this before!
I know what this is about.
He’ll help you fake your data
while your new boss is out!”

“Now, now! Have no fear.
Have no fear,” said the PI.
“My tricks are not bad, they’re just how you get by!
Why, you can have lots of new gels, if you wish,
and as for the tech, I’ll fire that old fish!”

“Please, no!” said the fish.
“Not my job at stake!
I just don’t want to watch
while the data you fake!”

“Have no fear,” said the geezer,
“I won’t have you watch.
You’ll be pipetting all day
as we three data-botch,
With our pictures in Photoshop,
Our SEMs made all small.
And that’s not all I can do,
no, not all!”

Look at the data!
Just look at it NOW!
I got it to support our ideas
somehow.
I can match RNA
and DNA too
I can even make
data from proteins come through!

We’ll publish and publish
we’ll write a grant, maybe three!
That’s just how easy faking data can be!

And all these things were happening
To Sally and me.
All the papers and grants
And traveling for free
But still we were worried
Something was askew
And our worries got worse
With Reviewer 1 and Reviewer 2.
They ran over us fast
They said “what did you do?”
“Would you like to explain
to Reviewer 1 and Reviewer 2?”

And Sally and I
did not know what to do
We had to tell something
to Reviewer 1 and Reviewer 2.
We were racking our brains
and the tech cried “o ho!”
“Now it will all come out,
everyone will know!”

“You should not have let in
that old geezer PI
he’s taught you how to fake,
taught you how to lie
Science is about truth!
Not about how to get by!”

“Have no fear, my young grads,”
said the asshole PI.
“These Reviewers are good reviewers
to have by and by.
They are tame.  Oh so tame!
They are old friends of mine
Just let me do explaining
and all will be fine.”

But it turns out his Reviewer friends
were not so tame
And they were not pleased
With what he tried to explain.
It turns out old PI
Had done this before
An investigation ensued
He was shoved out the door.

We were left all alone
In the cold, dark, dank lab
No old geezer advisor
To be glib and blab
And now we had no thesis
And no stipend as well
All our data faking
threw us down the well

And the moral of this tale?
Don’t let pressure make you lie
Science is about TRUTH
and not just “getting by”

Cross-posted at Doing Good Science.

Posted in Ethical research, Ethics 101, Guest post, Pop culture.

3 Comments

  1. In the blogosphere a few months ago, Bruce Booth suggested that an unspoken rule is that at least 50% of published findings can’t be repeated (http://lifescivc.com/2011/03/academic-bias-biotech-failures/).

    Today, again in the blogging world, Derek Lowe recounts a recent study from Bayer where they lower that estimate to 20-25% (http://pipeline.corante.com/archives/2011/09/02/how_many_new_drug_targets_arent_even_real.php)

    The authors aren’t suggesting fraud as described in the clever “Dr. Seuss-type” poem, but “just” irreproducibility.

    My questions: Does this 20-25% reproducibility rate shock or surprise people, or is this an “expected” level?
    What would be an “acceptable” percentage of studies that aren’t reproducible?
    And finally, if each researcher were to ask themselves how many of their own published findings are NOT reproducible, how would they honestly answer?

    • It’s not really that surprising, I think. There are real issues with false positives. If you’re familiar with Ioannidis’ work, it’s actually pretty readable even for a non-statistician. Much of his work has been about false positives in medicine: http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

      Other people have looked at factors in the publication process that skew published results as well. Here’s one example: http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0050201

      One of the more commonly known problems is the “file drawer problem”: non-significant results are much less likely to be published. This is particularly troublesome when one or two groups have found an effect, but others trying to replicate it do not get the same results. It also means that those who report the largest effects are the most likely to be published and the most likely to be cited later.

      So it’s not that surprising, really, even if you do take outright fraud out of the equation. If you test enough hypotheses, some of them are simply false positives. You try to do what you can to see if those are real biological effects, but that’s not always easy.
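
      To make the arithmetic concrete, here is a minimal simulation (my own illustrative sketch in Python, with arbitrary sample sizes; it is not drawn from any of the studies linked above). It runs many two-group comparisons in which the true effect is zero, counts how many come out “significant” anyway, and shows how inflated the effects look if you keep only the significant ones:

          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(0)
          n_tests, n_per_group = 1000, 20
          pvals, effects = [], []

          for _ in range(n_tests):
              # Both groups are drawn from the same distribution,
              # so the null hypothesis is true for every test.
              a = rng.normal(0.0, 1.0, n_per_group)
              b = rng.normal(0.0, 1.0, n_per_group)
              _, p = stats.ttest_ind(a, b)
              pvals.append(p)
              effects.append(abs(a.mean() - b.mean()))

          pvals, effects = np.array(pvals), np.array(effects)
          sig = pvals < 0.05
          print(f"'significant' results: {sig.sum()} of {n_tests}")  # expect ~50
          print(f"mean |effect|, all tests:        {effects.mean():.2f}")
          print(f"mean |effect|, significant only: {effects[sig].mean():.2f}")

      Roughly 5% of these true-null tests clear p < 0.05, and those are exactly the ones most likely to be written up; as the last two lines show, they also sport the largest apparent effects. That is the file drawer problem and effect-size inflation in one toy example.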

      • Tybo, thanks for the great links. Much food for thought. As both articles suggest, perhaps it is time for the science community to rethink and readdress its publication strategy. As you mentioned, it has been notoriously difficult to publish negative findings, but as scientists don’t we learn as much from the absence of a relationship as from the presence of one?

        Reproducing data has sometimes been challenging because the exact details of the experiments, reagents, etc. may have been omitted, often due to the page restrictions of some journals (mea culpa: I have trimmed methods sections in order to expand the discussion or to fit an additional figure). With the switch from paper to electronic journals, there is no reason not to provide complete, detailed methods for each manuscript.

        Also, as a community, I think we must face the economic incentive for scientists to publish, as the number and “importance” of papers are often directly linked to success in grant funding and tenure. We like to think that we are “above that kind of thing,” but I think we need to be honest about how much this may play a role.
