SPSP 2013 Plenary session #4: Sergio Sismondo

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 29, 2013.

  1. Last plenary of conference: Sergio Sismondo, “Toward a political economy of epistemic things,” starts in ~10 min #SPSP2013 #SPSP2013Toronto
  2. Knowledge as a quasi-substance (takes work, resources to make; requires infrastructure; moves w/ difficulty) #SPSP2013 #SPSP2013Toronto

A passing thought about a certain flavor of “citizen science” project.

I think that better public understanding of science (and in particular of the processes by which scientific knowledge is built) is a good thing.

I’m persuaded that one way public understanding of science might be enhanced is through projects that engage members of the public, in various ways, in building the knowledge. Potentially, such “citizen science” initiatives could even help develop some public good will for traditional science projects.

But, I think there’s a potential for engagement with the public to go very wrong.

This is especially true in situations in which there’s not a clear line between the citizen-as-participant-in-knowledge-building and the citizen-as-human-subject (who is entitled to certain kinds of protection — e.g., of autonomy, of privacy, from various kinds of harms), and even more so in cases where the citizen scientist-cum-human subject is also a customer of the entity conducting the research.

And, while it may not be the case that heightened ethical oversight (e.g., from an Institutional Review Board) is necessary in cases where the citizen science project is not aimed at publishing results in the scientific literature or bringing a medical product or device to market, it strikes me that scientists engaging with members of the public (citizen scientists-cum-human subjects-cum-customers) might do better to err on the side of more ethical consideration rather than less, of more protection of human subjects rather than “caveat emptor”.

Indeed, scientists engaging with members of the public to build the knowledge might be well served to engage with those members of the public in a consideration of the ethics of the research. This could be an opportunity to model how it should be done, not simply what you can get away with under the prevailing regulations. It could also be an opportunity for researchers to listen to the members of the public they’re engaging rather than simply treating them as sources of specimens, funding, and free labor.

Playing fast and loose with ethics in projects that engage citizen scientists-cum-human subjects-cum-customers could have blowback as far as public attitudes toward science and scientists are concerned. I suspect such blowback would not be limited to the researchers or organizations directly involved, but would extend to other researchers with citizen science projects (even ethically well-run ones), and probably to scientists and scientific organizations more broadly.

In other words, the scientific community as a whole has an interest in the purveyors of this kind of citizen science getting the ethical engagement with the public right.

* * * * *
These general musings were sparked by questions raised about a specific commercial citizen science project in two posts at The Boundary Layer. Click through and read them.

UPDATE: And Comrade PhysioProf weighs in.

To tweet or not to tweet the professional conference? (Some thoughts in 140-character chunks.)

There’s a lot of discussion kicking around the tubes at the moment about whether it is appropriate to live-tweet a session at a professional conference. The recent round of discussion looks to have originated among English faculty. At the blog Planned Obsolescence, the Modern Language Association’s Director of Scholarly Communication, Kathleen Fitzpatrick, offers sensible advice on tweeting or not at meetings. Meanwhile, Prof-like Substance is quizzical about the request to keep private what a scholar is presenting in the public space provided by a professional meeting (while recognizing, of course, that there are venues like the Gordon Research Conferences that have explicit rules about not publicizing what is presented beyond the bounds of the conference).

It’s no secret that I’ve tweeted a meeting or two in my time. I’ve even mused at some length about the pros and cons of tweeting a meeting, although mostly from the point of view of the meeting attendee (me) absorbing and interacting with what is being presented, compared to taking notes in my notebook instead.

If pressed for a blanket statement on whether tweeting a conference presentation is OK or not OK, I would say: it depends. There are complexities here, many linked to the peculiar disciplinary norms of particular professional communities, and given that those norms are themselves moving targets (changing in response to the will of active members of those communities, among other things), any ruling that somehow got it right at this moment would be bound for obsolescence before very long.

In other words, I don’t have a grand argument covering all the relevant contexts. Instead of trying to frame such an argument, I’m going to give you my thoughts on this, in tweet-sized bites:

  • Tweeting a meeting is a way to include members of the professional community who didn’t have the funds or flexibility to be there IRL.
  • Tweeting a meeting is a way to include interested people beyond the professional community in the audience and the discussion.
  • Since Twitter is interactive, tweeting a meeting is a way to promote discussion of what’s being presented RIGHT AWAY, for better or worse.
  • For worse: discussion may start before relevant facts, ideas are on the table, assuming things speaker isn’t claiming as speaker’s point.
  • For better: speaker can get rapid feedback on which points are persuasive, which seem iffy, as well as on fruitful tangents and connections.
  • Worry: conference tweeters distracted from engaging with speakers, people in room, & asking questions there. Some folks are not on Twitter!
  • Worry: conference tweeters may give inaccurate account of speaker’s claims, fail to distinguish their commentary from reporting of talk.
  • But, if multiple attendees tweet session, more basis to tease out which thoughts are from speaker, which from tweeters responding to talk.
  • Worry: live-tweeting sessions opens speakers to having results/ideas/arguments swiped by someone not at the meeting. Might get scooped.
  • Of course, others in the room could swipe speaker’s results/ideas/arguments. Why assume you couldn’t already get scooped?
  • Is community pressure stronger on people in the room (not to scoop speaker) than on members of the community following tweets? If so, why?
  • Live-tweeting meeting w/proper attribution of speaker could serve as record of results/ideas/arguments and who presented them. Protection!
  • Challenge to proper tweetribution of talk contents: getting speakers’ Twitter handles right. These could be listed in conference program.
  • Could be issues including speakers’ Twitter handles in tweets of their talks if their use of Twitter is primarily personal, not professional.
  • Some speakers freaked out by people tweeting their talks (especially in workshop-y/preliminary results scenarios). Should that be respected?
  • Are they also freaked out by people taking notes at their talks? Is worry sharing-beyond-room or rapid amplification potential?
  • Big Q: Does community view its professional meetings as public venues or something more limited? If latter, what is rationale for limits?
  • When meeting tweets help scholars figure something out, will they cite tweets? Easier than citing chat at hotel bar & easier to recall later
  • Expectations different in different disciplines; tweeting interdisciplinary conferences likely to expose differences in norms.
  • Never a bad idea to ask if speakers are OK with having their talks tweeted. If not, talking about why afterwards could be informative.
  • Things not to tweet (identifiably) from a talk: how bored you are, commentary on speaker’s looks. Save that for your notebook.
  • If tweeting conferences becomes a standard thing, might be tensions between “official” tweeters and independent attendee tweets.
  • Mass tweeting might also make serious bandwidth at conference venue a requirement. Expect that would increase registration fees!
  • Some fields likely to have harder time with 140 char limit than others. Push to be concise might be a positive influence on them.

I welcome your thoughts in the comments (and you can use more than 140 characters if you need to).

Marc Hauser makes an excuse for cheating. What he could have done instead.

DrugMonkey notes that Marc Hauser has offered an explanation for faking data (as reported on the Chronicle of Higher Education Percolator blog). His explanation amounts to:

  • being busy with teaching and directing the Mind, Brain & Behavior Program at Harvard
  • being busy serving on lots of fancy editorial boards
  • being busy writing stuff explaining science to an audience of non-scientists
  • being busy working with lots of scientific collaborators
  • being busy running a large research lab with lots of students

DrugMonkey responds that busy is part of the job description, especially if you’re rolling in the prestige of a faculty post at Harvard, and of being a recognized leader in your field. I would add that “I was really busy and I made a bad decision (but just this one time)” is an excuse we professors frequently hear from students we catch cheating. It’s also one that doesn’t work — we expect our students to do honest work and figure out their time management issues. And, we’re expected to work out our own time management issues — even if it means saying “No” to invitations that are sometimes tempting.

By the way, Marc Hauser didn’t actually admit that he faked data, or committed research misconduct of any kind, so much as he “accepts the findings” of the Office of Research Integrity. Moreover, his comments seem to be leaning on that last bullet point (the rigors of supervising a big lab) to deflect what responsibility he does take. From the CHE Percolator:

He also implies that some of the blame may actually belong to others in his lab. Writes Hauser: “I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved.”

But that take—the idea that the problems were caused mainly by Hauser’s inattention—doesn’t square with the story told by those in his laboratory. A former research assistant, who was among those who blew the whistle on Hauser, writes in an e-mail that while the report “does a pretty good job of summing up what is known,” it nevertheless “leaves off how hard his co-authors, who were his at-will employees and graduate students, had to fight to get him to agree not to publish the tainted data.”

The former research assistant points out that the report takes into account only the research that was flagged by whistle-blowers. “He betrayed the trust of everyone that worked with him, and especially those of us who were under him and who should have been able to trust him,” the research assistant writes.

So, Hauser is kind of claiming that there were too many students, postdocs, and technicians to supervise properly, and some of them got away from him and falsified methodology and coding and fabricated data. The underlings challenge this account.

In the comments at DrugMonkey’s, hypotheses are being floated as to what might have spurred Hauser’s bad actions. (A perception that he needed to come up with sexy findings to stay a star in his field is one of the frontrunners.) I’m more inclined to come up with a list of options Hauser might have fruitfully pursued instead of faking or allowing fakery to happen on his watch:

  1. He could have agreed not to send out manuscripts with questionable data when his underlings asked him to.
  2. He could have asked to be released from some of his teaching and/or administrative duties at Harvard so he could spend the needed time on his research and on properly mentoring the members of his lab.
  3. He could have taken on fewer students in order to better supervise and mentor the students in his charge.
  4. He could have sought the advice of a colleague or a collaborator on ways he might deal with his workload (or with the temptations that workload might be awakening in him).
  5. He could have communicated to his department, his professional societies, and the funding agencies his considered view that the demands on researchers, and operative definitions of productivity, make it unreasonably hard to do the careful research needed to come up with reliable answers to scientific questions.

And those are just off the top of my head.

I’m guessing that the pressure Marc Hauser felt to get results was real enough. What I’m not buying is the same thing that I don’t buy when I get this excuse from student plagiarists: that there was no other choice. Absent a gun to Hauser’s head, there surely were other things he could have done.

Feel free to add to the list of other options someone facing Hauser-like temptations could productively pursue instead of cheating.

Announcing Dr. Free-Ride’s Ethics Line, discreet ethical advice by phone.

Do you have an ethical dilemma?

Are you tired of grappling with it all by yourself?

Would you like to have my capable, experienced hands wrapped around your big, hard ethical problems?

The wait is over!

I’m pleased to announce the launch of Dr. Free-Ride’s Ethics Line, bringing you discreet ethics consultations by phone for the reasonable rate of $1.99/minute.

Let me talk with you about your unique ethical needs. We can do this one-on-one, or, if you’re feeling adventurous, we can make it a group thing.

Or, tell me about your ethical fantasies, er, hypotheticals. I can’t wait to hear all the details and then describe to you what we will do with them …

On Dr. Free-Ride’s Ethics Line, I will cater to your specific needs.

Want to get down and dirty in the details of federal regulations for research with human subjects or animals? I’ll do that with you.

Ready to work up the courage to disclose your significant financial interests to the world? Disclose them to me first on a private, non-judgmental call.

Tired of manipulating that same old figure for each journal submission? I’m prepared to tell you just how naughty you’ve been, to give you the punishment you’ve secretly been wanting, and to help you develop a plan to use your new data and figures that show off their natural beauty.

You know you want to. Click the payment button to get started.

Frequently Asked Questions:

Who else have you provided with this kind of, um, “consulting” service?

Lots of scientists, with ethical problems of all shapes and sizes. I can’t give you their names, though, and I’ll never reveal yours — confidentiality is just that important to me.

Does giving ethical advice for money compromise your objectivity?

Who gave you that idea? And do you really think $1.99/minute is enough to move me from my principles?

Look, if you call Dr. Free-Ride’s Ethics Line, I’m going to tell you what you need to hear, not necessarily what you want to hear. It may hurt at first, but you will love it. And if you don’t, at $1.99/minute, at least the pain isn’t costing you much.

Do you have any conflicts of interest to disclose here?

Not until the big corporations or universities that might be able to use my ethical advice decide to pay for my services. (You know where to find me, big corporations and universities!)

If I’m paying by credit card or PayPal, is our consulting session really confidential?

Yes! The charge for our session will appear on your statement as “Free-Ride’s Sexy Phone Time”.

Are you ready to show me your ethical quandaries? Click the payment button now to get the ball rolling!

Scientific authorship: guests, courtesy, contributions, and harms.

DrugMonkey asks, where’s the harm in adding a “courtesy author” (also known as a “guest author”) to the author line of a scientific paper?

I think this question has interesting ethical dimensions, but before we get into those, we need to say a little bit about what’s going on with authorship of scientific papers.

I suppose there are possible worlds in which who is responsible for what in a scientific paper might not matter. In the world we live in now, however, it’s useful to know who designed the experimental apparatus and got the reaction to work (so you can email that person your questions when you want to set up a similar system), who did the data analysis (so you can share your concerns about the methodology), who made the figures (so you can raise concerns about digital fudging of the images), etc. Part of the reason people put their names on scientific papers is so we know who stands behind the research — who is willing to stake their reputation on it.

The other reason people put their names on scientific papers is to claim credit for their hard work and their insights, their contribution to the larger project of scientific knowledge-building. If you made a contribution, the scientific community ought to know about it so they can give you props (and funding, and tenure, and the occasional Nobel Prize).

But, we aren’t in a position to make accurate assignments of credit or responsibility if we have no good information about what an author’s actual involvement in the project may have been. We don’t know who’s really in a position to vouch for the data, or who really did the heavy intellectual lifting in bringing the project to fruition. We may understand, literally, the claim, “Joe Schmoe is second author of this paper,” but we don’t know what that means, exactly.

I should note that there is not one universally recognized authorship standard for all of the Tribe of Science. Rather, different scientific disciplines (and subdisciplines) have different practices as far as what kind of contribution is recognized as worthy of inclusion as an author on a paper, and as far as what the order in which the authors are listed is supposed to communicate about the magnitude of each contribution. In some fields, authors are always listed alphabetically, no matter what they contributed. In others, being first in the list means you made the biggest contribution, followed by the second author (who made the second-biggest contribution), and so forth. It is usually the case that the principal investigator (PI) is identified as the “corresponding author” (i.e., the person to whom questions about the work should be directed), and often (but not always) the PI takes the last slot in the author line. Sometimes this is an acknowledgement that while the PI is the brains of the lab’s scientific empire, particular underlings made more immediately important intellectual contributions to the particular piece of research the paper is communicating. But authorship practices can be surprisingly local. Not only do different fields do it differently, but different research groups in the same field — at the same university — do it differently. What this means is it’s not obvious at all, from the fact that your name appears as one of the authors of a paper, what your contribution to the project was.

There have been attempts to nail down explicit standards for what kinds of contributions should count for authorship, with the ICMJE definition of authorship being one widely cited effort in this direction. Not everyone in the Tribe of Science, or even in the subset of the tribe that publishes in biomedical journals, thinks this definition draws the lines in the right places, but the fact that journal editors grapple with formulating such standards suggests at least the perception that scientists need a clear way to figure out who is responsible for the scientific work in the literature. We can have a discussion about how to make that clearer, but we have to acknowledge that at the present moment, just noting that someone is an author without some definition of what that entails doesn’t do the job.

Here’s where the issue of “guest authorship” comes up. A “guest author” is someone whose name appears in a scientific paper’s author line even though she has not made a contribution sufficient (under whatever set of standards one recognizes for proper authorship) to qualify her as an author of the paper.

A guest is someone who is visiting. She doesn’t really live here, but stays because of the courtesy and forbearance of the host. She eats your food, sleeps under your roof, uses your hot water, watches your TV — in short, she avails herself of the amenities the host provides. She doesn’t pay the rent or the water bill, though; that would transform her from a guest to a tenant.

To my way of thinking, a guest author is someone who is “just visiting” the project being written up. Rather than doing the heavy lifting in that project, she is availing herself of the amenities offered by association (in print) with that project, and doing so because of the courtesy and forbearance of the “host” author.

The people who are actually a part of the project will generally be able to recognize the guest author as a “guest” (as opposed to an actual participant). The people receiving the manuscript will not. In other words, the main amenity the guest author partakes in is credit for the labors of the actual participants. Even if all the participants agreed to this (and didn’t feel the least bit put out at the free-rider whose “authorship” might be diluting his or her own share of credit), this makes it impossible for those outside the group to determine what the guest author’s actual contribution was (or, in this case, was not). Indeed, if people outside the arrangement could tell that the guest author was a free-rider, there wouldn’t be any point in guest authorship.

Science strives to be a fact-based enterprise. Truthful communication is essential, and the ability to connect bits of knowledge to the people who contributed is part of how the community does quality control on that knowledge base. Ambiguity about who made the knowledge may lead to ambiguity about what we know. Also, developing too casual a relationship with the truth seems like a dangerous habit for a scientist to get into.

Coming back to DrugMonkey’s question about whether courtesy authorship is a problem, it looks to me like maybe we can draw a line between two kinds of “guests”: one who contributes nothing at all to the actual design, execution, evaluation, or communication of the research, and one who contributes something here, just less than what the conventions require for proper authorship. If these characters were listed as authors on a paper, I’d be inclined to call the first one a “guest author” and the second a “courtesy author” in an attempt to keep them straight; the cases with which DrugMonkey seems most concerned are the “courtesy authors” in my taxonomy. In actual usage, however, the two labels seem to be more or less interchangeable. Naturally, this makes it harder to distinguish who actually did what — but it strikes me that this is just the kind of ambiguity people are counting on when they include a “guest author” or “courtesy author” in the first place.

What’s the harm?

Consider a case where the PI of a research group insists on giving authorship of a paper to a postdoc who hasn’t gotten his experimental system to work at all and is almost out of funding. The PI gives the justification that “He needs some first-author papers or his time here will have been a total waste.” As it happens, giving this postdoc authorship bumps the graduate student who did all the experimental work (and the conceptual work, and data analysis, and drafting of the manuscript) out of first author slot — maybe even off the paper entirely.

There is real harm here, to multiple parties. In this case, someone got robbed of appropriate credit, and the person identified as most responsible for the published work will be a not-very-useful person to contact with deeper questions about the work (since he didn’t do any of it or at best participated on the periphery of the project).

Consider another kind of case, where authorship is given to a well-known scientist with a lot of credibility in his field, but who didn’t make a significant intellectual contribution to the work (at least, not one that rises to the level of meriting authorship under the recognized standards). This is the kind of courtesy authorship that was extended to Gerald Schatten in a 2005 paper in Science, another of whose authors was Hwang Woo Suk. This paper had 25 authors listed, with Schatten identified as the senior author. Ultimately, the paper was revealed to be fraudulent, at which point Schatten claimed mostly to have participated in writing the paper in good English — a contribution recognized as less than what one would expect from an author (especially the senior author).

Here, including Schatten as an author seemed calculated to give the appearance (to the journal editors considering the manuscript, and to the larger scientific community consuming the published work) that the work was more important and/or credible, because of the big name associated with it. But this would only work because listing that big name in the author line amounts to claiming the big name was actually involved in the work. When the paper fell apart, Schatten swiftly disavowed responsibility — but such a disavowal was only necessary because of what was communicated by the author line, and I think it’s naïve to imagine that this “ambiguity” or “miscommunication” was accidental.

In cases like this, I think it’s fair to say courtesy authorship does harm, undermining the baseline of trust in the scientific community. It’s hard to engage in efficient knowledge-building with people you think are trying to put one over on you.

The cases where DrugMonkey suggests courtesy authorship might be innocuous strike me as interestingly different. They are cases where someone has actually made a real contribution of some sort to the work, but where that contribution may be judged (under whatever you take to be the accepted standards of your scientific discipline) as not quite rising to the level of authorship. Here, courtesy authorship could be viewed as inflating the value of the actual contribution (by listing the person who made it in the author line, rather than the acknowledgements), or alternatively as challenging where the accepted standards of your discipline draw the line between a contribution that qualifies you as an author and one that does not. For example, DrugMonkey writes:

First, the exclusion of those who “merely” collect data is stupid to me. I’m not going to go into the chapter and verse but in my lab, anyway, there is a LOT of ongoing trouble shooting and refining of the methods in any study. It is very rare that I would have a paper’s worth of data generated by my techs or trainees and that they would have zero intellectual contribution. Given this, the asymmetry in the BMJ position is unfair. In essence it permits a lab head to be an author using data which s/he did not collect and maybe could not collect but excludes the technician who didn’t happen to contribute to the drafting of the manuscript. That doesn’t make sense to me. The paper wouldn’t have happened without both of the contributions.

I agree with DrugMonkey that there’s often a serious intellectual contribution involved in conducting the experiments, not just in designing them (and that without the data, all we have are interesting hunches, not actual scientific knowledge, to report). Existing authorship standards like those from ICMJE or BMJ can unfairly exclude those who do the experimental labor from authorship by failing to recognize this as an intellectual contribution. Pushing to have these real contributions recognized with appropriate career credit is important. As well, being explicit about who made these contributions to the research being reported in the paper makes it much easier for other scientists following up on the published work (e.g., comparing it to their own results in related experiments, or trying to use some of the techniques described in the paper to set up new experiments) to actually get in touch with the people most likely to be able to answer their questions.

Changing how much weight experimental prowess is given in the career scorekeeping may be an uphill battle, especially when the folks distributing the rewards for the top scores are administrators (focused on the money the people they’re scoring can bring to an institution) and PIs (who frequently devote more of their working hours to conceiving and designing projects for their underlings, to writing the proposals that bring in the grant money, and to writing the manuscripts that report the happy conclusion of the projects funded by such grants, than to the intellectual labor of making those projects work). That doesn’t mean it’s not a fight worth having.

But, I worry that using courtesy authorship as a way around this unfair setting of the authorship bar actually amounts to avoiding the fight rather than addressing these issues and changing accepted practices.

DrugMonkey also writes:

Assuming that we are not talking about pushing someone else meaningfully* out of deserved credit, where lies the harm even if it is a total gift?

Who is hurt? How are they damaged?
__
*by pushing them off the paper entirely or out of first-author or last-author position. Adding a 7th in the middle of the authorship list doesn’t affect jack squat folks.

Here, I wonder: if dropping in a courtesy author as the seventh author of a paper can’t hurt, how can we expect it to help the person to whom this “courtesy” is extended?

Is it the case that no one actually expects that the seventh author made anything like a significant contribution, so no one is being misled in judging the guest in the number seven slot as having made a comparable contribution to the scientist who earned her seventh-author position in another paper? If listing your seventh-author paper on your CV is automatically viewed as not contributing any points in your career scorekeeping, why even list it? And why doesn’t it count for anything? Is it because the seventh author never makes a contribution worth career points … or is it because, for all we know, the seventh author may be a courtesy author, there for other reasons entirely?

If a seventh-author paper is actually meaningless for career credit, wouldn’t it be more help to the person to whom you might extend such a “courtesy” if you actually engaged her in the project in such a way that she could make an intellectual contribution recognized as worthy of career credit?

In other words, maybe the real problem with such courtesy authorship is that it gives the appearance of help without actually being helpful.

(Cross-posted at Doing Good Science)

Limits of ethical recycling.

In the “Ethics in Science” course I regularly teach, we spend some time discussing case studies to explore some of the situations students may encounter in their scientific training or careers where they will want to be able to make good ethical decisions.

A couple of these cases touch on the question of “recycling” pieces of old grant proposals or journal articles — say, the background and literature review.

There seem to be cases where the right thing to do is pretty straightforward. For example, helping yourself to the background section someone else had written for her own grant proposal would be wrong. This would amount to misappropriating someone else’s words and ideas without her permission and without giving her credit. (Plagiarism anyone?) Plus, it would be weaseling out of one’s own duty to actually read the relevant literature, develop a view about what it’s saying, and communicate clearly why it matters in motivating the research being proposed.

Similarly, reusing one’s own background section seems pretty clearly within the bounds of ethical behavior. You did the intellectual labor yourself, and especially in the case where you are revising and resubmitting your own proposal, there’s no compelling reason for you to reinvent that particular wheel (unless, of course, reviewer comments indicate that the background section requires serious revision, the literature cited ought to take account of important recent developments that were missing in the first round, etc.).

Between these two extremes, my students happened upon a situation that seemed less clear-cut. How acceptable is it to recycle the background section (or experimental protocol, for that matter) from an old grant proposal you wrote in collaboration with someone else? Does it make a difference whether that old grant proposal was actually funded? Does it matter whether you are “more powerful” or “less powerful” (however you want to cash that out) within the collaboration? Does it require explicit permission from the person with whom you collaborated on the original proposal? Does it require clear citation of the intellectual contribution of the person with whom you collaborated on the original proposal, even if she is not officially a collaborator on the new proposal?

And, in your experience, does this kind of recycling make more sense than just sitting down and writing something new?

Evaluating scientific reports (and the reliability of the scientists reporting them).

One of the things scientific methodology has going for it (at least in theory) is a high degree of transparency. When scientists report findings to other scientists in the community (say, in a journal article), it is not enough for them to just report what they observed. They must give detailed specifications of the conditions in the field or in the lab — just how did they set up and run that experiment, choose their sample, make their measurement. They must explain how they processed the raw data they collected, giving a justification for processing it this way. And, in drawing conclusions from their data, they must anticipate concerns that the data might have been due to something other than the phenomenon of interest, or that the measurements might better support an alternate conclusion, and answer those objections.

A key part of transparency in scientific communications is showing your work. In their reports, scientists are supposed to include enough detailed information so that other scientists could set up the same experiments, or could follow the inferential chain from raw data to processed data to conclusions and see if it holds up to scrutiny.

Of course, scientists try their best to apply hard-headed scrutiny to their own results before they send the manuscript to the journal editors, but the whole idea of peer review, and indeed the communication around a reported result that continues after publication, is that the scientific community exercises “organized skepticism” in order to discern which results are robust and reflective of the system under study rather than wishful thinking or laboratory flukes. If your goal is accurate information about the phenomenon you’re studying, you recognize the value of hard questions from your scientific peers about your measurements and your inferences. Getting it right means catching your mistakes and making sure your conclusions are well grounded.

What sort of conclusions should we draw, then, when a scientist seems resistant to transparency, evasive in responding to concerns raised by peer reviewers, and indignant when mistakes are brought to light?

It’s time to revisit the case of Stephen Pennycook and his research group at Oak Ridge National Laboratory. In an earlier post I mused on the saga of this lab’s 1993 Nature paper [1] and its 2006 correction [2] (or “corrigendum” for the Latin fans), in light of allegations that the Pennycook group had manipulated data in another recent paper submitted to Nature Physics. (In addition to the coverage in the Boston Globe (PDF), the situation was discussed in a news article in Nature [3] and a Nature editorial [4].)

Now, it’s time to consider the recently uploaded communication by J. Silcox and D. A. Muller (PDF) [5] that analyzes the corrigendum and argues that a retraction, not a correction, was called for.

It’s worth noting that this communication was (according to a news story at Nature about how the U.S. Department of Energy handles scientific misconduct allegations [6]) submitted to Nature as a technical comment back in 2006 and accepted for publication “pending a reply by Pennycook.” Five years later, uploading the technical comment to arXiv.org makes some sense, since a communication that never sees the light of day doesn’t do much to further scientific discussion.

Given the tangle of issues at stake here, we’re going to pace ourselves. In this post, I lay out the broad details of Silcox and Muller’s argument (drawing also on the online appendix to their communication) as to what the presented data show and what they do not show. In a follow-up post, my focus will be on what we can infer from the conduct of the authors of the disputed 1993 paper and 2006 corrigendum in their exchanges with peer reviewers, journal editors, and the scientific community. Then, I’ll have at least one more post discussing the issues raised by the Nature news story and the related Nature editorial on the DOE’s procedures for dealing with alleged misconduct [7].

Does the punishment fit the crime? Luk Van Parijs has his day in court.

Earlier this month, the other shoe finally dropped on the Luk Van Parijs case.

You may recall that Van Parijs, then an associate professor of biology at MIT, made headlines back in October of 2005 when MIT fired him after spending nearly a year investigating charges that he had falsified and fabricated data and finding those charges warranted. We discussed the case as it was unfolding (here and here), and discussed also the “final action” by the Office of Research Integrity on the case (which included debarment from federal funding through December 21, 2013).

But losing the MIT position and five years’ worth of eligibility for federal funding (counting from when Van Parijs entered the Voluntary Exclusion Agreement with the feds) is not the extent of the formal punishment to be exacted for his crimes — hence the aforementioned other shoe. As well, the government filed criminal charges against Van Parijs and sought jail time.

As reported in a news story posted 28 June 2011 at Nature (“Biologist spared jail for grant fraud” by Eugenie Samuel Reich, doi:10.1038/474552a):

In February 2011, US authorities filed criminal charges against Van Parijs in the US District Court in Boston, citing his use of fake data in a 2003 grant application to the National Institutes of Health, based in Bethesda, Maryland. Van Parijs entered a guilty plea, and the government asked Judge Denise Casper for a 6-month jail term because of the seriousness of the fraud, which involved a $2-million grant. “We want to discourage other researchers from engaging in similar behaviour,” prosecutor Gregory Noonan, an assistant US attorney, told Nature.

On 13 June, Casper opted instead for six months of home detention with electronic monitoring, plus 400 hours of community service and a payment to MIT of $61,117 — restitution for the already-spent grant money that MIT had to return to the National Institutes of Health. She cited assertions from the other scientists that Van Parijs was truly sorry. “I believe that the remorse that you’ve expressed to them, to the probation office, and certainly to the Court today, is heartfelt and deeply held, and I don’t think it’s in any way contrived for this Court,” she said.

Let me pause for a moment to let you, my readers, roll your eyes or howl or do whatever else you deem appropriate to express your exasperation that Van Parijs’s remorse counts for anything in his sentencing.

Verily, it is not hard to become truly sorry once you have been caught doing bad stuff. The challenge is not to do the bad stuff in the first place. And, the actual level of remorse in Van Parijs’s heart does precisely nothing to mitigate the loss (in time and money, to name just two) suffered by other researchers relying on Van Parijs to make honest representations in his journal articles and grant proposals.

Still, there’s probably a relevant difference (not just ethically, but also pragmatically) between the scientist caught deceiving the community who gets why such deception is a problem and manifests remorse, and the scientist caught deceiving the community who doesn’t see what the big deal is (because surely everyone does this sort of thing, at least occasionally, to survive in the high-pressure environment). With the remorseful cheater, there might at least be some hope of rehabilitation.

Indeed, the article notes:

Luk Van Parijs was first confronted with evidence of data falsification by members of his laboratory in 2004, when he was an associate professor of biology at the Massachusetts Institute of Technology (MIT) in Cambridge. Within two days, he had confessed to several acts of fabrication and agreed to cooperate with MIT’s investigation.

A confession within two days of being confronted with the evidence is fairly swift. Other scientific cheaters in the headlines seem to dig their heels in and protest their innocence (or that the post-doc or grad student did it) for significantly longer than that.

Anyway, I think it’s reasonable for us to ask here what the punishment is intended to accomplish in a case like this. If the goal is something beyond satisfying our thirst for vengeance, then maybe we will find that the penalty imposed on Van Parijs is useful even if it doesn’t include jail time.

As it happens, one of the scientists who asked the judge in the case for clemency on his behalf suggests that jail time might be a penalty that actually discourages the participation of other members of the scientific community in rooting out fabrication and falsification. Of course, not everyone in the scientific community agrees:

[MIT biologist Richard] Hynes argued that scientific whistleblowers might be reluctant to come forwards if they thought their allegations might result in jail for the accused.

But that is not how the whistleblowers in this case see it. One former member of Van Parijs’ MIT lab, who spoke to Nature on condition of anonymity, says he doesn’t think the prospect of Van Parijs’ imprisonment would have deterred the group from coming forwards. Nor does he feel the punishment is adequate. “Luk’s actions resulted in many wasted years as people struggled to regain their career paths. How do you measure the cost to the trainees when their careers have been derailed and their reputations brought into question?” he asks. The court did not ask these affected trainees for their statements before passing sentence on Van Parijs.

This gets into a set of questions we’ve discussed before:

I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, I suspect neither of these options really address the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community. …

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

Possibly we need to recognize that it’s an empirical question what constellation of penalties (including jail time) encourages or discourages whistleblowing — and to deploy some social scientists to get reliable empirical data that might usefully guide decisions about institutional structures of rewards and penalties that will best encourage the kinds of individual behaviors that lead to robust knowledge-building activities and effectively coordinated knowledge-building communities.

But, it’s worth noting that even though he won’t be doing jail time, Van Parijs doesn’t escape without punishment.

He will be serving the same amount of time under home detention (with electronic monitoring) as he would have served in jail if the judge had given the sentence the government was asking for. In other words, he is not “free” for those six months. (Indeed, assuming he serves this home detention in the home shared by his wife and their three young children, I reckon there is a great deal of work that he might be called on to do with respect to child care and household chores, work that he might escape in a six-month jail sentence.)

Let’s not forget that it costs money to incarcerate people. The public picks up the tab for those expenses. Home detention almost certainly costs the public less. And, Van Parijs is probably less in a position to reoffend during his home detention, even if he slipped out of his ankle monitor, than is the guy who robs convenience stores. What university is going to be looking at his data?

Speaking of the public’s money, recall that another piece of the sentence is restitution — paying back to MIT the $61,117 that MIT spent when it returned Van Parijs’s grant money to NIH. Since Van Parijs essentially robbed the public of the grant money (by securing it with lies and/or substituting lies for the honest scientific results the grant-supported research was supposed to be generating), it is appropriate that Van Parijs dip into his own pocket to pay this money back.

It’s a little more complicated, since he needs to pay MIT back. MIT seems to have recognized that paying the public back as soon as the problem was established was the right thing to do, or a good way to reassure federal funding agencies and the public that universities like MIT take their obligations to the public very seriously, or both. A judgment that doesn’t make MIT eat that loss, in turn, should encourage other universities that find themselves in similar situations to step up right away and make things right with the funding agencies.

And, in recognition that the public may have been hurt by Van Parijs’s deception beyond the monetary cost of it, Van Parijs will be doing 400 hours of community service. I’m inclined to believe that given the current fiscal realities of federal, state, and local governments, there is some service the community needs — and doesn’t have adequate funds to pay for — that Van Parijs might provide in those 400 hours. Perhaps it will not be a service he finds intellectually stimulating to provide, but that’s part of what makes it punishment.

Undoubtedly, there are members of the scientific community or of the larger public that will feel that this punishment just isn’t enough — that Van Parijs committed crimes against scientific integrity that demand harsher penalties.

Pragmatically, though, I think we need to ask what it would cost to secure those penalties. We cannot ignore the costs to the universities and to the federal agencies to conduct their investigations (here Van Parijs confessed rather than denying the charges and working to obstruct the fact-finding), or to prosecutors to go to trial (here again, Van Parijs pled guilty rather than mounting a vigorous defense). Maybe there was a time when there were ample resources to spend on full-blown investigations and trials of this sort, but that time ain’t now.

And, we might ask what jailing Van Parijs would accomplish beyond underlining that fabrication and falsification on the public’s dime is a very bad thing to do.

Would jail time make it harder for Van Parijs to find another position within the tribe of science than it will already be for him? (Asked another way, would being sentenced to home detention take any of the stink off the verdict of fraud against him?) I reckon the convicted fraudster scientist has a harder time finding a job than your average ex-con — and that scientists who feel his punishment is not enough can lobby the rest of their scientific community to keep a skeptical eye on Van Parijs (should he publish more papers, apply for jobs within the tribe of science, or what have you).