Does the punishment fit the crime? Luk Van Parijs has his day in court.

Earlier this month, the other shoe finally dropped on the Luk Van Parijs case.

You may recall that Van Parijs, then an associate professor of biology at MIT, made headlines back in October of 2005 when MIT fired him after spending nearly a year investigating charges that he had falsified and fabricated data and finding those charges warranted. We discussed the case as it was unfolding (here and here), and discussed also the “final action” by the Office of Research Integrity on the case (which included debarment from federal funding through December 21, 2013).

But losing the MIT position and five years’ worth of eligibility for federal funding (counting from when Van Parijs entered the Voluntary Exclusion Agreement with the feds) is not the extent of the formal punishment to be exacted for his crimes — hence the aforementioned other shoe: the government also filed criminal charges against Van Parijs and sought jail time.

As reported in a news story posted 28 June 2011 at Nature (“Biologist spared jail for grant fraud” by Eugenie Samuel Reich, doi:10.1038/474552a):

In February 2011, US authorities filed criminal charges against Van Parijs in the US District Court in Boston, citing his use of fake data in a 2003 grant application to the National Institutes of Health, based in Bethesda, Maryland. Van Parijs entered a guilty plea, and the government asked Judge Denise Casper for a 6-month jail term because of the seriousness of the fraud, which involved a $2-million grant. “We want to discourage other researchers from engaging in similar behaviour,” prosecutor Gregory Noonan, an assistant US attorney, told Nature.

On 13 June, Casper opted instead for six months of home detention with electronic monitoring, plus 400 hours of community service and a payment to MIT of $61,117 — restitution for the already-spent grant money that MIT had to return to the National Institutes of Health. She cited assertions from the other scientists that Van Parijs was truly sorry. “I believe that the remorse that you’ve expressed to them, to the probation office, and certainly to the Court today, is heartfelt and deeply held, and I don’t think it’s in any way contrived for this Court,” she said.

Let me pause for a moment to let you, my readers, roll your eyes or howl or do whatever else you deem appropriate to express your exasperation that Van Parijs’s remorse counts for anything in his sentencing.

Verily, it is not hard to become truly sorry once you have been caught doing bad stuff. The challenge is not to do the bad stuff in the first place. And, the actual level of remorse in Van Parijs’s heart does precisely nothing to mitigate the loss (in time and money, to name just two) suffered by other researchers relying on Van Parijs to make honest representations in his journal articles and grant proposals.

Still, there’s probably a relevant difference (not just ethically, but also pragmatically) between the scientist caught deceiving the community who gets why such deception is a problem and manifests remorse and the scientist caught deceiving the community who doesn’t see what the big deal is (because surely everyone does this sort of thing, at least occasionally, to survive in the high-pressure environment). With the remorseful cheater, there might at least be some hope of rehabilitation.

Indeed, the article notes:

Luk Van Parijs was first confronted with evidence of data falsification by members of his laboratory in 2004, when he was an associate professor of biology at the Massachusetts Institute of Technology (MIT) in Cambridge. Within two days, he had confessed to several acts of fabrication and agreed to cooperate with MIT’s investigation.

A confession within two days of being confronted with the evidence is fairly swift. Other scientific cheaters in the headlines seem to dig their heels in and protest their innocence (or that the post-doc or grad student did it) for significantly longer than that.

Anyway, I think it’s reasonable for us to ask here what the punishment is intended to accomplish in a case like this. If the goal is something beyond satisfying our thirst for vengeance, then maybe we will find that the penalty imposed on Van Parijs is useful even if it doesn’t include jail time.

As it happens, one of the scientists who asked the judge in the case for clemency on his behalf suggests that jail time might be a penalty that actually discourages the participation of other members of the scientific community in rooting out fabrication and falsification. Of course, not everyone in the scientific community agrees:

[MIT biologist Richard] Hynes argued that scientific whistleblowers might be reluctant to come forwards if they thought their allegations might result in jail for the accused.

But that is not how the whistleblowers in this case see it. One former member of Van Parijs’ MIT lab, who spoke to Nature on condition of anonymity, says he doesn’t think the prospect of Van Parijs’ imprisonment would have deterred the group from coming forwards. Nor does he feel the punishment is adequate. “Luk’s actions resulted in many wasted years as people struggled to regain their career paths. How do you measure the cost to the trainees when their careers have been derailed and their reputations brought into question?” he asks. The court did not ask these affected trainees for their statements before passing sentence on Van Parijs.

This gets into a set of questions we’ve discussed before:

I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, I suspect neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community. …

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

Possibly we need to recognize that it’s an empirical question what constellation of penalties (including jail time) encourages or discourages whistleblowing — and to deploy some social scientists to get reliable empirical data that might usefully guide decisions about institutional structures of rewards and penalties that will best encourage the kinds of individual behaviors that lead to robust knowledge-building activities and effectively coordinated knowledge-building communities.

But, it’s worth noting that even though he won’t be doing jail time, Van Parijs doesn’t escape without punishment.

He will be serving the same amount of time under home detention (with electronic monitoring) as he would have served in jail if the judge had given the sentence the government was asking for. In other words, he is not “free” for those six months. (Indeed, assuming he serves this home detention in the home shared by his wife and their three young children, I reckon there is a great deal of work that he might be called on to do with respect to child care and household chores, work that he might escape in a six-month jail sentence.)

Let’s not forget that it costs money to incarcerate people. The public picks up the tab for those expenses. Home detention almost certainly costs the public less. And, Van Parijs is probably less in a position to reoffend during his home detention, even if he slipped out of his ankle monitor, than is the guy who robs convenience stores. What university is going to be looking at his data?

Speaking of the public’s money, recall that another piece of the sentence is restitution — paying back to MIT the $61,117 that MIT spent when it returned Van Parijs’s grant money to NIH. Since Van Parijs essentially robbed the public of the grant money (by securing it with lies and/or substituting lies for the honest scientific results the grant-supported research was supposed to be generating), it is appropriate that Van Parijs dip into his own pocket to pay this money back.

It’s a little more complicated, since he needs to pay MIT back. MIT seems to have recognized that paying the public back as soon as the problem was established was the right thing to do, or a good way to reassure federal funding agencies and the public that universities like MIT take their obligations to the public very seriously, or both. A judgment that doesn’t make MIT eat that loss, in turn, should encourage other universities that find themselves in similar situations to step up right away and make things right with the funding agencies.

And, in recognition that the public may have been hurt by Van Parijs’s deception beyond the monetary cost of it, Van Parijs will be doing 400 hours of community service. I’m inclined to believe that given the current fiscal realities of federal, state, and local governments, there is some service the community needs — and doesn’t have adequate funds to pay for — that Van Parijs might provide in those 400 hours. Perhaps it will not be a service he finds intellectually stimulating to provide, but that’s part of what makes it punishment.

Undoubtedly, there are members of the scientific community or of the larger public who will feel that this punishment just isn’t enough — that Van Parijs committed crimes against scientific integrity that demand harsher penalties.

Pragmatically, though, I think we need to ask what it would cost to secure those penalties. We cannot ignore the costs to the universities and to the federal agencies to conduct their investigations (here Van Parijs confessed rather than denying the charges and working to obstruct the fact-finding), or to prosecutors to go to trial (here again, Van Parijs pled guilty rather than mounting a vigorous defense). Maybe there was a time when there were ample resources to spend on full-blown investigations and trials of this sort, but that time ain’t now.

And, we might ask what jailing Van Parijs would accomplish beyond underlining that fabrication and falsification on the public’s dime is a very bad thing to do.

Would jail time make it any harder for Van Parijs to find another position within the tribe of science than it already will be? (Asked another way, would being sentenced to home detention take any of the stink off the verdict of fraud against him?) I reckon the convicted fraudster scientist has a harder time finding a job than your average ex-con — and that scientists who feel his punishment is not enough can lobby the rest of their scientific community to keep a skeptical eye on Van Parijs (should he publish more papers, apply for jobs within the tribe of science, or what have you).

Dispatch from PSA 2010: Symposium session on ClimateGate.

The Philosophy of Science Association Biennial Meeting included a symposium session on the release of hacked e-mails from the Climate Research Unit at the University of East Anglia. Given that we’ve had occasion to discuss ClimateGate here before, I thought I’d share my notes from this session.

Symposium: The CRU E-mails: Perspectives from Philosophy of Science.

Naomi Oreskes (UC San Diego) gave a talk called “Why We Resist the Results of Climate Science.”

She mentioned the attention brought to the discovery of errors in the IPCC report, noting that while mistakes are obviously to be avoided, it would be amazing for there to be a report that ran thousands of pages that did not have some mistakes. (Try to find a bound dissertation — generally only in the low hundreds of pages — without at least one typo.) The public’s assumption, though, was that these mistakes, once revealed, were smoking guns — a sign that something improper must have occurred.

Oreskes noted the boundary scientists of all sorts (including climate scientists) have tried to maintain between the policy-relevant and the policy-prescriptive. This is a difficult boundary to police, though, as climate science has an inescapable moral dimension. To the extent that climate change is driven by consumption (especially but not exclusively the burning of fossil fuels), we have a situation where the people reaping the benefits are not the ones who will be paying for that benefit (since people in the developed world will have the means to respond to the effects of climate change and those in the developing world will not). The situation seems to violate our expectations of intergenerational equity (since future generations will have to cope with the consequences of the consumption of past and current generations), as well as of inter-specific equity (since the species likely to go extinct in response to climate change are not the ones contributing the most to climate change).

The moral dimension of climate change, though, doesn’t make this a scientific issue about which the public feels a sense of clarity. Rather, the moral issues are such that Americans feel like their way of life is on trial. Those creating the harmful effects have done something wrong, even if it was accidental.

And this is where the collision occurs: Americans believe they are good; climate science seems to be telling them that they are bad. (To the extent that people strongly equate capitalism with democracy and the American way of life, that’s an issue too, given that consumption and growth are part of the problem.)

The big question Oreskes left us with, then, is how else to frame the need for changes in behavior, so that such a need would not make Americans so defensive that they would reflexively reject the science. I’m not sure the session ended with a clear answer to that question.

* * * * *

Wendy S. Parker (Ohio University) gave a talk titled “The Context of Climate Science: Norms, Pressures, and Progress.” A particular issue she took up was the ideal of transparency and how it came up in the context of climate scientists’ interactions with each other and with the public.

Parker noted that there had been numerous requests for access to raw data by people climate scientists did not recognize as part of the climate science community. The CRU denied many such requests, and the ClimateGate emails made it clear that the scientists generally didn’t want to cooperate with these requests.

Here, Parker observed that while we tend to look favorably on transparency, we probably need to say more about what transparency should amount to. Are we talking about making something available and open to scrutiny (i.e., making “transparency” roughly the opposite of “secrecy”)? Are we talking about making something understandable or usable, perhaps by providing fully explained nontechnical accounts of scientific methods and findings for the media (i.e., making “transparency” roughly the opposite of “opacity”)?

What exactly do we imagine ought to be made available? Research methods? Raw and/or processed data? Computer code? Lab notebooks? E-mail correspondence?

To whom ought the materials to be made available? Other members of one’s scientific community seems like a good bet, but how about members of the public at large? (Or, for that matter, members of industry or of political lobbying groups?)

And, for that matter, why do we value transparency? What makes it important? Is it primarily a matter of ensuring the quality of the shared body of scientific knowledge, and of improving the rate of scientific progress? Or, do we care about transparency as a matter of democratic accountability? As Parker noted, these values might be in conflict. (As well, she mentioned, transparency might conflict with other social values, like the privacy of human subjects.)

Here, while the public imputed nefarious motives to the climate researchers, the scientists themselves viewed some of the requests for access to their raw data as attempts by people with political motivations to obstruct the progress (or acceptance) of their research. It was not that the scientists feared that bad science would be revealed if the data were shared, but rather that they worried that yahoos from outside the scientific community were going to waste their time, or worse, cherry-pick the shared data to make allegations to which the scientists would then have to respond, wasting even more time.

In the numerous investigations that followed on the heels of the leak of stolen CRU e-mails, about the strongest charge against the climate scientists involved that stood was that they failed to display “the proper degree of openness”, and that they seemed to have an ethos of minimal compliance (or occasionally non-compliance) with regard to Freedom of Information Act (FOIA) requests. They were chided that the requirements of FOIA must not be seen as impositions, but as part of their social contract with the public (and something likely to make their scientific knowledge better).

Compliance, of course, takes resources (one of the most important of these being time), so it’s not free. Indeed, it’s hard not to imagine that at least some FOIA requests to climate scientists had “unintended consequences” (in terms of the expenditure of time and other resources) on climate scientists that were precisely what the requesters intended.

However, as Parker noted, FOIA originated with the intent of giving citizens access to the workings of their government — imposing it on science and scientists is a relatively new move. It is true that many scientists (although not all) conduct publicly funded research, and thereby incur some obligations to the public. But there’s a question of how far this should go — ought every bit of data generated with the aid of any government grant to be FOIA-able?

Parker discussed the ways that FOIA seems to demand an openness that doesn’t quite fit with the career reward structures currently operating within science. Yet ClimateGate and its aftermath, and the heightened public scrutiny of, and demands for openness from, climate scientists in particular, seem to be driving (or at least putting significant pressure upon) the standards for data and code sharing in climate science.

I got to ask one of the questions right after Parker’s talk. I wondered whether the level of public scrutiny on climate scientists might be enough to drive them into the arms of the “open science” camp — which would, of course, require some serious rethinking of the scientific reward structures and the valorization of competition over cooperation. As we’ve discussed on this blog on many occasions, institutional and cultural change is hard. If openness from climate scientists is important enough to the public, though, could the public decide that it’s worthwhile to put up the resources necessary to support this kind of change in climate science?

I guess it would require a public willing to pay for the goodies it demands.

* * * * *

The next talk, by Kristin Shrader-Frechette (University of Notre Dame), was titled “Scientifically Legitimate Ways to Cook and Trim Data: The Hacked and Leaked Climate Emails.”

Shrader-Frechette discussed what statisticians (among others) have to say about conditions in which it is acceptable to leave out some of your data (and indeed, arguably misleading to leave it in rather than omitting it). There was maybe not as much unanimity here as one might like.

There’s general agreement that data trimming in order to make your results fit some predetermined theory is unacceptable. There’s less agreement about how to deal with outliers. Some say that deleting them is probably OK (although you’d want to be open that you have done so). On the other hand, many of the low probability/high consequence events that science would like to get a handle on are themselves outliers.
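To make the practice under discussion concrete, here is a minimal sketch (my own illustration in Python, not anything presented in the talk) of one standard convention, Tukey’s interquartile-range rule, for flagging candidate outliers. The statistics are routine; the contested part is whether the flagged points get dropped and whether that choice is disclosed.

import numpy as np

def flag_outliers_iqr(values, k=1.5):
    # Flag values outside [Q1 - k*IQR, Q3 + k*IQR]; 1.5 is Tukey's usual multiplier.
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

# Hypothetical measurements, invented purely for illustration.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 14.7, 10.0])
mask = flag_outliers_iqr(data)
print("flagged as possible outliers:", data[mask])   # [14.7]
print("retained:", data[~mask])
# Whether to exclude the flagged value -- and how openly to report that
# choice -- is the methodological and ethical judgment the panel debated.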

So when and how to trim data is one of those topics where it looks like scientists are well advised to keep talking to their scientific peers, the better not to mess it up.

Of the details in the leaked CRU e-mails, one that was frequently identified as a smoking gun indicating scientific shenanigans was the discussion of the “trick” to “hide the decline” in the reconstruction of climatic temperatures using proxy data from tree-rings. Shrader-Frechette noted that what was being “hidden” was not a decline in temperatures (as measured instrumentally) but rather in the temperatures reconstructed from one particular proxy — and that other proxies the climate scientists were using didn’t show this decline.

The particular incident raises a more general methodological question: scientifically speaking, is it better to include the data from proxies (once you have reason to believe it’s bad data) in your graphs? Is including it (or leaving it out) best seen as scrupulous honesty or as dishonesty?

And, does the answer differ if the graph is intended for use in an academic, bench-science presentation or a policy presentation (where it would be a very bad thing to confuse your non-expert audience)?

As she closed her talk, Shrader-Frechette noted that welfare-affecting science cannot be treated merely as pure science. She also mentioned that while FOIA applies to government-funded science, it does not apply to industry-funded science — which means that the “transparency” available to the public is pretty asymmetrical (and that industry scientists are unlikely to have to devote their time to responding to requests from yahoos for their raw data).

* * * * *

Finally, James McAllister (University of Leiden) gave a talk titled “Errors, Blunders, and the Construction of Climate Change Facts.” He spoke of four epistemic gaps climate scientists have to bridge: between distinct proxy data sources, between proxy and instrumental data, between historical time series (constructed of instrumental and proxy data) and predictive scenarios, and between predictive scenarios and reality. These epistemic gaps can be understood in the context of the two broad projects climate science undertakes: the reconstruction of past climate variation, and the forecast of the future.

As you might expect, various climate scientists have had different views about which kinds of proxy data are most reliable, and about how the different sorts of proxies ought to be used in reconstructions of past climate variation. The leaked CRU e-mails include discussions where climate scientists dedicate themselves to finding the “common denominator” in this diversity of expert opinion — not just because such a common denominator might be expected to be closer to the objective reality of things, but also because finding common ground in the diversity of opinion could be expected to enhance the core group’s credibility. Another effect, of course, is that the common denominator is also denied to outsiders, undermining their credibility (and effectively excluding them as outliers).

McAllister noted that the emails simultaneously revealed signs of internal disagreement, and of a reaching for balance. Some of the scientists argued for “wise use” of proxies and voiced judgments about how to use various types of data.

The data, of course, cannot actually speak for themselves.

As the climate scientists worked to formulate scenario-based forecasts that public policy makers would be able to use, they needed to grapple with the problems of how to handle the link between their reconstructions of past climate trends and their forecasts. They also had to figure out how to handle the link between their forecasts and reality. The e-mails indicate that some of the scientists were pretty resistant to this latter linkage — one asserted that they were “NOT supposed to be working with the assumption that these scenarios are realistic,” rather using them as internally consistent “what if?” storylines.

One thing the e-mails don’t seem to contain is any explicit discussion of what would count as an ad hoc hypothesis and why avoiding ad hoc hypotheses would be a good thing. This doesn’t mean that the climate scientists didn’t avoid them, just that it was not a methodological issue they felt they needed to be discussing with each other.

This was a really interesting set of talks, and I’m still mulling over some of the issues they raised for me. When those ideas are more than half-baked, I’ll probably write something about them here.

Harvard Dean sheds (a little) more light on Hauser misconduct case.

Today ScienceInsider gave an update on the Marc Hauser misconduct case, one that seems to support the accounts of other researchers in the Hauser lab. From ScienceInsider:

In an e-mail sent earlier today to Harvard University faculty members, Michael Smith, dean of the Faculty of Arts and Sciences (FAS), confirms that cognitive scientist Marc Hauser “was found solely responsible, after a thorough investigation by a faculty member investigating committee, for eight instances of scientific misconduct under FAS standards.”

ScienceInsider reprints the Dean’s email in its entirety. Here’s the characterization of the nature of Hauser’s misconduct from that email:

Continue reading

Data release, ethics, and professional survival.

In recent days, there have been signs on the horizon of an impending blogwar. Prof-like Substance fired the first volley:

[A]lmost all major genomics centers are going to a zero-embargo data release policy. Essentially, once the sequencing is done and the annotation has been run, the data is on the web in a searchable and downloadable format.

Yikes.

How many other fields put their data directly on the web before those who produced it have the opportunity to analyze it? Now, obviously no one is going to yank a genome paper right out from under the group working on it, but what about comparative studies? What about searching out specific genes for multi-gene phylogenetics? Where is the line for what is permissible to use before the genome is published? How much of a grace period do people get with data that has gone public, but that they* paid for?

—–
*Obviously we are talking about grant-funded projects, so the money is tax payer money not any one person’s. Nevertheless, someone came up with the idea and got it funded, so there is some ownership there.

Then, Mike the Mad Biologist fired off this reply:

Several of the large centers, including the one I work at, are funded by NIAID to sequence microorganisms related to human health and disease (analogous programs for human biology are supported by NHGRI). There’s a reason why NIH is hard-assed about data release:

Funding agencies learned this the hard way, as too many early sequencing centers resembled ‘genomic roach motels’: DNA checks in, but sequence doesn’t check out.

The funding agencies’ mission is to improve human health (or some other laudable goal), not to improve someone’s tenure package. This might seem harsh unless we remember how many of these center-based genome projects are funded. The investigator’s grant is not paying for the sequencing. In the case of NIAID, there is a white paper process. Before NIAID will approve the project, several goals have to be met in the white paper (Note: while I’m discussing NIAID, other agencies have a similar process, if different scientific objectives).

Obviously, the organism and collection of strains to be sequenced have to be relevant to human health. But the project also must have significant community input. NIAID absolutely does not want this to be an end-run around R01 grants. Consequently, these sequencing projects should not be a project that belongs to a single lab, and which lacks involvement by others in the subdiscipline (“this looks like an R01” is a pejorative). It also has to provide a community resource. In other words, data from a successful project should be used rapidly by other groups: that’s the whole point (otherwise, write an R01 proposal). The white paper should also contain a general description of the analysis goals of the project (and, ideally, who in the collaborative group will address them). If you get ‘scooped’, that’s, in part, a project planning issue.

NIAID, along with other agencies and institutes, is pushing hard for rapid public release. Why does NIAID get to call the shots? Because it’s their money.

Which brings me to the issue of ‘whose’ genomes these are. The answer is very simple: NIH’s (and by extension, the American people’s). As I mentioned above, NIH doesn’t care about your tenure package, or your dissertation (given that many dissertations and research programs are funded in part or in their entirely by NIH and other agencies, they’re already being generous†). What they want is high-quality data that are accessible to as many researchers as possible as quickly as possible. To put this (very) bluntly, medically important data should not be held hostage by career notions. That is the ethical position.

Prof-like Substance hurled back a hefty latex pillow of a rejoinder:

People feel like anything that is public is free to use, and maybe they should. But how would you feel as the researcher who assembled a group of researchers from the community, put a proposal together, drummed up support from the community outside of your research team, produced and purified the sample to be sequenced (which is not exactly just using a Sigma kit in a LOT of cases), dealt with the administration issues that crop up along the way, pushed the project through (another aspect woefully under appreciated) the center, got your research community together once they data were in hand to make sense of it all and herded the cats to get the paper together? Would you feel some ownership, even if it was public dollars that funded the project?

Now what if you submitted the manuscript and then opened your copy of Science and saw the major finding that you centered the genome paper around has been plucked out by another group and publish in isolation? Would you say, “well, the data’s publicly available, what’s unscrupulous about using it?”

[L]et’s couch this in the reality of the changing technology. If your choice is to have the sequencing done for free, but risk losing it right off the machine, OR to do it with your own funds (>$40,000) and have exclusive right to it until the paper is published, what are you going to choose? You can draw the line regarding big and small centers or projects all you want, but it is becoming increasingly fuzzy.

This is all to get back to my point that if major sequencing centers want to stay ahead of the curve, they have to have policies that are going to encourage, not discourage, investigators to use them.

It’s fair to say that I don’t know from genomics. However, I think the ethical landscape of this disagreement bears closer examination.

Continue reading

The value of (unrealistic) case studies in ethics education.

Dr. Isis posted a case study about a postdoc’s departure from approved practices and invited her readers to discuss it. DrugMonkey responded by decrying the ridiculousness of case studies far more black and white than what scientists encounter in real life:

This is like one of those academic misconduct cases where they say “The PI violates the confidence of review, steals research ideas that are totally inconsistent with anything she’d been doing before, sat on the paper review unfairly, called the editor to badmouth the person who she was scooping and then faked up the data in support anyway. Oh, and did we mention she kicked her cat?”.

This is the typical and useless fare at the ethical training course. Obvious, overwhelmingly clear cases in which the black hats and white hats are in full display and provide a perfect correlation with malfeasance.

The real world is messier and I think that if we are to make any advances in dealing with the real problems, the real cases of misconduct and the real cases of dodgy animal use in research, we need to cover more realistic scenarios.

I’m sympathetic to DrugMonkey’s multiple complaints: that real life is almost always more complicated than the canned case study; that hardly anyone puts in the years of study and training to become a scientist if her actual career objective is to be a super-villain; and especially that the most useful sort of ethics training for the scientist will be in day to day conversation with scientific mentors and colleagues rather than in isolated ethics courses, training modules, or workshops.

However, used properly, I think that case studies — even unrealistic ones — play a valuable role in ethics education.

Continue reading

IGERT meeting: what do grown-up interdisciplinary scientists do for a living?

One of the most interesting sessions at the NSF IGERT 2010 Project Meeting was a panel of men and women who participated in the IGERT program as students and are now working in a variety of different careers. The point of the panel was to hear about the ways that they felt their experiences as IGERT trainees prepared them for their current positions, as well as to identify aspects of their current jobs where more preparation might have been helpful.

The session was moderated by Judy Giordan (President and Co-Founder, Visions in Education, Inc.). The IGERT alums who participated in the panel were:

Fabrisia Ambrosio (University of Pittsburgh)
Abigail Anthony (Environment Northeast, a non-profit)
Edward Hederick (Congressional Fellow)
Lisa Kemp (Co-founder, Ablitech, Inc.)
Henry Lin (Amgen, Inc.)
Yaniria Sanchez de Leon (University of Puerto Rico)
Andrew Todd (U.S. Geological Survey)
Marie Tripp (Intel)

What helped you prepare for your current role?

Continue reading

IGERT meeting: some general thoughts.

About three weeks ago, I was in Washington, D.C. for the NSF IGERT 2010 Project Meeting. I was invited to speak on a panel on Digital Science (with co-panelists Chris Impey, Moshe Pritzker, and Jean-Claude Bradley, who blogged about it), and later in the meeting I helped to facilitate some discussions of ethics case studies.

I’ll have more to say about our panel in the next post, but first I wanted to share some broad observations about the meeting.

IGERT stands for “Integrative Graduate Education and Research Traineeship”, and the program is described thusly:

Continue reading

Ask Dr. Free-Ride: The university and the pirate.

Recently in my inbox, I found a request for advice unlike any I’d received before. Given the detail in the request, I don’t trust myself to paraphrase it. As you’ll see, I’ve redacted the names of the people, university, and government agency involved. I have, however, kept the rest of the query (including the original punctuation) intact.

Continue reading

Tempering justice with mercy: the question of youthful offenders in the tribe of science.

Recently, I wrote a post about two researchers at the University of Alabama at Birmingham (UAB) who were caught falsifying data in animal studies of immune suppressing drugs. In the post, I conveyed that this falsification was very bad indeed, and examined some of the harm it caused. I also noted that the Office of Research Integrity (ORI) meted out somewhat different penalties to the principal investigator (ten-year voluntary exclusion from government funding and from serving in any advisory capacity with the PHS) and to her postdoc (three-year voluntary exclusion from government funding and PHS advisory roles). Moreover, UAB had left open the possibility that the postdoc might work on other people’s research projects under very strict mentoring. (Owing to the ORI ruling, these research projects would have to be ones funded by someone other than the U.S. government, however.)

On that post, commenter Paul Browne disagreed with my suggestion that rehabilitation of the postdoc in this case might be an end worth seeking:

“While such an obvious departure from an experimental protocol — especially an in an experiment involving animal use — isn’t much of an ethical gray area, I think there’s something to be said for treating early-career scientists as potentially redeemable in the aftermath of such ethical screw-ups.”

You have got to be kidding.

We’re not talking about an honest mistake, or deviating from an approved protocol with the best of intentions, or excluding a few outliers from the analysis but rather a decade of repeatedly lying to their funders, their IACUC and to other scientists working in their field.

What they did almost makes me wish that science has a ceremony similar to the old military drumming out.

At the very least they should both be charged with fraud, and since they presumably included their previous falsified results in support of NIH grant applications it shouldn’t be too hard to get a conviction.

Believe me, I understand where Paul is coming from. Given the harm that cheaters can do to the body of shared knowledge on which the scientific community relies, and to the trust within the scientific community that makes coordination of effort possible, I understand the impulse to remove cheaters from the community once and for all.

But this impulse raises a big question: Can a scientist who has made an ethical misstep be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community? In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with the offenders — especially when the offender turns out to be a close colleague or a trainee.

Continue reading