A question for the PIs: How involved do you get in your trainees’ results?

In the wake of this post that touched on recently released documents detailing investigations into Bengü Sezen’s scientific misconduct, and that noted that a C & E News article described Sezen as a “master of deception”, I had an interesting chat on the Twitters:

@UnstableIsotope (website) tweeted:

@geernst @docfreeride I scoff at the idea that Sezen was a master at deception. She lied a lot but plenty of opportunities to get caught.

@geernst (website) tweeted back:

@UnstableIsotope Maybe evasion is a more accurate word.

@UnstableIsotope:

@geernst I’d agree she was a master of evasion. But she was caught be other group members but sounds like advisor didn’t want to believe it.

@docfreeride (that’s me!):

@UnstableIsotope @geernst Possible that she was master of deception only in environment where people didn’t guard against being deceived?

@UnstableIsotope:

@docfreeride @geernst I agree ppl didn’t expect deception, my read suggests she was caught by group members but protected by advisor.

@UnstableIsotope:

@docfreeride @geernst The advisor certainly didn’t expect deception and didn’t encourage but didn’t want to believe evidence

@docfreeride:

@UnstableIsotope @geernst Not wanting to believe the evidence strikes me as a bad fit with “being a scientist”.

@UnstableIsotope:

@docfreeride @geernst Yes, but it is human. Not wanting to believe your amazing results are not amazing seems like a normal response to me.

@geernst:

@docfreeride @UnstableIsotope I agree. Difficult to separate scientific objectivity from personal feelings in those circumstances.

@docfreeride:

@geernst @UnstableIsotope But isn’t this exactly the argument for not taking scrutiny of your results, data, methods personally?

@UnstableIsotope:

@docfreeride @geernst Definitely YES. I look forward to people repeating my experiments. I’m nervous if I have the only result.

@geernst:

@docfreeride @UnstableIsotope Couldn’t agree more.

This conversation prompted a question I’d like to ask the PIs. (Trainees: I’m going to pose the complementary question to you in the very next post!)

In your capacity as PI, your scientific credibility (and likely your name) is tied to all the results that come out of your research group — whether they are experimental measurements, analyses of measurements, modeling results, or whatever else it is that scientists of your stripe regard as results. What do you do to ensure that the results generated by your trainees are reliable?

Now, it may be the case that what you see as the appropriate level of involvement/quality control/”let me get up in your grill while you repeat that measurement for me” would still not have been enough to deter — or to detect — a brazen liar. If you want to talk about that in the comments, feel free.

Commenting note: You may feel more comfortable commenting with a pseudonym for this particular discussion, and that’s completely fine with me. However, please pick a unique ‘nym and keep it for the duration of this discussion, so we’re not in the position of trying to sort out which “Anonymous” is which. Also, if you’re a regular commenter who wants to go pseudonymous for this discussion, you’ll probably want to enter something other than your regular email address in the commenting form — otherwise, your Gravatar may give your other identity away!

What are honest scientists to do about a master of deception?

A new story posted at Chemical & Engineering News updates us on the fraud case of Bengü Sezen (who we discussed here, here, and here at much earlier stages of the saga).

William G. Schultz notes that documents released (PDF) by the Department of Health and Human Services (which houses the Office of Research Integrity) detail some really brazen misconduct on Sezen’s part in her doctoral dissertation at Columbia University and in at least three published papers.

From the article:

The documents—an investigative report from Columbia and HHS’s subsequent oversight findings—show a massive and sustained effort by Sezen over the course of more than a decade to dope experiments, manipulate and falsify NMR and elemental analysis research data, and create fictitious people and organizations to vouch for the reproducibility of her results. …

A notice in the Nov. 29, 2010, Federal Register states that Sezen falsified, fabricated, and plagiarized research data in three papers and in her doctoral thesis. Some six papers that Sezen had coauthored with Columbia chemistry professor Dalibor Sames have been withdrawn by Sames because Sezen’s results could not be replicated. …

By the time Sezen received a Ph.D. degree in chemistry in 2005, under the supervision of Sames, her fraudulent activity had reached a crescendo, according to the reports. Specifically, the reports detail how Sezen logged into NMR spectrometry equipment under the name of at least one former Sames group member, then merged NMR data and used correction fluid to create fake spectra showing her desired reaction products.

Apparently, her results were not reproducible because those trying to reproduce them lacked her “hand skills” with Liquid Paper.

Needless to say, this kind of behavior is tremendously detrimental to scientific communities trying to build a body of reliable knowledge about the world. Scientists are at risk of relying on published papers that are based in wishes (and lies) rather than actual empirical evidence, which can lead them down scientific blind alleys and waste their time and money. Journal editors devoted resources to moving her (made-up) papers through peer review, and then had to devote more resources to dealing with their retractions. Columbia University and the U.S. government got to spend a bunch of money investigating Sezen’s wrongdoing — the latter expenditures unlikely to endear scientific communities to an already skeptical public. Even within the research lab where Sezen, as a grad student, was concocting her fraudulent results, her labmates apparently wasted a lot of time trying to reproduce her results, questioning their own abilities when they couldn’t.

And to my eye, one of the big problems in this case is that Sezen seems to have been the kind of person who projected confidence while lying her pants off:

The documents paint a picture of Sezen as a master of deception, a woman very much at ease with manipulating colleagues and supervisors alike to hide her fraudulent activity; a practiced liar who would defend the integrity of her research results in the face of all evidence to the contrary. Columbia has moved to revoke her Ph.D.

Worse, the reports document the toll on other young scientists who worked with Sezen: “Members of the [redacted] expended considerable time attempting to reproduce Respondent’s results. The Committee found that the wasted time and effort, and the onus of not being able to reproduce the work, had a severe negative impact on the graduate careers of three (3) of those students, two of whom [redacted] were asked to leave the [redacted] and one of whom decided to leave after her second year.”

In this matter, the reports echo sources from inside the Sames lab who spoke with C&EN under conditions of anonymity when the case first became public in 2006. These sources described Sezen as Sames’ “golden child,” a brilliant student favored by a mentor who believed that her intellect and laboratory acumen provoked the envy of others in his research group. They said it was hard to avoid the conclusion that Sames retaliated when other members of his group questioned the validity of Sezen’s work.

What I find striking here is that Sezen’s vigorous defense of her own personal integrity was sufficient, at least for a while, to convince her mentor that those questioning the results were in the wrong — not just incompetent to reproduce the work, but jealous and looking to cause trouble. And it’s deeply disappointing that this judgment may have been connected to the departure from their graduate program of the fellow graduate students who raised those questions.

How could this have been avoided?

Maybe a useful strategy would have been to treat questions about the scientific work (including its reproducibility) first and foremost as questions about the scientific work.

Getting results that others cannot reproduce is not prima facie evidence that you’re a cheater-pants. It may just mean that there was something weird going on with the equipment, or the reagents, or some other component of the experimental system when you did the experiment that yielded the exciting but hard to replicate results. Or, it may mean that the folks trying to replicate the results haven’t quite mastered the technique (which, in the case that they are your colleagues in the lab, could be addressed by working with them on their technique). Or, it may mean that there’s some other important variable in the system that you haven’t identified as important and so have not worked out (or fully described) how to control.

In this case, of course, it’s looking like the main reason that Sezen’s results were not reproducible was that she made them up. But casting the failure to replicate presumptively as one scientist’s mad skillz and unimpeachable integrity against another’s didn’t help get to the bottom of the scientific facts. It made the argument personal rather than putting the scientists involved on the same team in figuring out what was really going on with the scientific systems being studied.

Of all of the Mertonian norms imputed to the Tribe of Science, organized skepticism is probably the one nearest and dearest to most scientists’ basic understanding of how they get the knowledge-building job done. Figuring out what’s going on with particular phenomena in the world can be hard, not least because lining up solid evidence to support your conclusions requires identifying evidence that others trying to repeat your work can reliably obtain themselves. This is more than just a matter of making sure your results are robust. Rather, you want others to be able to reproduce your work so that you know you haven’t fooled yourself.

Organized skepticism, in other words, should start at home.

There is a risk of being too skeptical of your own results, and a risk of dismissing something important as noise because it doesn’t fit with what you expect to observe. However, the scientist who refuses to entertain the possibility that her work could be wrong — indeed, who regards questions about the details of her work as a personal affront — should raise a red flag for the rest of her scientific community, no matter what her career stage or her track record of brilliance to date.

In a world where every scientist’s findings are recognized as being susceptible to error, the first response to questions about findings might be to go back to the phenomena together, helping each other to locate potential sources of error and to avoid them. In such a world, the master of deception trying to ride personal reputation (or good initial impressions) to avoid scrutiny of his or her work will have a much harder time getting traction.

Evaluating scientific reports (and the reliability of the scientists reporting them).

One of the things scientific methodology has going for it (at least in theory) is a high degree of transparency. When scientists report findings to other scientists in the community (say, in a journal article), it is not enough for them to just report what they observed. They must give detailed specifications of the conditions in the field or in the lab — just how did they set up and run that experiment, choose their sample, make their measurement. They must explain how they processed the raw data they collected, giving a justification for processing it this way. And, in drawing conclusions from their data, they must anticipate concerns that the data might have been due to something other than the phenomenon of interest, or that the measurements might better support an alternate conclusion, and answer those objections.

A key part of transparency in scientific communications is showing your work. In their reports, scientists are supposed to include enough detailed information so that other scientists could set up the same experiments, or could follow the inferential chain from raw data to processed data to conclusions and see if it holds up to scrutiny.

Of course, scientists try their best to apply hard-headed scrutiny to their own results before they send the manuscript to the journal editors, but the whole idea of peer review, and indeed the communication around a reported result that continues after publication, is that the scientific community exercises “organized skepticism” in order to discern which results are robust and reflective of the system under study rather than wishful thinking or laboratory flukes. If your goal is accurate information about the phenomenon you’re studying, you recognize the value of hard questions from your scientific peers about your measurements and your inferences. Getting it right means catching your mistakes and making sure your conclusions are well grounded.

What sort of conclusions should we draw, then, when a scientist seems resistant to transparency, evasive in responding to concerns raised by peer reviewers, and indignant when mistakes are brought to light?

It’s time to revisit the case of Stephen Pennycook and his research group at Oak Ridge National Laboratory. In an earlier post I mused on the saga of this lab’s 1993 Nature paper [1] and its 2006 correction [2] (or “corrigendum” for the Latin fans), in light of allegations that the Pennycook group had manipulated data in another recent paper submitted to Nature Physics. (In addition to the coverage in the Boston Globe (PDF), the situation was discussed in a news article in Nature [3] and a Nature editorial [4].)

Now, it’s time to consider the recently uploaded communication by J. Silcox and D. A. Muller (PDF) [5] that analyzes the corrigendum and argues that a retraction, not a correction, was called for.

It’s worth noting that this communication was (according to a news story at Nature about how the U.S. Department of Energy handles scientific misconduct allegations [6]) submitted to Nature as a technical comment back in 2006 and accepted for publication “pending a reply by Pennycook.” Five years later, uploading the technical comment to arXiv.org makes some sense, since a communication that never sees the light of day doesn’t do much to further scientific discussion.

Given the tangle of issues at stake here, we’re going to pace ourselves. In this post, I lay out the broad details of Silcox and Muller’s argument (drawing also on the online appendix to their communication) as to what the presented data show and what they do not show. In a follow-up post, my focus will be on what we can infer from the conduct of the authors of the disputed 1993 paper and 2006 corrigendum in their exchanges with peer reviewers, journal editors, and the scientific community. Then, I’ll have at least one more post discussing the issues raised by the Nature news story and the related Nature editorial on the DOE’s procedures for dealing with alleged misconduct [7].


Does the punishment fit the crime? Luk Van Parijs has his day in court.

Earlier this month, the other shoe finally dropped on the Luk Van Parijs case.

You may recall that Van Parijs, then an associate professor of biology at MIT, made headlines back in October of 2005 when MIT fired him after spending nearly a year investigating charges that he had falsified and fabricated data and finding those charges warranted. We discussed the case as it was unfolding (here and here), and discussed also the “final action” by the Office of Research Integrity on the case (which included debarment from federal funding through December 21, 2013).

But losing the MIT position and five years’ worth of eligibility for federal funding (counting from when Van Parijs entered the Voluntary Exclusion Agreement with the feds) is not the extent of the formal punishment to be exacted for his crimes — hence the aforementioned other shoe. As well, the government filed criminal charges against Van Parijs and sought jail time.

As reported in a news story posted 28 June 2011 at Nature (“Biologist spared jail for grant fraud” by Eugenie Samuel Reich, doi:10.1038/474552a):

In February 2011, US authorities filed criminal charges against Van Parijs in the US District Court in Boston, citing his use of fake data in a 2003 grant application to the National Institutes of Health, based in Bethesda, Maryland. Van Parijs entered a guilty plea, and the government asked Judge Denise Casper for a 6-month jail term because of the seriousness of the fraud, which involved a $2-million grant. “We want to discourage other researchers from engaging in similar behaviour,” prosecutor Gregory Noonan, an assistant US attorney, told Nature.

On 13 June, Casper opted instead for six months of home detention with electronic monitoring, plus 400 hours of community service and a payment to MIT of $61,117 — restitution for the already-spent grant money that MIT had to return to the National Institutes of Health. She cited assertions from the other scientists that Van Parijs was truly sorry. “I believe that the remorse that you’ve expressed to them, to the probation office, and certainly to the Court today, is heartfelt and deeply held, and I don’t think it’s in any way contrived for this Court,” she said.

Let me pause for a moment to let you, my readers, roll your eyes or howl or do whatever else you deem appropriate to express your exasperation that Van Parijs’s remorse counts for anything in his sentencing.

Verily, it is not hard to become truly sorry once you have been caught doing bad stuff. The challenge is not to do the bad stuff in the first place. And, the actual level of remorse in Van Parijs’s heart does precisely nothing to mitigate the loss (in time and money, to name just two) suffered by other researchers relying on Van Parijs to make honest representations in his journal articles and grant proposals.

Still, there’s probably a relevant difference (not just ethically, but also pragmatically) between the scientist caught deceiving the community who gets why such deception is a problem and manifests remorse, and the scientist caught deceiving the community who doesn’t see what the big deal is (because surely everyone does this sort of thing, at least occasionally, to survive in the high-pressure environment). With the remorseful cheater, there might at least be some hope of rehabilitation.

Indeed, the article notes:

Luk Van Parijs was first confronted with evidence of data falsification by members of his laboratory in 2004, when he was an associate professor of biology at the Massachusetts Institute of Technology (MIT) in Cambridge. Within two days, he had confessed to several acts of fabrication and agreed to cooperate with MIT’s investigation.

A confession within two days of being confronted with the evidence is fairly swift. Other scientific cheaters in the headlines seem to dig their heels in and protest their innocence (or that the post-doc or grad student did it) for significantly longer than that.

Anyway, I think it’s reasonable for us to ask here what the punishment is intended to accomplish in a case like this. If the goal is something beyond satisfying our thirst for vengeance, then maybe we will find that the penalty imposed on Van Parijs is useful even if it doesn’t include jail time.

As it happens, one of the scientists who asked the judge in the case for clemency on his behalf suggests that jail time might be a penalty that actually discourages the participation of other members of the scientific community in rooting out fabrication and falsification. Of course, not everyone in the scientific community agrees:

[MIT biologist Richard] Hynes argued that scientific whistleblowers might be reluctant to come forwards if they thought their allegations might result in jail for the accused.

But that is not how the whistleblowers in this case see it. One former member of Van Parijs’ MIT lab, who spoke to Nature on condition of anonymity, says he doesn’t think the prospect of Van Parijs’ imprisonment would have deterred the group from coming forwards. Nor does he feel the punishment is adequate. “Luk’s actions resulted in many wasted years as people struggled to regain their career paths. How do you measure the cost to the trainees when their careers have been derailed and their reputations brought into question?” he asks. The court did not ask these affected trainees for their statements before passing sentence on Van Parijs.

This gets into a set of questions we’ve discussed before:

I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, I suspect neither of these options really address the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community. …

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

Possibly we need to recognize that it’s an empirical question what constellation of penalties (including jail time) encourages or discourages whistleblowing — and to deploy some social scientists to get reliable empirical data that might usefully guide decisions about institutional structures of rewards and penalties that will best encourage the kinds of individual behaviors that lead to robust knowledge-building activities and effectively coordinated knowledge-building communities.

But, it’s worth noting that even though he won’t be doing jail time, Van Parijs doesn’t escape without punishment.

He will be serving the same amount of time under home detention (with electronic monitoring) as he would have served in jail if the judge had given the sentence the government was asking for. In other words, he is not “free” for those six months. (Indeed, assuming he serves this home detention in the home shared by his wife and their three young children, I reckon there is a great deal of work that he might be called on to do with respect to child care and household chores, work that he might escape in a six-month jail sentence.)

Let’s not forget that it costs money to incarcerate people. The public picks up the tab for those expenses. Home detention almost certainly costs the public less. And, Van Parijs is probably less in a position to reoffend during his home detention, even if he slipped out of his ankle monitor, than is the guy who robs convenience stores. What university is going to be looking at his data?

Speaking of the public’s money, recall that another piece of the sentence is restitution — paying back to MIT the $61,117 that MIT spent when it returned Van Parijs’s grant money to NIH. Since Van Parijs essentially robbed the public of the grant money (by securing it with lies and/or substituting lies for the honest scientific results the grant-supported research was supposed to be generating), it is appropriate that Van Parijs dip into his own pocket to pay this money back.

It’s a little more complicated, since he needs to pay MIT back. MIT seems to have recognized that paying the public back as soon as the problem was established was the right thing to do, or a good way to reassure federal funding agencies and the public that universities like MIT take their obligations to the public very seriously, or both. A judgment that doesn’t make MIT eat that loss, in turn, should encourage other universities that find themselves in similar situations to step up right away and make things right with the funding agencies.

And, in recognition that the public may have been hurt by Van Parijs’s deception beyond the monetary cost of it, Van Parijs will be doing 400 hours of community service. I’m inclined to believe that, given the current fiscal realities of federal, state, and local governments, there is some service the community needs — and doesn’t have adequate funds to pay for — that Van Parijs might provide in those 400 hours. Perhaps it will not be a service he finds intellectually stimulating to provide, but that’s part of what makes it punishment.

Undoubtedly, there are members of the scientific community or of the larger public that will feel that this punishment just isn’t enough — that Van Parijs committed crimes against scientific integrity that demand harsher penalties.

Pragmatically, though, I think we need to ask what it would cost to secure those penalties. We cannot ignore the costs to the universities and to the federal agencies to conduct their investigations (here Van Parijs confessed rather than denying the charges and working to obstruct the fact-finding), or to prosecutors to go to trial (here again, Van Parijs pled guilty rather than mounting a vigorous defense). Maybe there was a time when there were ample resources to spend on full-blown investigations and trials of this sort, but that time ain’t now.

And, we might ask what jailing Van Parijs would accomplish beyond underlining that fabrication and falsification on the public’s dime is a very bad thing to do.

Would jail time make it harder for Van Parijs to find another position within the tribe of science than it will already be for him? (Asked another way, would being sentenced to home detention take any of the stink off the verdict of fraud against him?) I reckon the convicted fraudster scientist has a harder time finding a job than your average ex-con — and that scientists who feel his punishment is not enough can lobby the rest of their scientific community to keep a skeptical eye on Van Parijs (should he publish more papers, apply for jobs within the tribe of science, or what have you).

I want to live in their world.

In my “Ethics in Science” course, we talk a lot about academia. It’s not that all the science majors in the class are committed to becoming academic scientists, but many of them are planning to continue their scientific training, which usually means pursuing a graduate degree of some sort, thereby putting them in contact with a bunch of academic scientists and the sociopolitical world they inhabit.

But, as we’re talking about the dynamics of the academic sector of the tribe of science, the students express some interesting, often charming, assumptions about how that world works. Two recent examples that stick with me:

  • There is some mechanism (analogous to student evaluations of a course and its instructor at the end of the term) by which graduate students regularly evaluate their graduate advisors/lab heads. And, these evaluations of the advisor have actual consequences for the advisor.
  • Getting tenure ensures financial stability for the rest of your life.

Would that it were so.

Some years ago I wrote a glossary of academic science jargon for the class. I’m on the verge of adding to it a brief description of a generic training lab (though maybe not properly a “typical” one, given the amount of local variation in labs), sketching out different career and training stages, levels of connection to the institution (with the financial security and power, or lack thereof, that go with them), and so on.

But now I’m tempted to get each new batch of students to tell me how they imagine it works before I point them towards the description of how it tends to be. Some of the features of the world they imagine are much nicer.

Question for the hivemind: workplace policies and MYOB.

The students in my “Ethics in Science” course have, as usual, reminded me why I love teaching. (Impressively, they manage to do this while generating ever larger quantities of the stuff I don’t love about teaching, written work that needs to be graded.) But, recently, they’ve given me some indications that my take on the world and theirs may differ in interesting ways.

For example, last week they discussed a case study in which a graduate student is trying to figure out what to do about his difficulties with his research in a lab where the most successful student is also romantically involved with the boss.

In the discussion, there was about the range of opinions you’d expect about the acceptability of this kind of relationship and its likely effects on the collegiality of the training environment.

But there was a certain flavor of response that really confused me. It boiled down to something like this: The boss and the fellow grad student are responsible adults who can date anyone they want. Get over it, get back to your research, and for goodness sake don’t go blabbing about their relationship because if the department chair finds out about it, they could both get in big trouble, maybe even losing their jobs.

Am I wrong that there seems to be a contradiction here?

If the professor and his graduate student can get in official trouble, at the hands of the department chair, for their romantic involvement, doesn’t that suggest that the relationship is, in the official work context, not OK?

Or, looking at it from the other direction, if such a romance is something that they and any of their lab members who happen to have discovered it need to keep on the down-low, doesn’t this suggest that there is some problematic issue with the relationship? Otherwise, why is the secrecy necessary?

I’m thinking the crux of this response — they can date if they want to, but no one with authority over them must know about it — may be a presumption that workplace policies are unreasonably intrusive, especially when it comes to people’s personal lives. Still, it strikes me that at least some workplace policies might exist for good reasons — and that in some instances the personal lives of coworkers (and bosses) could have real impacts on the work environment.

Is “mind your own business” a reasonable policy here, official work policies be damned?

Dispatch from PSA 2010: Symposium session on ClimateGate.

The Philosophy of Science Association Biennial Meeting included a symposium session on the release of hacked e-mails from the Climatic Research Unit at the University of East Anglia. Given that we’ve had occasion to discuss ClimateGate here before, I thought I’d share my notes from this session.

Symposium: The CRU E-mails: Perspectives from Philosophy of Science.

Naomi Oreskes (UC San Diego) gave a talk called “Why We Resist the Results of Climate Science.”

She mentioned the attention brought to the discovery of errors in the IPCC report, noting that while mistakes are obviously to be avoided, it would be amazing for there to be a report that ran thousands of pages that did not have some mistakes. (Try to find a bound dissertation — generally only in the low hundreds of pages — without at least one typo.) The public’s assumption, though, was that these mistakes, once revealed, were smoking guns — a sign that something improper must have occurred.

Oreskes noted the boundary scientists of all sorts (including climate scientists) have tried to maintain between the policy-relevant and the policy-prescriptive. This is a difficult boundary to police, though, as climate science has an inescapable moral dimension. To the extent that climate change is driven by consumption (especially but not exclusively the burning of fossil fuels), we have a situation where the people reaping the benefits are not the ones who will be paying for that benefit (since people in the developed world will have the means to respond to the effects of climate change and those in the developing world will not). The situation seems to violate our expectations of intergenerational equity (since future generations will have to cope with the consequences of the consumption of past and current generations), as well as of inter-specific equity (since the species likely to go extinct in response to climate change are not the ones contributing the most to climate change).

The moral dimension of climate change, though, doesn’t make this a scientific issue about which the public feels a sense of clarity. Rather, the moral issues are such that Americans feel like their way of life is on trial. Those creating the harmful effects have done something wrong, even if it was accidental.

And this is where the collision occurs: Americans believe they are good; climate science seems to be telling them that they are bad. (To the extent that people strongly equate capitalism with democracy and the American way of life, that’s an issue too, given that consumption and growth are part of the problem.)

The big question Oreskes left us with, then, is how else to frame the need for changes in behavior, so that such a need would not make Americans so defensive that they would reflexively reject the science. I’m not sure the session ended with a clear answer to that question.

* * * * *

Wendy S. Parker (Ohio University) gave a talk titled “The Context of Climate Science: Norms, Pressures, and Progress.” A particular issue she took up was the ideal of transparency and how it came up in the context of climate scientists’ interactions with each other and with the public.

Parker noted that there had been numerous requests for access to raw data by people climate scientists did not recognize as part of the climate science community. The CRU denied many such requests, and the ClimateGate emails made it clear that the scientists generally didn’t want to cooperate with these requests.

Here, Parker observed that while we tend to look favorably on transparency, we probably need to say more about what transparency should amount to. Are we talking about making something available and open to scrutiny (i.e., making “transparency” roughly the opposite of “secrecy”)? Are we talking about making something understandable or usable, perhaps by providing fully explained nontechnical accounts of scientific methods and findings for the media (i.e., making “transparency” roughly the opposite of “opacity”)?

What exactly do we imagine ought to be made available? Research methods? Raw and/or processed data? Computer code? Lab notebooks? E-mail correspondence?

To whom ought the materials to be made available? Other members of one’s scientific community seems like a good bet, but how about members of the public at large? (Or, for that matter, members of industry or of political lobbying groups?)

And, for that matter, why do we value transparency? What makes it important? Is it primarily a matter of ensuring the quality of the shared body of scientific knowledge, and of improving the rate of scientific progress? Or, do we care about transparency as a matter of democratic accountability? As Parker noted, these values might be in conflict. (As well, she mentioned, transparency might conflict with other social values, like the privacy of human subjects.)

Here, while the public imputed nefarious motives to the climate researchers, the scientists themselves viewed some of the requests for access to their raw data as attempts by people with political motivations to obstruct the progress (or acceptance) of their research. It was not that the scientists feared that bad science would be revealed if the data were shared, but rather that they worried that yahoos from outside the scientific community were going to waste their time, or worse, cherry-pick the shared data to make allegations to which the scientists would then have to respond, wasting even more time.

In the numerous investigations that followed on the heels of the leak of stolen CRU e-mails, about the strongest charge against the involved climate scientists that stood was that they failed to display “the proper degree of openness”, and that they seemed to have an ethos of minimal compliance (or occasionally non-compliance) with regard to Freedom of Information Act (FOIA) requests. They were chided that the requirements of FOIA must not be seen as impositions, but as part of their social contract with the public (and something likely to make their scientific knowledge better).

Compliance, of course, takes resources (one of the most important of these being time), so it’s not free. Indeed, it’s hard not to imagine that at least some FOIA requests to climate scientists had “unintended consequences” (in terms of the expenditure of time and other resources) on climate scientists that were precisely what the requesters intended.

However, as Parker noted, FOIA originated with the intent of giving citizens access to the workings of their government — imposing it on science and scientists is a relatively new move. It is true that many scientists (although not all) conduct publicly funded research, and thereby incur some obligations to the public. But there’s a question of how far this should go — ought every bit of data generated with the aid of any government grant to be FOIA-able?

Parker discussed the ways that FOIA seems to demand an openness that doesn’t quite fit with the career reward structures currently operating within science. Yet ClimateGate and its aftermath, and the heightened public scrutiny of, and demands for openness from, climate scientists in particular, seem to be driving (or at least putting significant pressure upon) the standards for data and code sharing in climate science.

I got to ask one of the questions right after Parker’s talk. I wondered whether the level of public scrutiny on climate scientists might be enough to drive them into the arms of the “open science” camp — which would, of course, require some serious rethinking of the scientific reward structures and the valorization of competition over cooperation. As we’ve discussed on this blog on many occasions, institutional and cultural change is hard. If openness from climate scientists is important enough to the public, though, could the public decide that it’s worthwhile to put up the resources necessary to support this kind of change in climate science?

I guess it would require a public willing to pay for the goodies it demands.

* * * * *

The next talk, by Kristin Shrader-Frechette (University of Notre Dame), was titled “Scientifically Legitimate Ways to Cook and Trim Data: The Hacked and Leaked Climate Emails.”

Shrader-Frechette discussed what statisticians (among others) have to say about conditions in which it is acceptable to leave out some of your data (and indeed, arguably misleading to leave it in rather than omitting it). There was maybe not as much unanimity here as one might like.

There’s general agreement that data trimming in order to make your results fit some predetermined theory is unacceptable. There’s less agreement about how to deal with outliers. Some say that deleting them is probably OK (although you’d want to be open that you have done so). On the other hand, many of the low probability/high consequence events that science would like to get a handle on are themselves outliers.

So when and how to trim data is one of those topics where it looks like scientists are well advised to keep talking to their scientific peers, the better not to mess it up.
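To make the “be open that you have done so” point concrete, here is a minimal sketch (my own illustration, not anything presented in the talk) of what disclosure-friendly trimming might look like: a conventional interquartile-range fence that records exactly which points were dropped and under what rule, so the omission can be reported alongside the results. The function name and the k=1.5 fence are just illustrative choices, not a standard anyone in this discussion endorsed.

```python
import numpy as np

def trim_outliers_iqr(values, k=1.5):
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR] and return both the
    trimmed data and a report of what was removed (and by what rule),
    so the trimming can be disclosed rather than done silently."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    keep = (values >= lower) & (values <= upper)
    report = {
        "rule": f"IQR fence, k={k}",          # the pre-stated criterion
        "bounds": (lower, upper),
        "removed": values[~keep].tolist(),    # exactly which points were dropped
        "n_removed": int((~keep).sum()),
    }
    return values[keep], report

# Example: one wild reading among otherwise well-behaved measurements
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 25.7]
trimmed, report = trim_outliers_iqr(data)
print(trimmed)  # 25.7 is removed
print(report)   # the rule, the bounds, and the removed values, ready to report
```

The point of the sketch is only that the criterion is stated in advance and the removals are logged; whether trimming is appropriate at all, especially for low probability/high consequence events, is exactly the judgment call the talk says scientists should keep hashing out with their peers.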

Of the details in the leaked CRU e-mails, one that was frequently identified as a smoking gun indicating scientific shenanigans was the discussion of the “trick” to “hide the decline” in the reconstruction of climatic temperatures using proxy data from tree-rings. Shrader-Frechette noted that what was being “hidden” was not a decline in temperatures (as measured instrumentally) but rather in the temperatures reconstructed from one particular proxy — and that other proxies the climate scientists were using didn’t show this decline.

The particular incident raises a more general methodological question: scientifically speaking, is it better to include the data from proxies (once you have reason to believe it’s bad data) in your graphs? Is including it (or leaving it out) best seen as scrupulous honesty or as dishonesty?

And, does the answer differ if the graph is intended for use in an academic, bench-science presentation or a policy presentation (where it would be a very bad thing to confuse your non-expert audience)?

As she closed her talk, Shrader-Frechette noted that welfare-affecting science cannot be treated merely as pure science. She also mentioned that while FOIA applies to government-funded science, it does not apply to industry-funded science — which means that the “transparency” available to the public is pretty asymmetrical (and that industry scientists are unlikely to have to devote their time to responding to requests from yahoos for their raw data).

* * * * *

Finally, James McAllister (University of Leiden) gave a talk titled “Errors, Blunders, and the Construction of Climate Change Facts.” He spoke of four epistemic gaps climate scientists have to bridge: between distinct proxy data sources, between proxy and instrumental data, between historical time series (constructed of instrumental and proxy data) and predictive scenarios, and between predictive scenarios and reality. These epistemic gaps can be understood in the context of the two broad projects climate science undertakes: the reconstruction of past climate variation, and the forecast of the future.

As you might expect, various climate scientists have had different views about which kinds of proxy data are most reliable, and about how the different sorts of proxies ought to be used in reconstructions of past climate variation. The leaked CRU e-mails include discussions where climate scientists dedicate themselves to finding the “common denominator” in this diversity of expert opinion — not just because such a common denominator might be expected to be closer to the objective reality of things, but also because finding common ground in the diversity of opinion could be expected to enhance the core group’s credibility. Another effect, of course, is that the common denominator is also denied to outsiders, undermining their credibility (and effectively excluding them as outliers).

McAllister noted that the emails simultaneously revealed signs of internal disagreement, and of a reaching for balance. Some of the scientists argued for “wise use” of proxies and voiced judgments about how to use various types of data.

The data, of course, cannot actually speak for themselves.

As the climate scientists worked to formulate scenario-based forecasts that public policy makers would be able to use, they needed to grapple with the problems of how to handle the link between their reconstructions of past climate trends and their forecasts. They also had to figure out how to handle the link between their forecasts and reality. The e-mails indicate that some of the scientists were pretty resistant to this latter linkage — one asserted that they were “NOT supposed to be working with the assumption that these scenarios are realistic,” rather using them as internally consistent “what if?” storylines.

One thing the e-mails don’t seem to contain is any explicit discussion of what would count as an ad hoc hypothesis and why avoiding ad hoc hypotheses would be a good thing. This doesn’t mean that the climate scientists didn’t avoid them, just that it was not a methodological issue they felt they needed to be discussing with each other.

This was a really interesting set of talks, and I’m still mulling over some of the issues they raised for me. When those ideas are more than half-baked, I’ll probably write something about them here.

What do cancer researchers owe cancer patients?

As promised, today I’m returning to this essay (PDF) by Scott E. Kern about the sorry state of cancer researchers at Johns Hopkins to consider the assumptions he seems to be making about what cancer patients can demand from researchers (or any other members of society), and on what basis.

Let’s review the paragraph of Kern’s essay that dropped his framing of the ethical issue like an anvil:

During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?

There’s a bit of weirdness here that I will note and then set aside, namely formulating the question as one of whether people with families have a right to a career in cancer research, rather than whether cancer researchers have a right to have families (or any other parts of their lives that exist beyond their careers).

Framing it this way, it’s hard not to suspect that Kern is the guy on the search committee who is poised to torpedo the job application of any researcher with the temerity to show any evidence of a life that might need balancing with work — the guy on the search committee who is open about wanting to hire workaholics who have no place else to go but the lab and thus can be expected to yield a higher research output for the same salary. Talented applicants with families (or aspirations to have them), or even hobbies, are a bad risk to a guy like this. And besides, if they need that other stuff too, how serious can they be about research?

If Hopkins has a policy of screening out applicants for research positions on the basis that they have families, or hobbies, or interests that they intend to pursue beyond their work duties, I’m sure that they make this policy clear in their job advertisements. Surely, this would be the sort of information a university would want to share with job seekers.

For our discussion here, let’s start with what I take to be the less odd formulation of the question: Do cancer researchers have a right to a life outside of work?

Kern’s suggestion is that this “right,” when exercised by researchers, is something that cancer patients end up paying for with their lives (unless they go into suspended animation while cancer researchers are spending time with their families or puttering around their gardens).

The big question, then, is what the researcher’s obligations are to the cancer patient — or to society in general.

For that matter, what are society’s obligations to the cancer patient? What are society’s obligations to researchers? And what are the cancer patient’s obligations in all of this?

I’ve written before about the assertion that scientists are morally obligated to practice science (including conducting research). I’ll quote some of the big reasons offered to bolster this assertion from my earlier post:

  • society has paid for the training the scientists have received (through federal funding of research projects, training programs, etc.)
  • society has pressing needs that can best (only?) be addressed if scientific research is conducted
  • those few members of society who have specialized skills that are needed to address particular societal needs have a duty to use those skills to address those needs (i.e., if you can do research and most other people can’t, then to the extent that society as a whole needs the research that you can do, you ought to do it)

Needless to say, finding cures and treatments for cancer would be among those societal needs.

This is the whole Spider Man thing: with great power comes great responsibility, and scientific researchers have great power. If cancer researchers won’t help find cures and treatments for cancer, who else can?

Here, I think we should pause to note that there may well be an ethically relevant difference between offering help and doing everything you possibly can. It’s one thing to donate a hundred bucks to charity and quite another to give all your money and sell all your worldly goods in order to donate the proceeds. It’s a different thing for a healthy person to donate one kidney than to donate both kidneys plus the heart and lungs.

In other words, there is help you can provide, but there seems also to be a level of help that it would be wrong for anyone else to demand of you.*

And once we recognize that such a line exists, I think we have to recognize that the needs of cancer patients do not — and should not — trump every other interest of other individuals or of society as a whole. If a cancer patient cannot lay claim to the heart and lungs of a cancer researcher, then neither can that cancer patient lay claim to every moment of a cancer researcher’s time.

Indeed, in this argument of duties that spring from ability, it seems fair to ask why it is not the responsibility of everyone who might get cancer to train as a cancer researcher and contribute to the search for a cure. Why should tuning out in high school science classes, or deciding to pursue a degree in engineering or business or literature, excuse one from responsibility here? (And imagine how hard it’s going to be to get kids to study for their AP Chemistry or AP Biology classes when word gets out that their success is setting them up for a career where they ought never to take a day off, go to the beach, or cultivate friendships outside the workplace. Nerds can connect the dots.)

Surely anyone willing to argue that cancer researchers owe it to cancer patients to work the kind of hours Kern seems to think would be appropriate ought to be asking what cancer patients — and the precancerous — owe here.

Does Kern think researchers owe all their waking hours to the task because there are so few of them who can do this research? Reports from job seekers over the past several years suggest that there are plenty of other trained scientists who could do this research but have not secured employment as cancer researchers. Some may be employed in other research fields. Others, despite their best efforts, may not have secured research positions at all. What are their obligations here? Ought those employed in other research areas to abandon their current research to work on cancer, departments and funders be damned? Ought those who are not employed in a research field to be conducting their own cancer research anyway, without benefit of institution or facilities, research funding or remuneration?

Why would we feel scientific research skills, in particular, should make the individuals who have them so subject to the needs of others, even to the exclusion of their own needs?

Verily, if scientific researchers and the special skills they have are so very vital to providing for the needs of other members of society — vital enough that people like Kern feel it’s appropriate to harangue them for wanting any time out of the lab — doesn’t society owe it to its members to give researchers every resource they need for the task? Maybe even to create conditions in which everyone with the talent and skills to solve the scientific problems society wants solved can apply those skills and talents — and live a reasonably satisfying life while doing so?

My hunch is that most cancer patients would actually be less likely than Kern to regard cancer researchers as of merely instrumental value. I’m inclined to think that someone fighting a potentially life-threatening disease would be reluctant to deny someone else the opportunity to spend time with loved ones or to savor an experience that makes life worth living. To the extent that cancer researchers do sacrifice some aspects of the rest of their life to make progress on their work, I reckon most cancer patients appreciate these sacrifices. If more is needed for cancer patients, it seems reasonable to place this burden on society as a whole — teeming with potential cancer patients and their relatives and friends — to enable more (and more effective) cancer research to go on without enslaving the people qualified to conduct it, or writing off their interests in their own human flourishing.

Kern might spend some time talking with cancer patients about what they value in their lives — maybe even using this to help him extrapolate some of the things his fellow researchers might value in their lives — rather than just using them to prop up his appeal to pity.

_____
*Possibly there is also a level of help that it would be wrong for you to provide because it harms you in a fundamental and/or irreparable way.

More is better? Received wisdom in the tribe of science.

Because it’s turning out to be that kind of semester, I’m late to the party in responding to this essay (PDF) by Scott E. Kern bemoaning the fact that more cancer researchers at Johns Hopkins aren’t passionate enough to be visible in the lab on a Sunday afternoon. But I’m sure as shooting going to respond.

First, make sure you read the thoughtful responses from Derek Lowe, Rebecca Montague, and Chemjobber.

Kern’s piece describes a survey he’s been conducting (apparently over the course of 25 years) in which he seemingly counts the other people in evidence in his cancer center on Saturdays and Sundays, and interviews them with “open-ended, gentle questions, such as ‘Why are YOU here? Nobody else is here!'” He also deigns to talk to the folks found working at the center 9 to 5 on weekdays to record “their insights about early morning, evening and weekend research.” Disappointingly, Kern doesn’t share even preliminary results from his survey. However, he does share plenty of disdain for the trainees and PIs who are not bustling through the center on weekends waiting for their important research to be interrupted by a guy with a clipboard conducting a survey.

Kern diagnoses the absence of all the researchers who might have been doing research as an indication of their lack of passion for scientific research. He tracks the amount of money (in terms of facilities and overhead, salaries and benefits) that is being thrown away in this horrific weekend under-utilization of resources. He suggests that the researchers who have escaped the lab on a weekend are falling down on their moral duty to cure cancer as soon as humanly possible.

Sigh.

The unsupported assumptions in Kern’s piece are numerous (and far from novel). Do we know that having each research scientist devote more hours in the lab increases the rate of scientific returns? Or might there plausibly be a point of diminishing returns, where additional lab-hours produce no appreciable return? Where’s the economic calculation to consider the potential damage to the scientists from putting in 80 hours a week (to their health, their personal relationships, their experience of a life outside of work, maybe even their enthusiasm for science)? After all, lots of resources are invested in educating and training researchers — enough so that one wouldn’t want to break them on the basis of an (unsupported) hypothesis offered in the pages of Cancer Biology & Therapy.

And while Kern is doing economic calculations, he might want to consider the impact on facilities of research activity proceeding full-tilt, 24/7. Without some downtime, equipment and facilities might wear out faster than they would otherwise.

Nowhere here does Kern consider the option of hiring more researchers to work 40 hour weeks, instead of shaming the existing research workforce into spending 60, 80, 100 hours a week in the lab.

They might still end up bringing work home (if they ever get a chance to go home).

Kern might dismiss this suggestion on purely economic grounds — organizations are more likely to want to pay for fewer employees (with benefits) who can work more hours than to pay to have the same number of hours of work done by more employees. He might also dismiss it on the basis that the people who really have the passion needed to do the research to cure cancer will not prioritize anything else in their lives above doing that research and finding that cure.

If that is so, it’s not clear how the problem is solved by browbeating researchers without this passion into working more hours because they owe it to cancer patients. Indeed, Kern might consider, in light of the relative dearth of researchers with such passion (as he defines it), the necessity of making use of the research talents and efforts of people who don’t want to spend 60 hours a week in the lab. Kern’s piece suggests he’d have a preference for keeping such people out of the research ranks, but by his own account there would hardly be enough researchers left in that case to keep research moving forward.

Might not these conditions prompt us to reconsider whether the received wisdom of scientific mentors is always so wise? Wouldn’t this be a reasonable place to reevaluate the strategy for accomplishing the grand scientific goal?

And Kern does not even consider a pertinent competing hypothesis, that people often have important insights into how to move research forward in the moments when they step back and allow their minds to wander. Perhaps less time away from one’s project means fewer of these insights.

The part of Kern’s piece that I find most worrisome is the cudgel he wields near the end:

During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?

How dare researchers go home to their families until they have cured cancer?

Indeed, Kern’s framing here warrants an examination of just what cancer patients can demand from researchers (or any other members of society), and on what basis. But that is a topic so meaty that it will require its own post.

Besides which, I have a pile of work I brought home that I have to start plowing through.

Punishment, redemption, and celebrity status: still more on the Hauser case.

Yesterday in the New York Times, Nicholas Wade wrote another article about the Marc Hauser scientific misconduct case and its likely fallout. The article didn’t present much in the way of new facts, as far as I could tell, but I found this part interesting:

Some forms of scientific error, like poor record keeping or even mistaken results, are forgivable, but fabrication of data, if such a charge were to be proved against Dr. Hauser, is usually followed by expulsion from the scientific community.

“There is a difference between breaking the rules and breaking the most sacred of all rules,” said Jonathan Haidt, a moral psychologist at the University of Virginia. The failure to have performed a reported control experiment would be “a very serious and perhaps unforgivable offense,” Dr. Haidt said.

Dr. Hauser’s case is unusual, however, because of his substantial contributions to the fields of animal cognition and the basis of morality. Dr. [Gerry] Altmann [editor of the journal Cognition] held out the possibility of redemption. “If he were to give a full and frank account of the errors he made, then the process can start of repatriating him into the community in some form,” he said.

I’m curious what you all think about this.

Do you feel that some of the rules of scientific conduct are more sacred than others? That some flavors of scientific misconduct are more forgivable than others? That a scientist who has made “substantial contributions” in his or her field of study might be entitled to more forgiveness for scientific misconduct than your typical scientific plodder?

I think these questions touch on the broader question of whether the tribe of science (or the general public putting up the money to support scientific research) believes rehabilitation is possible for those caught in scientific misdeeds. (This is something we’ve discussed before in the context of why members of the tribe of science might be inclined to let “youthful offenders” slide by with a warning rather than exposing them to punishments that are viewed as draconian.)

But the Hauser case adds an element to this question. What should we make of the case where the superstar is caught cheating? How should we weigh the violation of trust against the positive contribution this researcher has made to the body of scientific knowledge? Can we continue to trust that his or her positive contribution to that body of knowledge was an actual contribution, or ought we to subject it to extra scrutiny on account of the cheating for which we have evidence? Are we forced to reexamine the extra credence we may have been granting the superstar’s research on account of that superstar status?

And, in a field of endeavor that strives for objectivity, are we really OK with the suggestion that members of the tribe of science who achieve a certain status should be held to different rules than those by which everyone else in the tribe is expected to play?