Harvard Dean sheds (a little) more light on Hauser misconduct case.

Today ScienceInsider gave an update on the Marc Hauser misconduct case, one that seems to support the accounts of other researchers in the Hauser lab. From ScienceInsider:

In an e-mail sent earlier today to Harvard University faculty members, Michael Smith, dean of the Faculty of Arts and Sciences (FAS), confirms that cognitive scientist Marc Hauser “was found solely responsible, after a thorough investigation by a faculty member investigating committee, for eight instances of scientific misconduct under FAS standards.”

ScienceInsider reprints the Dean’s email in its entirety. Here’s the characterization of the nature of Hauser’s misconduct from that email:

Continue reading

Data release, ethics, and professional survival.

In recent days, there have been signs on the horizon of an impending blogwar. Prof-like Substance fired the first volley:

[A]lmost all major genomics centers are going to a zero-embargo data release policy. Essentially, once the sequencing is done and the annotation has been run, the data is on the web in a searchable and downloadable format.

Yikes.

How many other fields put their data directly on the web before those who produced it have the opportunity to analyze it? Now, obviously no one is going to yank a genome paper right out from under the group working on it, but what about comparative studies? What about searching out specific genes for multi-gene phylogenetics? Where is the line for what is permissible to use before the genome is published? How much of a grace period do people get with data that has gone public, but that they* paid for?

—–
*Obviously we are talking about grant-funded projects, so the money is tax payer money not any one person’s. Nevertheless, someone came up with the idea and got it funded, so there is some ownership there.

Then, Mike the Mad Biologist fired off this reply:

Several of the large centers, including the one I work at, are funded by NIAID to sequence microorganisms related to human health and disease (analogous programs for human biology are supported by NHGRI). There’s a reason why NIH is hard-assed about data release:

Funding agencies learned this the hard way, as too many early sequencing centers resembled ‘genomic roach motels’: DNA checks in, but sequence doesn’t check out.

The funding agencies’ mission is to improve human health (or some other laudable goal), not to improve someone’s tenure package. This might seem harsh unless we remember how many of these center-based genome projects are funded. The investigator’s grant is not paying for the sequencing. In the case of NIAID, there is a white paper process. Before NIAID will approve the project, several goals have to be met in the white paper (Note: while I’m discussing NIAID, other agencies have a similar process, if different scientific objectives).

Obviously, the organism and collection of strains to be sequenced have to be relevant to human health. But the project also must have significant community input. NIAID absolutely does not want this to be an end-run around R01 grants. Consequently, these sequencing projects should not be a project that belongs to a single lab, and which lacks involvement by others in the subdiscipline (“this looks like an R01” is a pejorative). It also has to provide a community resource. In other words, data from a successful project should be used rapidly by other groups: that’s the whole point (otherwise, write an R01 proposal). The white paper should also contain a general description of the analysis goals of the project (and, ideally, who in the collaborative group will address them). If you get ‘scooped’, that’s, in part, a project planning issue.

NIAID, along with other agencies and institutes, is pushing hard for rapid public release. Why does NIAID get to call the shots? Because it’s their money.

Which brings me to the issue of ‘whose’ genomes these are. The answer is very simple: NIH’s (and by extension, the American people’s). As I mentioned above, NIH doesn’t care about your tenure package, or your dissertation (given that many dissertations and research programs are funded in part or in their entirety by NIH and other agencies, they’re already being generous†). What they want is high-quality data that are accessible to as many researchers as possible as quickly as possible. To put this (very) bluntly, medically important data should not be held hostage by career notions. That is the ethical position.

Prof-like Substance hurled back a hefty latex pillow of a rejoinder:

People feel like anything that is public is free to use, and maybe they should. But how would you feel as the researcher who assembled a group of researchers from the community, put a proposal together, drummed up support from the community outside of your research team, produced and purified the sample to be sequenced (which is not exactly just using a Sigma kit in a LOT of cases), dealt with the administration issues that crop up along the way, pushed the project through the center (another aspect woefully underappreciated), got your research community together once the data were in hand to make sense of it all and herded the cats to get the paper together? Would you feel some ownership, even if it was public dollars that funded the project?

Now what if you submitted the manuscript and then opened your copy of Science and saw that the major finding you centered the genome paper around had been plucked out by another group and published in isolation? Would you say, “well, the data’s publicly available, what’s unscrupulous about using it?”

[L]et’s couch this in the reality of the changing technology. If your choice is to have the sequencing done for free, but risk losing it right off the machine, OR to do it with your own funds (>$40,000) and have exclusive right to it until the paper is published, what are you going to choose? You can draw the line regarding big and small centers or projects all you want, but it is becoming increasingly fuzzy.

This is all to get back to my point that if major sequencing centers want to stay ahead of the curve, they have to have policies that are going to encourage, not discourage, investigators to use them.

It’s fair to say that I don’t know from genomics. However, I think the ethical landscape of this disagreement bears closer examination.

Continue reading

The value of (unrealistic) case studies in ethics education.

Dr. Isis posted a case study about a postdoc’s departure from approved practices and invited her readers to discuss it. DrugMonkey responded by decrying the ridiculousness of case studies far more black and white than what scientists encounter in real life:

This is like one of those academic misconduct cases where they say “The PI violates the confidence of review, steals research ideas that are totally inconsistent with anything she’d been doing before, sat on the paper review unfairly, called the editor to badmouth the person who she was scooping and then faked up the data in support anyway. Oh, and did we mention she kicked her cat?”.

This is the typical and useless fare at the ethical training course. Obvious, overwhelmingly clear cases in which the black hats and white hats are in full display and provide a perfect correlation with malfeasance.

The real world is messier and I think that if we are to make any advances in dealing with the real problems, the real cases of misconduct and the real cases of dodgy animal use in research, we need to cover more realistic scenarios.

I’m sympathetic to DrugMonkey’s multiple complaints: that real life is almost always more complicated than the canned case study; that hardly anyone puts in the years of study and training to become a scientist if her actual career objective is to be a super-villain; and especially that the most useful sort of ethics training for the scientist will be in day-to-day conversation with scientific mentors and colleagues rather than in isolated ethics courses, training modules, or workshops.

However, used properly, I think that case studies — even unrealistic ones — play a valuable role in ethics education.

Continue reading

Some thoughts on online training courses.

I don’t know how it is where you are, but my summer “break” (such as it is) is rapidly winding down. Among other things, it means that I spent a few hours today in front of my computer completing online training courses.

I find myself of two minds (at least) on these courses.

On the one hand, many of these courses do a reasonable (or even excellent) job of conveying important information — broken down into modules that convey reasonably sized bites of content, enhanced with videos, case studies, and links to further information which one might bookmark for future reference. Indeed, the online training courses themselves can be accessed as a source of information later on, when one needs it.

It’s hard to beat the convenience of the online delivery of these courses. You start them when you’re ready to take them, and you can do a few modules of a course at a time, or pound through them all in one sitting. You don’t need to show up to a particular place for a particular interval of time, you don’t need to find a parking space, you don’t even need to change out of your pajamas.

Plus, many of these online training courses simplify record-keeping for whoever is responsible for ensuring that the folks who are supposed to take the course have actually taken it (and performed to the specified level on the accompanying quizzes) by emailing the completion reports to the designated official.

On the other hand … if you’re pounding through a 26-module course in one sitting (as I did today), you have to wonder a little about retention. Passing a quiz on a module immediately after you’ve read through that module may be do-able, but I’m less certain that it would be as easy to pass a month later. Indeed, if there had been a single big quiz after the 26 modules (rather than a quiz on each module that you take immediately after the module), I’m not sure I would have scored as well.

I imagine, too, that this mode of training is not necessarily beloved by people who have not made their peace with multiple-choice tests. Likewise, for people who need to discuss material in order to understand it, the online delivery of modules may be a lot less effective than a live training session with other participants.

What have your experiences with online training courses been? Do you find them an adequate tool for the job, a poor fit for your learning style, or a big old waste of time?

What kind of problem is it when data do not support findings?

And, whose problem is it?

Yesterday, The Boston Globe published an article about Harvard University psychologist Marc Hauser, a researcher embarking on a leave from his appointment in the wake of a retraction and a finding of scientific misconduct in his lab. From the article:

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year. …

Much remains unclear, including why the investigation took so long, the specifics of the misconduct, and whether Hauser’s leave is a punishment for his actions.

The retraction, submitted by Hauser and two co-authors, is to be published in a future issue of Cognition, according to the editor. It says that, “An internal examination at Harvard University . . . found that the data do not support the reported findings. We therefore are retracting this article.’’

The paper tested cotton-top tamarin monkeys’ ability to learn generalized patterns, an ability that human infants had been found to have, and that may be critical for learning language. The paper found that the monkeys were able to learn patterns, suggesting that this was not the critical cognitive building block that explains humans’ ability to learn language. In doing such experiments, researchers videotape the animals to analyze each trial and provide a record of their raw data. …

The editor of Cognition, Gerry Altmann, said in an interview that he had not been told what specific errors had been made in the paper, which is unusual. “Generally when a manuscript is withdrawn, in my experience at any rate, we know a little more background than is actually published in the retraction,’’ he said. “The data not supporting the findings is ambiguous.’’

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study,’’ Marcus wrote in an e-mail. “I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

The investigation also raised questions about two other papers co-authored by Hauser. The journal Proceedings of the Royal Society B published a correction last month to a 2007 study. The correction, published after the British journal was notified of the Harvard investigation, said video records and field notes of one of the co-authors were incomplete. Hauser and a colleague redid the three main experiments and the new findings were the same as in the original paper. …

“This retraction creates a quandary for those of us in the field about whether other results are to be trusted as well, especially since there are other papers currently being reconsidered by other journals as well,’’ Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, said in an e-mail. “If scientists can’t trust published papers, the whole process breaks down.’’ …

In 1995, he [Hauser] was the lead author of a paper in the Proceedings of the National Academy of Sciences that looked at whether cotton-top tamarins are able to recognize themselves in a mirror. Self-recognition was something that set humans and other primates, such as chimpanzees and orangutans, apart from other animals, and no one had shown that monkeys had this ability.

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

A quick rundown of what we get from this article:

  • Someone raised a concern about scientific misconduct that led to the Harvard inquiry, which in turn led to the discovery of “evidence of misconduct” in Hauser’s lab.
  • We don’t, however, have an identification of what kind of misconduct is suggested by the evidence (fabrication? falsification? plagiarism? other serious deviations from accepted practices?) or of who exactly committed it (Hauser or one of the other people in his lab).
  • At least one paper has been retracted because “the data do not support the reported findings”.
  • However, we don’t know the precise issue with the data here — e.g., whether the reported findings were bolstered by reported data that turned out to be fabricated or falsified (and are thus not being included anymore in “the data”).
  • Apparently, the editor of the journal that published the retracted paper doesn’t know the precise issue with the data either, and found the situation surrounding the retraction unusual enough to merit comment.
  • Other papers from the Hauser group may be under investigation for similar reasons at this point, and other researchers in the field seem to be nervous about those papers and their reliability in light of the ongoing inquiry and the retraction of the paper in Cognition.

There’s already been lots of good commentary on what might be going on with the Hauser case. (I say “might” because there are many facts still not in evidence to those of us not actually on the Harvard inquiry panel. As such, I think it’s necessary to refrain from drawing conclusions not supported by the facts that are in evidence.)

John Hawks situates the Hauser case in terms of the problem of subjective data.

Melody has a nice discussion of the political context of getting research submitted to journals, approved by peer reviewers, and anointed as knowledge.

David Dobbs wonders whether the effects of the Hauser case (and of the publicity it’s getting) will mean backing off from overly strong conclusions drawn from subjective data, or backing off too far from a “hot” scientific field that may still have a bead on some important phenomena in our world.

DrugMonkey critiques the Boston Globe reporting and reminds us that failure to replicate a finding is not evidence of scientific misconduct or fraud. That’s a hugely important point, and one that bears repeating. Repeatedly.

This is the kind of territory where we start to notice common misunderstandings about how science works. It’s usually not the case that we can cut nature at the joints along nicely dotted lines that indicate just where those cuts should be. Collecting reliable data and objectively interpreting that data is hard work. Sometimes as we go, we learn more about better conditions for collecting reliable data, or better procedures for interpreting the data without letting our cognitive biases do the driving. And sometimes, a data set we took to be reliable and representative of the phenomenon we’re trying to understand just isn’t.

That’s part of why scientific conclusions are always tentative. Scientists expect to update their current conclusions in the light of new results down the road — and in the light of our awareness that some of our old results just weren’t as solid or reproducible as we took them to be. It’s good to be sure your results are reproducible enough before you announce a finding to your scientific peers, but to be absolutely certain of total reproducibility, you’d have to solve the problem of induction, which isn’t terribly practical.

Honest scientific work can lead to incorrect conclusions, either because that honest work yielded wonky data from which to draw conclusions, or because good data can still be consistent with incorrect conclusions.

And there’s a similar kind of disconnect we should watch out for. For the “corrected” 2007 paper in Proceedings of the Royal Society B, the Boston Globe article reports that the videotapes and field notes (the sources of the data supporting the reported conclusions) were “incomplete”. But Hauser and a colleague redid the experiments and found data that supported the conclusions reported in the paper.

One might think that as long as reported results are reproducible, they’re necessarily ethical and scientifically sound and all that good stuff. That’s not how scientific knowledge-building works. The rules of the game are that you lay your data cards on the table and base your findings on those data. Chancing upon an answer that turns out to be right but isn’t supported by the data you actually have doesn’t count, nor does having a really strong hunch that turns out to be right. In the scientific realm, empirical data are our basis for knowing what we know about the phenomena. Thus, doing the experiments over in the face of insufficient data is not “playing it safe” so much as “doing the job you were supposed to have done in the first place”.

Now, given the relative paucity of facts in this particular case, I find myself interested in a more general question: What are the ethical duties of a PI who discovers that he or she has published a paper whose findings are not, in fact, supported by the data?

It seems reasonable that at least one of his or her duties involves correcting the scientific literature.

This could involve retracting the paper, in essence saying, “Actually, we can’t conclude this based on the data we have. Our bad!”

It could also involve correcting the paper, saying, “We couldn’t conclude this based on the data we have; instead, we should conclude this other thing,” or, “We couldn’t conclude this based on the data we originally reported, but we’ve gone and done more experiments (or have repeated the experiments we described), obtained new data, and are now confident that, on the basis of these data, the conclusion is well supported.”

If faulty data were reported, I would think that the retraction or correction should probably explain how the data were faulty — what’s wrong with them? If the problem had its source in an honest mistake, it might also be valuable to identify that honest mistake so other researchers could avoid it themselves. (Surely this would be a kindness; is it also a duty?)

Beyond correcting the scientific literature, does the PI in this situation have other relevant duties?

Would these involve ratcheting up the scrutiny of data within the lab group in advance of future papers submitted for publication? Taking the skepticism of other researchers in the field more seriously and working that much harder to build a compelling case for conclusions from the data? (Or, perhaps, working hard to identify the ways that the data might argue against the expected conclusion?) Making serious efforts to eliminate as much subjectivity from the data as possible?

Assuming the PI hasn’t fabricated or falsified the data (and that if someone in the lab group has, that person has been benched, at least for the foreseeable future), what kind of steps ought that PI to take to make things right — not just for the particular problematic paper(s), but for his or her whole research group moving forward and interacting with other researchers in the field? How can they earn back trust?

Research methods and primary literature.

At Uncertain Principles, Chad opines that “research methods” look different on the science-y side of campus than they do for his colleagues in the humanities and social sciences:

When the college revised the general education requirements a few years ago, one of the new courses created had as one of its key goals to teach students the difference between primary and secondary sources. Which, again, left me feeling like it didn’t really fit our program — as far as I’m concerned, the “primary source” in physics is the universe. If you did the experiment yourself, then your data constitute a primary source. Anything you can find in the library is necessarily a secondary source, whether it’s the original research paper, a review article summarizing the findings in some field, or a textbook writing about it years later.

In many cases, students are much better off reading newer textbook descriptions of key results than going all the way back to the “primary source” in the literature. Lots of important results in science were initially presented in a form much different than the fuller modern understanding. Going back to the original research articles often requires deciphering cumbersome and outdated notation, when the same ideas are presented much more clearly in newer textbooks.

That’s not really what they’re looking for in the course in question, though — they don’t want it to be a lab course. But then it doesn’t feel like a “research methods” class at all — while we do occasional literature searches, for the most part that’s accomplished by tracing back direct citations from recent articles. When I think about teaching students “research methods,” I think of things like teaching basic electronics, learning to work an oscilloscope, basic laser safety and operation, and so on. The library is a tiny, tiny part of what I do when I do research, and the vast majority of the literature searching I do these days can be done from my office computer.

I’m going to share some observations that may complicate Chad’s “two cultures” framing of research (and of what sorts of research methods one might reasonably impart to undergraduates in a course focused on research methods in a particular discipline).

Continue reading

In search of accepted practices: the final report on the investigation of Michael Mann (part 3).

Here we continue our examination of the final report (PDF) of the Investigatory Committee at Penn State University charged with investigating an allegation of scientific misconduct against Dr. Michael E. Mann made in the wake of the ClimateGate media storm. The specific question before the Investigatory Committee was:

“Did Dr. Michael Mann engage in, or participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities?”

In the last two posts, we considered the committee’s interviews with Dr. Mann and with Dr. William Easterling, the Dean of the College of Earth and Mineral Sciences at Penn State, and with three climate scientists from other institutions, none of whom had collaborated with Dr. Mann. In this post, we turn to the other sources of information on which the Investigatory Committee relied in its efforts to establish what counts as accepted practices within the academic community (and specifically within the community of climate scientists) for proposing, conducting, or reporting research.

Continue reading

In search of accepted practices: the final report on the investigation of Michael Mann (part 2).

When you’re investigating charges that a scientist has seriously deviated from accepted practices for proposing, conducting, or reporting research, how do you establish what the accepted practices are? In the wake of ClimateGate, this was the task facing the Investigatory Committee at Penn State University investigating the allegation (which the earlier Inquiry Committee deemed worthy of an investigation) that Dr. Michael E. Mann “engage[d] in, or participate[d] in, directly or indirectly, … actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities”.

One strategy you might pursue is asking the members of a relevant scientific or academic community what practices they accept. In the last post, we looked at what the Investigatory Committee learned from its interviews about this question with Dr. Mann himself and with Dr. William Easterling, Dean, College of Earth and Mineral Sciences, The Pennsylvania State University. In this post, we turn to the committee’s interviews with three climate scientists from other institutions, none of whom had collaborated with Dr. Mann, and at least one of whom has been very vocal about his disagreements with Dr. Mann’s scientific conclusions.

Continue reading

In search of accepted practices: the final report on the investigation of Michael Mann (part 1).

Way back in early February, we discussed the findings of the misconduct inquiry against Michael Mann, an inquiry that Penn State University mounted in the wake of “numerous communications (emails, phone calls, and letters) accusing Dr. Michael E. Mann of having engaged in acts that included manipulating data, destroying records and colluding to hamper the progress of scientific discourse around the issue of global warming from approximately 1998”. Those numerous communications, of course, followed upon the well-publicized release of purloined email messages from the Climate Research Unit (CRU) webserver at the University of East Anglia — the storm of controversy known as ClimateGate.

You may recall that the misconduct inquiry, whose report (PDF) is here, looked into four allegations against Dr. Mann and found no credible evidence to support three of them. On the fourth allegation, the inquiry committee was unable to make a definitive finding. Here’s what I wrote about the inquiry committee’s report on this allegation:

[T]he inquiry committee is pointing out that researchers at the university have not only a duty not to commit fabrication, falsification, or plagiarism, but also a positive duty to behave in such a way that they maintain the public’s trust. The inquiry committee goes on to highlight specific sections of policy AD-47 that speak to cultivating intellectual honesty, being scrupulous in presentation of one’s data (and careful not to read those data as being more robust than they really are), showing due respect for their colleagues in the community of scholars even when they disagree with their findings or judgments, and being clear in their communications with the public about when they are speaking in their capacity as researchers and when they are speaking as private citizens. …

[W]e’re not just looking at scientific conduct here. Rather, we’re looking at scientific conduct in an area about which the public cares a lot.

What this means is that the public here is paying rather more attention to how climate scientists are interacting with each other, and to the question of whether these interactions are compatible with the objective, knowledge-building project science is supposed to be.

[T]he purloined emails introduce new data relevant to the question of whether Dr. Mann’s research activities and interactions with other scientists — both those with whose conclusions he agrees and those with whose conclusions he does not agree — are consistent with or deviate from accepted scientific practices.

Evaluating the data gleaned from the emails, in turn, raises the question of what the community of scholars and the community of research scientists agree counts as accepted scientific practices.

Decision 4. Given that information emerged in the form of the emails purloined from CRU in November 2009, which have raised questions in the public’s mind about Dr. Mann’s conduct of his research activity, given that this may be undermining confidence in his findings as a scientist, and given that it may be undermining public trust in science in general and climate science specifically, the inquiry committee believes an investigatory committee of faculty peers from diverse fields should be constituted under RA-10 to further consider this allegation.

In sum, the overriding sentiment of this committee, which is composed of University administrators, is that allegation #4 revolves around the question of accepted faculty conduct surrounding scientific discourse and thus merits a review by a committee of faculty scientists. Only with such a review will the academic community and other interested parties likely feel that Penn State has discharged its responsibility on this matter.

What this means is that the investigation of allegation #4 that will follow upon this inquiry will necessarily take up the broad issue of what counts as accepted scientific practices. This discussion, and the findings of the investigation committee that may flow from it, may have far-reaching consequences for how the public understands what good scientific work looks like, and for how scientists themselves understand what good scientific work looks like.

Accordingly, an Investigatory Committee was constituted and charged to examine that fourth allegation, and its report (PDF) has just been released. We’re going to have a look at what the Investigatory Committee found, and at its strategies for getting the relevant facts here.

Since this report is 19 pages long (the report of the inquiry committee was just 10), I won’t be discussing all the minutiae of how the committee was constituted, nor will I be discussing this report’s five-page recap of the earlier committee’s report (since I’ve already discussed that report at some length). Instead, I’ll be focusing on this committee’s charge:

The Investigatory Committee’s charge is to determine whether or not Dr. Michael Mann engaged in, or participated in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities.

and on the particular strategies the Investigatory Committee used to make this determination.
Indeed, establishing what might count as a serious deviation from accepted practices within the academic community is not trivially easy (which is one reason people have argued against appending the “serious deviations” clause to fabrication, falsification, and plagiarism in official definitions of scientific misconduct). Much turns on the word “accepted” here. Are we talking about the practices a scientific or academic community accepts as what members of the community ought to do, or about practices that are “accepted” insofar as members of the community actually do them or are aware of others doing them (and don’t do a whole lot to stop them)? The Investigatory Committee here seems to be trying to establish what the relevant scientific community accepts as good practices, but there are a few places in the report where the evidence upon which they rely may merely establish the practices the community tolerates. There is a related question about whether the practices the community accepts as good can be counted on reliably to produce the good outcomes the community seems to assume they do, something I imagine people will want to discuss in the comments.

Let’s dig in. Because of how much there is to discuss, we’ll take it in three posts. This post will focus on the committee’s interviews with Dr. Mann and with Dr. William Easterling, Dean, College of Earth and Mineral Sciences, The Pennsylvania State University (and Mann’s boss, to the degree that the Dean of one’s College is one’s boss).

The second post will examine the committee’s interviews with Dr. William Curry, Senior Scientist, Geology and Geophysics Department, Woods Hole Oceanographic Institution; Dr. Jerry McManus, Professor, Department of Earth and Environmental Sciences, Columbia University; and Dr. Richard Lindzen, Alfred P. Sloan Professor, Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology.

The third post will then examine the other sources of information besides the interviews that the Investigatory Committee relied upon to establish what counts as accepted practices within the academic community (and specifically within the community of climate scientists) for proposing, conducting, or reporting research. All blockquotes from here on out are from the Investigatory Committee’s final report unless otherwise noted.

Continue reading

Drag your lazy ass back to the lab! Don’t you know postdocs are a dime a dozen?

Via Abi, I learn that Chemistry Blog has posted an interesting letter from a PI to his postdoc dated July 27, 1996. The letter, on official Caltech Division of Chemistry and Chemical Engineering letterhead, suggests that not all the stories one hears about the unreasonable work hours demanded of postdocs are exaggerated. Indeed, the most surprising thing about the letter is that it puts the PI’s expectations in writing.

The letter reads:

Continue reading