State budget cuts mean … executive salary increases?

The California State University Board of Trustees met July 12. As expected, they approved yet another student fee increase. Because, how could they not when they’re facing down at least a $650 million budget cut for the 23-campus system?

Not so expected (at least by those of us not on the inside) was their vote to pay Elliot Hirshman, the new president of San Diego State University, an annual salary of $400,000. Because … remember that $650 million budget cut for the 23-campus system?

How is this supposed to work again?

It should be noted that California Governor Jerry Brown sent a letter to the Board of Trustees urging them not to approve this salary increase. The letter (PDF here) is so clear in identifying the problem that I’m going to quote the whole thing:

As this Board well knows, California is still struggling to overcome the effects of the great recession which forced tens of billions of dollars in state budget cuts.

The state university system has been particularly hard hit with painful sacrifices on the part of faculty and students alike. As trustees, you have to make tough calls and strive as best you can to protect our proud system of higher education.

It is in this context, and prompted by the salary decision you are about to make today, that I write to express my concern about the ever-escalating pay packages awarded to your top administrators.

I fear your approach to compensation is setting a pattern for public service that we cannot afford.

I have reviewed the Mercer compensation study and have reflected on its market premises, which provide the justification for your proposed salary boost of more than $100,000. The assumption is that you cannot find a qualified man or woman to lead the university unless paid twice that of the Chief Justice of the United States. I reject this notion.

At a time when the state is closing its courts, laying off public school teachers and shutting senior centers, it is not right to be raising the salaries of leaders who — of necessity — must demand sacrifice from everyone else.

If it were me writing the letter, in the last paragraph I might also mention the already deep cuts to the CSU (many of which I’m sure have been obvious to the students at SDSU, as they have been to students here at SJSU). This is a situation where maybe President Hirshman and the trustees are banking on the students not following state news, because if they do keep up on current events, it might get awkward.

Of course, it’s being reported that some of the 12 trustees who voted to approve Hirshman’s compensation package (3 voted against it) say it’s necessary to pay him so much because of “the complexity of running a major university and the salaries that other university presidents around the country are paid”. This strikes me as a variant of the old saw that “we need to pay administrators so much because of how much they could be making in the private sector”.

Maybe it’s the larger class sizes, the absence of funds for graders or for work-related travel, or possibly the fact that the existence of my public-employee pension (which, given the way this job is going, I might not live long enough to use) has been used to demonize me and other CSU faculty like me in the minds of the voters, but I need to call shenanigans on this.

Could university administrators be making buckets of money in the private sector? It’s not clear to me that the private sector is doing a lot of hiring these days. But if they are, I’m inclined to tell those administrators, Vaya con Dios. Do what you must to feed your family, to fortify your compound, to get your yacht ready for the sailing season, to find fulfillment, but right now we can’t afford you. In a perfect world, maybe we could pay you what you feel you’re worth, but this, my dear, is nothing like a perfect world.

I’m sure your cash-strapped students can fill you in on some of the local details of its imperfection.

In the meantime, it’s worth noting that this newly increased salary doesn’t put President Hirshman close to the top slot of best-compensated California public employee. That honor goes to UC Berkeley football coach Jeff Tedford, at $2.3 million a year, with UCLA basketball coach Ben Howland ($2.1 million) a close second.

For comparison, Jerry Brown earns $173,987 a year to be Governor of the state.

Blogospheric navel-gazing: where’s the chemistry communication?

The launch last week of the new Scientific American Blog Network* prompted a round of blogospheric soul searching (for example here, here, and here): Within the ecosystem of networked science blogs, where are all the chem-bloggers?

Those linked discussions do a better job with the question and its ramifications than I could, so as they say, click through and read them. But the fact that these discussions are so recent is an interesting coincidence in light of the document I want to consider in this post.

I greeted with interest the release of a recent publication from the National Academy of Sciences titled Chemistry in Primetime and Online: Communicating Chemistry in Informal Environments (PDF available for free here). The document aims to present a summary of a one-and-a-half day workshop, organized by the Chemical Sciences Roundtable and held in May 2010.

Of course, I flipped right to the section that took up the issue of blogs.

The speaker invited to the workshop to talk about chemistry on blogs was Joy Moore, representing Seed Media Group.

She actually started by exploring how much chemistry coverage there was in Seed magazine and professed surprise that there wasn’t much:

When she talked to one of her editors about why, what he told her was very similar to what others had mentioned previously in the workshop. He said, “part of the reason behind the apparent dearth of chemistry content is that chemistry is so easily subsumed by other fields and bigger questions, so it is about the ‘why’ rather than the ‘how.’” For example, using chemistry to create a new clinical drug is often not reported or treated as a story about chemistry. Instead, it will be a story about health and medicine. Elucidating the processes by which carbon compounds form in interstellar space is typically not treated as a chemistry story either; it will be an astronomy-space story.

The Seed editor said that in his experience most pure research in chemistry is not very easy to cover or talk about in a compelling and interesting way for general audiences, for several reasons: the very long and easily confused names of many organic molecules and compounds, the frequent necessity for use of arcane and very specific nomenclature, and the tendency for most potential applications to boil down to an incremental increase in quality of a particular consumer product. Thus, from a science journalist point of view, chemistry is a real challenge to cover, but he said, “That doesn’t mean that there aren’t a lot of opportunities.” (24)

A bit grumpily, I will note that this editor’s impression of chemistry and what it contains is quite a distance from my own. Perhaps it’s because I was a physical chemist rather than an organic chemist (so I mostly dodged the organic nomenclature issue in my own research), and because the research I did had no clear applications to any consumer products (and many of my friends in chemistry were in the same boat), and because the lot of us learned how to explain what was interesting and important and cool to each other (and to our friends who weren’t in chemistry, or even in school) without jargon. It can be done. It’s part of this communication strategy called “knowing your audience and meeting them where they are.”

Anyway, after explaining why Seed didn’t have much chemistry, Moore shifted her focus to ScienceBlogs and its chemistry coverage. Here again, the pickings seemed slim:

Moore said there is no specific channel in ScienceBlogs dedicated to chemistry, but there are a number of bloggers who use chemistry in their work.

Two chemistry-related blogs were highlighted by Moore. The first one, called Speakeasy Science, is by a new blogger Deborah Bloom [sic]. Bloom is not a scientist, but chemistry informs her writing, especially her new book on the birth of forensic toxicology. Moore also showed a new public health blog from Seed called the Pump Handle. Seed has also focused more on chemistry, in particular environmental toxins. Moore added, “So again, as we go through we can find the chemistry as the supporting characters, but maybe not as the star of the show.” (26)

While I love both The Pump Handle and Speakeasy Science (which has since relocated to PLoS Blogs), Moore didn’t mention a bunch of blogs at ScienceBlogs that could be counted on for chemistry content in a starring role. These included Molecule of the Day, Terra Sigillata (which has since moved to CENtral Science), and surely the pharmacology posts on Neurotopia. That’s just three off the top of my head. Indeed, even my not-really-a-chemistry-blog had a “Chemistry” category populated with posts that really focused on chemistry.

And, of course, I shouldn’t have to point out that ScienceBlogs is not now, and was not then, the entirety of the science blogosphere. There have always been seriously awesome chem-bloggers writing entertaining, accessible stuff outside the bounds of the Borg.

Ignoring their work (and their readership) is more than a little lazy. (Maybe a search engine would help?)

Anyway, Moore also told the workshop about Research Blogging:

Moore said that Research Blogging is a tagging and aggregating tool for bloggers who write about journal articles. Bloggers who occasionally discuss journal articles on their blog sites can join the Seed Research Blogging community. Seed provides the blogger with some code to put into blog posts that allows Seed to pick up those blog posts and aggregate them. Seed then offers the blogger on its website Researchvolume.org [sic]. This allows people to search across the blog posts within these blogs. Moore said that bloggers can also syndicate comments through the various Seed feeds, widgets, and other websites. It basically brings together blog posts about peer-reviewed research. At the same time, Seed gives a direct link back to the journal article, so that people can read the original source.

“Who are these bloggers?” Moore asked. She said the blog posts take many different forms. Sometimes someone is simply pointing out an interesting article or picking a topic and citing two or three articles to preface it. Other bloggers almost do a mini-review. These are much more in-depth analyses or criticisms of papers. (26)

Moore also noted some research on the chemistry posts aggregated by ResearchBlogging that found:

the blog coverage of the chemistry literature was more efficient than the traditional citation process. The science blogs were found to be faster in terms of reporting on important articles, and they also did a better job of putting the material in context within different areas of chemistry. (26)

The issues raised by the other workshop participants here were the predictable ones.

One, from John Miller of the Department of Energy, was whether online venues like ResearchBlogging might replace traditional peer review for journal articles. Joy Moore said she saw it as a possibility. Of course, this might rather undercut the idea that what is being aggregated is blog posts on peer reviewed research — the peer review that happens before publication, I take it, is enhanced, not replaced, by the post-publication “peer review” happening in these online discussions of the research.

Another comment, from Bill Carroll, had to do with the perceived tone of the blogosphere:

“One of the things I find discouraging about reading many blogs or various comments is that it very quickly goes from one point of view to another point of view to ‘you are a jerk.’ My question is, How do you keep [the blog] generating light and not heat.” (26)

Moore’s answer allowed as how some blog readers are interested in being entertained by fisticuffs.

Here again, it strikes me that there’s a danger in drawing sweeping conclusions from too few data points. There exist science blogs that don’t get shouty and personal in the posts or the comment threads. Many of these are really good reads with engaging discussions happening between bloggers and readers.

Sometimes too, the heat (or at least, some kind of passion) may be part of how a blogger conveys to readers what about chemistry is interesting, or puzzling, or important in contexts beyond the laboratory or the journal pages. Chemistry is cool enough or significant enough that it can get us riled up. I doubt that insisting on Vulcan-grade detachment is a great way to convince readers who aren’t already sold on the importance of chemistry that they ought to care about it.

And, can we please get past this suggestion that the blogosphere is the source of incivility in exchanges about science?

I suspect that people who blame the medium (of blogs) for the tone (of some blogs or of the exchanges in their comments) haven’t been to a department seminar or a group meeting lately. Those face-to-face exchanges can get not only contentious but also shouty and personal. (True story: When I was a chemistry graduate student shopping for a research group, I was a guest at a group meeting where the PI, who knew I was there to see how I liked the research group, spent a full five minutes tearing one of his senior grad students a new one. And then, he was disappointed that I did not join the research group.)

Now, maybe the worry is that blogs about chemistry might give the larger public a peek at chemists being contentious and personal and shouty, something that otherwise would be safely hidden from view behind the walls of university lecture halls and laboratory spaces. If that’s the worry, one possible response is that chemists playing in the blogosphere should maybe pay attention to the broader reach the internet affords them and behave themselves in the way they want the public to see them behaving.

If, instead, the worry is that chemists ought not ever to behave in certain ways toward each other (e.g., attacking the person rather than the methods or the results or the conclusions), then there’s plenty of call for peer pressure within the chemistry community to head off these behaviors before we even start talking about blogs.

There are a few things that complicate discussions like this about the nature of communication about chemistry on blogs. One is that the people taking up the issue are sometimes unclear about what kind of communication it is they’re interested in — for example, chemist to non-chemist or chemist-to-chemist. Another is that they sometimes have very different ideas about what kinds of chemical issues ought to be communicated (basic concepts, cutting edge research, issues to do with chemical education or chemical workplaces, chemistry in everyday products or in highly charged political debates, etc., etc.). And, as mentioned already, the chemistry blogosphere, like chemistry as a discipline, contains multitudes. There is so much going on, in so many sub-specialities, that it’s hard to draw too many useful generalizations.

For the reader, this diversity of chemistry blogging is a good thing, not a bad thing — at least if the reader is brave enough to venture beyond networks which don’t always have lots of blogs devoted to chemistry. Some good places to look:

Blogs about chemistry indexed by ScienceSeeker

CENtral Science (which is a blog network, but one devoted to chemistry by design)

Many excellent chemistry blogs are linked in this post at ScienceGeist. Indeed, ScienceGeist is an excellent chemistry blog.

Have you been reading the Scientopia Guest Blog lately? If so, you’ve had a chance to read Dr. Rubidium’s engaging discussions of chemistry that pops up in the context of sex and drugs. I’m sure rock ‘n’ roll is on deck.

Finally, David Kroll’s blogroll has more fine chemistry-related blogs than you can shake a graduated cylinder at.

If there are other blogospheric communicators of chemistry you’d like to single out, please tell us about them in the comments.
______
*Yes, I have a new blog there, but this blog isn’t going anywhere.

Astounding claims made on the internet.

On this post, from a commenter named Zac*:

I’m a philosophy student. When we disagree we reason, we argue, we discuss. We do not, ever, ever call for a boycott of those with opposing views.

My thoughts on this:

  1. What would make you think that reasoning, arguing, and discussing necessarily rule out boycotting? Where’s the logical contradiction you’re assuming?
  2. Also, would you like to offer a positive argument that people with whom we disagree are entitled to our money? I mean, if we take their goods and services they have a claim to our money, but do they have a right to demand that we not take our business elsewhere?
  3. And look, for this particular X, the claim that “Philosophers do not do X” turns out to be clearly false. Perhaps you meant to make a normative claim rather than a descriptive one. If this is the case, you’ll probably also want to offer a defense of the particular “oughts” you are asserting.

This edition of “Astounding claims made on the internet” brought to you by my current inability to settle down and grade case study responses.
_____
*The permalink may be wonky. The comment appears at July 13, 2011 at 01:51 AM.

Assumptions that seem reasonable to undergraduates.

Gleaned from my “Ethics in Science” students:

  1. There exists an Official Scientist’s Code of Ethics to which all scientists swear allegiance.
  2. There exists an Ethics Board that operates nationally (and maybe internationally) to impose penalties on scientists who violate the Official Scientist’s Code of Ethics.
  3. In the 22 years since the publication of Cantor’s Dilemma, the scientific community has likely evolved to become more civilized and more ethical.
  4. Anyone who has earned a Ph.D. in a scientific field (at least in the past 22 years) must also have had extensive training in ethics — at least the equivalent of a semester-long course.

As to the origins of these assumptions, I don’t know what to tell you. I’m curious about that myself.

Limits of ethical recycling.

In the “Ethics in Science” course I regularly teach, we spend some time discussing case studies to explore some of the situations students may encounter in their scientific training or careers where they will want to be able to make good ethical decisions.

A couple of these cases touch on the question of “recycling” pieces of old grant proposals or journal articles — say, the background and literature review.

There seem to be cases where the right thing to do is pretty straightforward. For example, helping yourself to the background section someone else had written for her own grant proposal would be wrong. This would amount to misappropriating someone else’s words and ideas without her permission and without giving her credit. (Plagiarism anyone?) Plus, it would be weaseling out of one’s own duty to actually read the relevant literature, develop a view about what it’s saying, and communicate clearly why it matters in motivating the research being proposed.

Similarly, reusing one’s own background section seems pretty clearly within the bounds of ethical behavior. You did the intellectual labor yourself, and especially in the case where you are revising and resubmitting your own proposal, there’s no compelling reason for you to reinvent that particular wheel (unless, of course, reviewer comments indicate that the background section requires serious revision, the literature cited ought to take account of important recent developments that were missing in the first round, etc.).

Between these two extremes, my students happened upon a situation that seemed less clear-cut. How acceptable is it to recycle the background section (or experimental protocol, for that matter) from an old grant proposal you wrote in collaboration with someone else? Does it make a difference whether that old grant proposal was actually funded? Does it matter whether you are “more powerful” or “less powerful” (however you want to cash that out) within the collaboration? Does it require explicit permission from the person with whom you collaborated on the original proposal? Does it require clear citation of the intellectual contribution of the person with whom you collaborated on the original proposal, even if she is not officially a collaborator on the new proposal?

And, in your experience, does this kind of recycling make more sense than just sitting down and writing something new?

A question for the trainees: How involved do you want the boss to get with your results?

This question follows on the heels of my recent discussion of the Bengü Sezen misconduct investigations, plus a conversation via Twitter that I recapped in the last post.

The background issue is that people — even scientists, who are supposed always to be following the evidence wherever it might lead — can run into trouble really scrutinizing the results of someone they trust (however that trust came about). Indeed, in the Sezen case, her graduate advisor at Columbia University, Dalibor Sames, seemed to trust Sezen and her scientific prowess so much that he discounted the results of other graduate students in his lab who could not replicate Sezen’s results (which turned out to have been faked).

Really, it’s the two faces of the PI’s trust here: trusting one trainee so much that he assumed her results couldn’t be wrong, and using that trust to ignore the empirical evidence presented by other trainees (who apparently didn’t get the same level of presumptive trust). As it played out, at least three of those other trainees whose evidence Sames chose not to trust left the graduate program before earning their degrees.

The situation suggests to me that PIs would be prudent to establish environments in their research groups where researchers don’t take scrutiny of their results, data, methods, etc., personally — and where the scrutiny is applied to each member’s results, data, methods, etc. (since anyone can make mistakes). But how do things play out when the rubber hits the road?

So, here’s the question I’d like to ask the scientific trainees. (PIs: I’ve posed the complementary question to you in the post that went up right before this one!)

In his or her capacity as PI, your advisor’s scientific credibility (and likely his or her name) is tied to all the results that come out of the research group — whether they are experimental measurements, analyses of measurements, modeling results, or whatever else it is that scientists of your stripe regard as results. Moreover, in his or her capacity as a trainer of new scientists, the boss has something like a responsibility to make sure you know how to generate reliable results — and that you know how to tell them from results that aren’t reliable. What does your PI do to ensure that the results you generate are reliable? Do you feel like it’s enough (both in terms of quality control and in terms of training you well)? Do you feel like it’s too much?

Commenting note: You may feel more comfortable commenting with a pseudonym for this particular discussion, and that’s completely fine with me. However, please pick a unique ‘nym and keep it for the duration of this discussion, so we’re not in the position of trying to sort out which “Anonymous” is which. Also, if you’re a regular commenter who wants to go pseudonymous for this discussion, you’ll probably want to enter something other than your regular email address in the commenting form — otherwise, your Gravatar may give your other identity away!

A question for the PIs: How involved do you get in your trainees’ results?

In the wake of this post that touched on recently released documents detailing investigations into Bengü Sezen’s scientific misconduct, and that noted that a C & E News article described Sezen as a “master of deception”, I had an interesting chat on the Twitters:

@UnstableIsotope (website) tweeted:

@geernst @docfreeride I scoff at the idea that Sezen was a master at deception. She lied a lot but plenty of opportunities to get caught.

@geernst (website) tweeted back:

@UnstableIsotope Maybe evasion is a more accurate word.

@UnstableIsotope:

@geernst I’d agree she was a master of evasion. But she was caught be other group members but sounds like advisor didn’t want to believe it.

@docfreeride (that’s me!):

@UnstableIsotope @geernst Possible that she was master of deception only in environment where people didn’t guard against being deceived?

@UnstableIsotope:

@docfreeride @geernst I agree ppl didn’t expect deception, my read suggests she was caught by group members but protected by advisor.

@UnstableIsotope:

@docfreeride @geernst The advisor certainly didn’t expect deception and didn’t encourage but didn’t want to believe evidence

@docfreeride:

@UnstableIsotope @geernst Not wanting to believe the evidence strikes me as a bad fit with “being a scientist”.

@UnstableIsotope:

@docfreeride @geernst Yes, but it is human. Not wanting to believe your amazing results are not amazing seems like a normal response to me.

@geernst:

@docfreeride @UnstableIsotope I agree. Difficult to separate scientific objectivity from personal feelings in those circumstances.

@docfreeride:

@geernst @UnstableIsotope But isn’t this exactly the argument for not taking scrutiny of your results, data, methods personally?

@UnstableIsotope:

@docfreeride @geernst Definitely YES. I look forward to people repeating my experiments. I’m nervous if I have the only result.

@geernst:

@docfreeride @UnstableIsotope Couldn’t agree more.

This conversation prompted a question I’d like to ask the PIs. (Trainees: I’m going to pose the complementary question to you in the very next post!)

In your capacity as PI, your scientific credibility (and likely your name) is tied to all the results that come out of your research group — whether they are experimental measurements, analyses of measurements, modeling results, or whatever else it is that scientists of your stripe regard as results. What do you do to ensure that the results generated by your trainees are reliable?

Now, it may be the case that what you see as the appropriate level of involvement/quality control/”let me get up in your grill while you repeat that measurement for me” would still not have been enough to deter — or to detect — a brazen liar. If you want to talk about that in the comments, feel free.

Commenting note: You may feel more comfortable commenting with a pseudonym for this particular discussion, and that’s completely fine with me. However, please pick a unique ‘nym and keep it for the duration of this discussion, so we’re not in the position of trying to sort out which “Anonymous” is which. Also, if you’re a regular commenter who wants to go pseudonymous for this discussion, you’ll probably want to enter something other than your regular email address in the commenting form — otherwise, your Gravatar may give your other identity away!

What are honest scientists to do about a master of deception?

A new story posted at Chemical & Engineering News updates us on the fraud case of Bengü Sezen (who we discussed here, here, and here at much earlier stages of the saga).

William G. Schultz notes that documents released (PDF) by the Department of Health and Human Services (which houses the Office of Research Integrity) detail some really brazen misconduct on Sezen’s part in her doctoral dissertation at Columbia University and in at least three published papers.

From the article:

The documents—an investigative report from Columbia and HHS’s subsequent oversight findings—show a massive and sustained effort by Sezen over the course of more than a decade to dope experiments, manipulate and falsify NMR and elemental analysis research data, and create fictitious people and organizations to vouch for the reproducibility of her results. …

A notice in the Nov. 29, 2010, Federal Register states that Sezen falsified, fabricated, and plagiarized research data in three papers and in her doctoral thesis. Some six papers that Sezen had coauthored with Columbia chemistry professor Dalibor Sames have been withdrawn by Sames because Sezen’s results could not be replicated. …

By the time Sezen received a Ph.D. degree in chemistry in 2005, under the supervision of Sames, her fraudulent activity had reached a crescendo, according to the reports. Specifically, the reports detail how Sezen logged into NMR spectrometry equipment under the name of at least one former Sames group member, then merged NMR data and used correction fluid to create fake spectra showing her desired reaction products.

Apparently, her results were not reproducible because those trying to reproduce them lacked her “hand skills” with Liquid Paper.

Needless to say, this kind of behavior is tremendously detrimental to scientific communities trying to build a body of reliable knowledge about the world. Scientists are at risk of relying on published papers that are based in wishes (and lies) rather than actual empirical evidence, which can lead them down scientific blind alleys and waste their time and money. Journal editors devoted resources to moving her (made-up) papers through peer review, and then had to devote more resources to dealing with their retractions. Columbia University and the U.S. government got to spend a bunch of money investigating Sezen’s wrongdoing — the latter expenditures unlikely to endear scientific communities to an already skeptical public. Even within the research lab where Sezen, as a grad student, was concocting her fraudulent results, her labmates apparently wasted a lot of time trying to reproduce her results, questioning their own abilities when they couldn’t.

And to my eye, one of the big problems in this case is that Sezen seems to have been the kind of person who projected confidence while lying her pants off:

The documents paint a picture of Sezen as a master of deception, a woman very much at ease with manipulating colleagues and supervisors alike to hide her fraudulent activity; a practiced liar who would defend the integrity of her research results in the face of all evidence to the contrary. Columbia has moved to revoke her Ph.D.

Worse, the reports document the toll on other young scientists who worked with Sezen: “Members of the [redacted] expended considerable time attempting to reproduce Respondent’s results. The Committee found that the wasted time and effort, and the onus of not being able to reproduce the work, had a severe negative impact on the graduate careers of three (3) of those students, two of whom [redacted] were asked to leave the [redacted] and one of whom decided to leave after her second year.”

In this matter, the reports echo sources from inside the Sames lab who spoke with C&EN under conditions of anonymity when the case first became public in 2006. These sources described Sezen as Sames’ “golden child,” a brilliant student favored by a mentor who believed that her intellect and laboratory acumen provoked the envy of others in his research group. They said it was hard to avoid the conclusion that Sames retaliated when other members of his group questioned the validity of Sezen’s work.

What I find striking here is that Sezen’s vigorous defense of her own personal integrity was sufficient, at least for a while, to convince her mentor that those questioning the results were in the wrong — not just incompetent to reproduce the work, but jealous and looking to cause trouble. And, it’s deeply disappointing that this judgment may have been connected to the departure of those fellow graduate students who raised questions from their graduate program.

How could this have been avoided?

Maybe a useful strategy would have been to treat questions about the scientific work (including its reproducibility) first and foremost as questions about the scientific work.

Getting results that others cannot reproduce is not prima facie evidence that you’re a cheater-pants. It may just mean that there was something weird going on with the equipment, or the reagents, or some other component of the experimental system when you did the experiment that yielded the exciting but hard to replicate results. Or, it may mean that the folks trying to replicate the results haven’t quite mastered the technique (which, in the case that they are your colleagues in the lab, could be addressed by working with them on their technique). Or, it may mean that there’s some other important variable in the system that you haven’t identified as important and so have not worked out (or fully described) how to control.

In this case, of course, it’s looking like the main reason that Sezen’s results were not reproducible was that she made them up. But casting the failure to replicate presumptively as one scientist’s mad skillz and unimpeachable integrity against another’s didn’t help get to the bottom of the scientific facts. It made the argument personal rather than putting the scientists involved on the same team in figuring out what was really going on with the scientific systems being studied.

Of all of the Mertonian norms imputed to the Tribe of Science, organized skepticism is probably the one nearest and dearest to most scientists’ basic understanding of how they get the knowledge-building job done. Figuring out what’s going on with particular phenomena in the world can be hard, not least because lining up solid evidence to support your conclusions requires identifying evidence that others trying to repeat your work can reliably obtain themselves. This is more than just a matter of making sure your results are robust. Rather, you want others to be able to reproduce your work so that you know you haven’t fooled yourself.

Organized skepticism, in other words, should start at home.

There is a risk of being too skeptical of your own results, and a chance of dismissing something important as noise because it doesn’t fit with what you expect to observe. However, the scientist who refuses to entertain the possibility that her work could be wrong — indeed, who regards questions about the details of her work as a personal affront — should raise a red flag for the rest of her scientific community, no matter what her career stage or her track record of brilliance to date.

In a world where every scientist’s findings are recognized as being susceptible to error, the first response to questions about findings might be to go back to the phenomena together, helping each other to locate potential sources of error and to avoid them. In such a world, the master of deception trying to ride personal reputation (or good initial impressions) to avoid scrutiny of his or her work will have a much harder time getting traction.

Evaluating scientific reports (and the reliability of the scientists reporting them).

One of the things scientific methodology has going for it (at least in theory) is a high degree of transparency. When scientists report findings to other scientists in the community (say, in a journal article), it is not enough for them to just report what they observed. They must give detailed specifications of the conditions in the field or in the lab — exactly how they set up and ran the experiment, chose their sample, and made their measurements. They must explain how they processed the raw data they collected, giving a justification for processing it this way. And, in drawing conclusions from their data, they must anticipate concerns that the data might have been due to something other than the phenomenon of interest, or that the measurements might better support an alternate conclusion, and answer those objections.

A key part of transparency in scientific communications is showing your work. In their reports, scientists are supposed to include enough detailed information so that other scientists could set up the same experiments, or could follow the inferential chain from raw data to processed data to conclusions and see if it holds up to scrutiny.

Of course, scientists try their best to apply hard-headed scrutiny to their own results before they send the manuscript to the journal editors, but the whole idea of peer review, and indeed the communication around a reported result that continues after publication, is that the scientific community exercises “organized skepticism” in order to discern which results are robust and reflective of the system under study rather than wishful thinking or laboratory flukes. If your goal is accurate information about the phenomenon you’re studying, you recognize the value of hard questions from your scientific peers about your measurements and your inferences. Getting it right means catching your mistakes and making sure your conclusions are well grounded.

What sort of conclusions should we draw, then, when a scientist seems resistant to transparency, evasive in responding to concerns raised by peer reviewers, and indignant when mistakes are brought to light?

It’s time to revisit the case of Stephen Pennycook and his research group at Oak Ridge National Laboratory. In an earlier post I mused on the saga of this lab’s 1993 Nature paper [1] and its 2006 correction [2] (or “corrigendum” for the Latin fans), in light of allegations that the Pennycook group had manipulated data in another recent paper submitted to Nature Physics. (In addition to the coverage in the Boston Globe (PDF), the situation was discussed in a news article in Nature [3] and a Nature editorial [4].)

Now, it’s time to consider the recently uploaded communication by J. Silcox and D. A. Muller (PDF) [5] that analyzes the corrigendum and argues that a retraction, not a correction, was called for.

It’s worth noting that this communication was (according to a news story at Nature about how the U.S. Department of Energy handles scientific misconduct allegations [6]) submitted to Nature as a technical comment back in 2006 and accepted for publication “pending a reply by Pennycook.” Five years later, uploading the technical comment to arXiv.org makes some sense, since a communication that never sees the light of day doesn’t do much to further scientific discussion.

Given the tangle of issues at stake here, we’re going to pace ourselves. In this post, I lay out the broad details of Silcox and Muller’s argument (drawing also on the online appendix to their communication) as to what the presented data show and what they do not show. In a follow-up post, my focus will be on what we can infer from the conduct of the authors of the disputed 1993 paper and 2006 corrigendum in their exchanges with peer reviewers, journal editors, and the scientific community. Then, I’ll have at least one more post discussing the issues raised by the Nature news story and the related Nature editorial on the DOE’s procedures for dealing with alleged misconduct [7].


The economy might be getting better for someone …

… but I daresay that “someone” is not the typical student at a public school or university in the state of California.

The recent news about the impact of the California State budget on the California State University system:

The 2011-12 budget will reduce state funding to the California State University by at least $650 million and proposes an additional mid-year cut of $100 million if state revenue forecasts are not met. A $650 million cut reduces General Fund support for the university to $2.1 billion and will represent a 23 percent year over year cut to the system. An additional cut of $100 million would reduce CSU funding to $2.0 billion and represent a 27 percent year-to-year reduction in state support.

“What was once unprecedented has unfortunately become normal, as for the second time in three years the CSU will be cut by well over $500 million,” said CSU Chancellor Charles B. Reed. “The magnitude of this cut, compounded with the uncertainty of the final amount of the reduction, will have negative impacts on the CSU long after this upcoming fiscal year has come and gone.”

The $2.1 billion in state funding allocated to the CSU in the 2011-12 budget will be the lowest level of state support the system has received since the 1998-99 fiscal year ($2.16 billion), and the university currently serves an additional 90,000 students. If the system is cut by an additional $100 million, state support would be at its lowest level since 1997-98.

Two immediate responses to these cuts will be to decrease enrollments (by about 10,000 students across the 23 campuses of the CSU system) and increase “fees” (what we call tuition, since originally the California Master Plan for Higher Education didn’t include charging tuition, on the theory that educated Californians were some sort of public good worth supporting), yet again, by another $300 per semester or so.

“Why cut enrollments?” I hear some of you ask. Well, because the state still puts up a portion of the money required to actually educate each enrolled student (although that portion is now less than half of what the students must put up themselves). So 10,000 fewer students means 10,000 fewer “state’s share” expenditures. And, short term, that’s a savings for the taxpayers. Long term, however, it may cost us.

Those students circling the tarmac, hoping to be admitted to the CSU (or University of California) system as students, are only going to cool their heels in community college for so long. (Plus, the community colleges are impacted by the decrease in transfer slots due to slashed enrollments, and have had their budgets cut because of the state’s fiscal apocalypse.) At a certain point, many of them will give up on earning college degrees, or will give up on earning them in California. And if the place where they earn those college degrees is less enthusiastic about slashing education budgets to the bone, these erstwhile Californians may well judge it prudent to put down roots, since it will make it easier to secure a good education for their offspring or partners, or a good continuing education for themselves.

I do not imagine a brain drain would do much to help California’s economy to recover.

In possibly related “what is the deal with our public schools?!” news, the elder Free-Ride offspring will be starting junior high (which, in our district, includes seventh and eighth grades) in the fall. The junior high school day consists of just enough periods for English, math, science, social studies, lunch, and one elective.* The elective choices include things like wood shop, or home economics, or band, or a foreign language. But unless your child has mastered bilocation, there is no option to take French and band, or mechanical drawing and Mandarin. Plus, school is out at like 2:15 PM — well before the standard 9-to-5 workday is over. Of course, this doesn’t take into account how many parents work more than eight hours a day (and may be hesitant to complain about it because at least they still have jobs) or how much time they have to spend commuting to and from those jobs. The bottom line seems to be that the public is unwilling to fund more than five academic periods per day of junior high. The public doesn’t even appreciate the utility of keeping the young people off the streets until 3 PM.

Verily, I suspect that the only thing holding us back from abolishing child labor laws is that the additional infusion of labor would make our unemployment numbers worse, which would rather undermine the narrative that the economy is turning a corner to happy days.

This lack of progress addressing the budgetary impacts on education — indeed, this apparent willingness to believe that education shouldn’t actually cost money to provide — makes me a big old crankypants.
_______
* There is probably also some provision for physical education, because there is still something like a state requirement that there be physical education.