Figuring out why something makes me cranky.

For some time I have been aware of my own discomfort in situations where I’m talking about certain challenges for girls and women in their educational trajectory, or the difficulty of the academic job market, or the challenges of the tenure track.

Sometimes I’ll note, in passing, my own good fortune in navigating the difficult terrain. Sometimes I won’t. Yet, reliably, someone will chime in with something along the lines of:

“Yeah, it’s hard, but the best and the brightest, like you, will survive the rigors.”

This kind of comment makes me extremely grumpy.

And I know, usually, it’s offered as a compliment. Frequently, I think, it’s offered to counteract my residual impostor complex, to remind me that I do work very hard, and that the work I do actually has value by any reasonable metric of assessment — in other words, that my talents, skills, effort, and determination have made some causal contribution to my successes.

But I know plenty of people with talents, skills, effort, and determination comparable to mine — maybe even surpassing mine — who haven’t been as lucky. I’m not inclined to think that for every single one of them — or even for most of them — there’s a plausible causal story about some additional thing they could have done that would have made the difference.

Assuming there is amounts to assuming that our systems “work” to sort out the meritorious from the rest. That is a pretty serious assumption hanging out there with pretty scanty empirical backing.

And this morning I finally figured out how to articulate why I get cranky about the personal accolades and affirmations offered in response to my discussions of challenging systems and environments: they shift the discussion back to the level of individuals and individual actions, and away from the level of systems.

I guess if you think the systems are just fine, there’s not much point in examining them or thinking about ways they could be different.

But the evidence suggests to me that many of our systems are not just fine. When that’s what I’m trying to talk about, please don’t change the subject.

SPSP 2013 Contributed Papers: Communities & Institutions: Objectivity, Equality, & Trust

Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 28, 2013, during Concurrent Sessions VI

  1. This was a session, by the way, in which it was necessary to confront my limitations as a conference live-tweeter. The session was in a room where the only available electrical outlets were at the front (where the speakers were), and my battery was rapidly running out of juice.  And my right shoulder was seizing up.  And I ended up in Twitter Jail (for “too many tweets today!” per Twitter’s proprietary algorithm), which meant that the last chunk of tweets I composed for the second talk got pasted into a text file and tweeted hours later, while my notes for the third talk in the session went into my quad-ruled notebook.

    With multiple live-tweeters in a given session, this trifecta of fail (in my tweeting — the session papers were a trifecta of good stuff!) would have been less traumatic for me.  But philosophers are not quite as keen to live-tweet as, say, ScienceOnline attendees … yet.
    There was, however, a bit of backup!  Christine James was driving SPSP’s shiny new Twitter account, @SocPhilSciPract, and she happened to choose the same session of contributed papers to attend and to tweet.  She also tweeted some pictures.

Ponderable: Academic hiring and interviewing.

It has been eleven years since I was last on the market for an academic job, and about six years (if I’m remembering correctly) since I was last on a search committee working to fill a tenure-track position in my department. Among other things, this means that I can consider the recent discussion of “conference interviews” at The Philosophy Smoker with something approaching “distance”.

However, as I’m well aware, distance is not the same as objectivity, and anyway objectivity is not the kind of thing you can achieve solo, so I’m going to do a little thinking out loud on the screen in the hopes that you all may chime in.

The nub of the issue is how search committees in philosophy (and in at least some other academic disciplines) use preliminary interviews (typically 30 to 60 minutes in length) to winnow their “best” applicants for a position (as judged on the basis of writing samples, publication records, letters of recommendation, transcripts, teaching evaluations, and other written materials) down to the finalists, the number of which must be small enough that you can reasonably afford to bring them out for campus interviews.

The winnowing down is crucial. From more than a hundred applications, a search committee can usually reach some substantial agreement on maybe twenty candidates whose application materials suggest the right combination of skills (in teaching and research, and maybe also skills that will be helpful in “service” to the department, the institution, and the academic discipline) and “fit” with the needs of the department (as far as teaching, advising students, and also creating a vibrant community in which colleagues have the potential for fruitful collaborations close at hand).

But even if we could afford to fly out 15 or 20 candidates for campus interviews (which typically run a day or two, which means we’d also be paying for food and lodging for the candidates), it would literally break our semester to interview so many. These interviews, after all, include seminars in which the candidates make a research presentation, teaching demonstrations (hosted in one of our existing classes, with actual students in attendance as well as search committee members observing), meetings with individual faculty members, meetings with deans, and a long interview with the whole search committee. This is hard enough to squeeze into your semester with only five candidates.

So, the standard procedure has been to conduct preliminary interviews of shorter duration with the 20 or so candidates who make the first cut at the Eastern Division Meeting of the American Philosophical Association. For departments like mine, these interviews happen at a table in a ballroom designated for this purpose. Departments that have a bit more money will rent a suite at the conference hotel and conduct the interviews there, with a bit less background noise.

Job candidates pretty much hate this setup. The conference falls during winter holidays (December 26-30 or so), which means travel is more expensive than it might be some other time of year. Search committees sometimes don’t decide who they want to interview at the convention until quite late in the game, which means candidates may not hear that a department would like to interview them until maybe a week before the conference starts (boosting the price of those plane tickets even more, or making you gamble by buying a plane ticket in advance of having any interviews scheduled). Even at conference rates, the hotel rooms are expensive. Occasionally, winter storms create problems for candidates and search committee members trying to get to, or to flee from, the conference. Flu season piles on.

Search committee members are not wild about the logistics of traveling to the convention for the interviews, either. However, they feel like the conference interviews provide vital information in working out which of the top 20 or so candidates are the most likely to “fit” what the department wants and needs.

But this impression is precisely what is in question.

It has been pointed out (e.g., by Gilbert Harman, referencing research in social psychology) that interviews of the sort philosophy search committees use to winnow down the field add noise to the decision process rather than introducing reliable information beyond what is available in other application materials. This is not to say that search committees don’t believe that their 30 or 60 minutes talking with candidates tells them something useful. But this belief, however strong, is unwarranted. The search committee might as well push itself to identify the top five candidates on the basis of the application materials alone, or, if that’s not possible, randomly pick five of the top twenty for campus interviews.*
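
To make the “adding noise” worry concrete, here is a minimal Monte Carlo sketch of the argument (my toy model, not Harman’s; every number in it is an invented assumption): each candidate has a true level of merit, the application file measures that merit with modest error, and a brief interview measures it with much larger error.

```python
import random

# Toy model: how many of the truly best candidates make the campus-visit
# short list, with and without a noisy preliminary interview?
# All parameters are invented assumptions, not empirical estimates.

N_CANDIDATES = 20      # survivors of the first cut on written materials
N_FINALISTS = 5        # campus-visit slots
FILE_NOISE = 0.5       # assumption: files track merit fairly well
INTERVIEW_NOISE = 2.0  # assumption: a 30-60 minute chat is mostly noise
TRIALS = 10_000

def run_trial(use_interview):
    merit = [random.gauss(0, 1) for _ in range(N_CANDIDATES)]
    file_score = [m + random.gauss(0, FILE_NOISE) for m in merit]
    if use_interview:
        interview = [m + random.gauss(0, INTERVIEW_NOISE) for m in merit]
        signal = [f + i for f, i in zip(file_score, interview)]
    else:
        signal = file_score
    # Short-list the top candidates by the chosen signal, then count how
    # many of the truly best N_FINALISTS candidates made the cut.
    picked = set(sorted(range(N_CANDIDATES), key=lambda c: -signal[c])[:N_FINALISTS])
    best = set(sorted(range(N_CANDIDATES), key=lambda c: -merit[c])[:N_FINALISTS])
    return len(picked & best)

random.seed(42)
for use_interview in (False, True):
    avg = sum(run_trial(use_interview) for _ in range(TRIALS)) / TRIALS
    label = "files + interview" if use_interview else "files alone"
    print(f"{label}: {avg:.2f} of the true top {N_FINALISTS} short-listed")
```

With these invented noise levels, the committee that ignores the interview short-lists more of its genuinely strongest candidates: the noisier measurement dilutes the more reliable one. On this picture, structuring interviews (discussed below) is an attempt to shrink that noise term rather than to add another one.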

Of course, search committees seem not to be in a great hurry to abandon conference interviews, at least in philosophy. My (brief) experience on the scientific job market didn’t include conference job interviews per se, but I did have preliminary interviews of very much the same nature and duration with some private sector companies and national labs — which is to say, I don’t think it’s just philosophers who are making hiring decisions that are at least partially grounded on a type of information we have reason to believe could be misleading.

The question, of course, is what to do about all this.

Search committees could abandon these preliminary interviews altogether. That would surely put more pressure on the written components of the applications, some of which might themselves be misleading in interesting ways. I’m guessing search committees would resist this, since they believe (although mistakenly, if the research is right) that they really are learning something important from the interviews. It’s not obvious to me that job candidates would unanimously endorse this either (since some see the interview as a chance to make their case more vividly — but again, maybe what they’re making is pseudo-evidence for their case).

Search committees could work to structure preliminary interviews so that they provide more reliable information (as the research suggests properly structured interviews actually do).** This would require search committee members to learn how properly to conduct such interviews (and how properly to record them for later examination and evaluation). Moreover, it would require that search committee members do something like acknowledging that their instincts about how to conduct free-flowing, open-ended preliminary interviews that are also informative are probably just wrong. This is a task with a difficulty level that’s probably right around what it takes to get science faculty to acknowledge that having learned a lot about their field might not be sufficient to be able to teach it effectively, and that science education research might be a useful source of empirically grounded pedagogical insight. In other words, I think it would be really hard.

Search committees could keep conducting preliminary interviews as they always have. Inertia can be powerful, as can the feeling that you really are learning something from the interviews. However, it seems like a search committee, when drawing conclusions on the basis of preliminary interviews, would have to take into account the claim that such interviews are empirically misleading. (Of course this is a normative claim — the search committees ought to take this worry into account — rather than a claim that mere exposure to a research finding would be enough to remove the search committee’s collective powers of self-delusion.)

Or … search committees could do something else?

What else could they do here? How do those of you in scientific fields handle the role of interviewing in hiring? Specifically, do you take concrete measures to ensure that interviews don’t introduce noise into hiring decisions? Or do you feel that the hiring decisions you need to make admit of sufficiently objective information that this just isn’t a problem for you?

If you prefer to comment pseudonymously for this discussion, feel free, but one pseudonym to a customer please.

_____
* For all I know, campus interviews may introduce some of the same kinds of noise to the decision-making process as conference interviews do. However, many include teaching demonstrations with a sample from the actual student population the candidate would be asked to teach if hired, a formal presentation of the candidate’s research (including responding to questions about it), and ample opportunity for members of the hiring department to get a sense of whether the candidate is someone with whom one could interact productively or instead someone who might drive one up a wall.

** It is worth noting that some search committees, even in philosophy departments, actually do conduct structured interviews.

Chem Coach: a career outside of science with more chemistry than you might expect.

In honor of National Chemistry Week, See Arr Oh is spearheading the Chem Coach Carnival, which he describes as an “online repository of chemistry job success stories”. The posts from the first two days make for interesting and inspiring reading.

Given that, by official reckoning, I leaked out of the science pipeline, it wasn’t obvious to me that I had a chemistry job success story to share. But See Arr Oh asked me to share, and I love my job, and it turns out that chemistry has more than a little to do with how I do it. So, here we go:

My current job:
Associate professor of philosophy at a teaching-focused university, with my teaching and research focused on philosophy of science and ethics in science.

What I do in a standard “work day”:
Let’s skip over the parts that make it “work” (i.e., grading, committee meetings, getting swallowed up by bureaucracy) since I imagine those are pretty similar to what chemistry professors get to do. Instead, I’ll tell you about the teaching and research.

In the classroom, I teach mostly upper division students (juniors and seniors, but with some masters students in the mix). About half of my teaching ends up being an “Ethics in Science” course (multiple sections each year) that is required of our chemistry majors, heavily enrolled by other science majors, but also taken by a good handful of non-scientists who are curious about what’s involved in doing good science, and in scientists and non-scientists successfully sharing a world. You can peek at the current syllabus to get a feel for the sweep of the topics we discuss. The other half of my teaching assignment is usually “Philosophy of Science” (again, multiple sections each year), a straight-ahead intro to the subject with the usual philosophical discussions of how scientific knowledge gets built, whether we have good grounds for believing the scientific method can deliver on its promises, what attitude we should take towards our best scientific theories (approaching literally true, or merely empirically adequate), and so forth. The interesting twist is that a lot of the population taking “Philosophy of Science” is there to fulfill the upper division general education science requirement. (Yeah, I know.) So, basically, this is an opportunity to take a whole bunch of people who are kind of scared of science and give them a basic understanding of where scientific knowledge comes from.

The research I do focuses a lot on the different conceptual and methodological toolboxes different scientific disciplines use to build science (philosophers of science of yore loved physics but really neglected chemistry), and on saying useful things about how to understand “ethical practice of science” in the particular circumstances in which scientists and scientific trainees find themselves in our world.

What kind of schooling / training / experience helped me get there?:
As an undergraduate, I double-majored in chemistry and philosophy. Then I got my Ph.D. in chemistry because I kind of thought I’d just read philosophy at home after work. Well … it didn’t turn out that way. The philosophical questions about science kept squeaking for my attention, and when I recognized that pursuing those was probably what would make me happy, I got another Ph.D. in philosophy, with a focus on the history and philosophy of science.

I should tell you that I got my chemistry Ph.D. relatively quickly (4.25 years), which made re-upping for another Ph.D.-length stint in grad school far more palatable than it would have been otherwise. If I had taken more like 8 years to get the first Ph.D., I think I would have been more likely just to get an M.A. in philosophy, or to do a “Ph.D. minor” in philosophy (that was an option my graduate institution had that I didn’t find out about until I was well into the second Ph.D.).

How does chemistry inform my work?
In my research (in philosophy, this looks an awful lot like reading and writing!), my experience with chemical methodology and the “forms of life” of scientists who do chemistry ends up being really useful when I read someone making sweeping generalizations about how all good science must work based on a close examination of physics. Chemistry differs from physics in interesting ways, which means a careful philosopher of science needs to build a model of science that can accommodate chemical practice too — or else dismiss chemistry as an “immature” science or some hogwash like that. Indeed, philosophers have been working on developing an interesting subfield in philosophy of chemistry.

The ethical practice of science part of my research is more informed by the types of human interactions in knowledge-building that I observed during my misspent scientific youth, but some of the issues that are especially important to chemists (like safety, so the knowledge-building doesn’t kill you) are of special interest to me.

You can probably guess how that misspent scientific youth is important in providing examples for discussion with my “Ethics in Science” students. It also helps me frame discussions of strategies for being ethical in situations where one is decidedly on the low end of the community power gradient. In my “Philosophy of Science” class, of course, I sneak in examples from chemistry whenever I can!

A unique, interesting, or funny anecdote about my career: 
I’ve been on conference panels a couple times with a Nobel Prize winner in chemistry, but only after I started doing philosophy. (Once was at a philosophy of science meeting, the other was at a chemistry meeting.)

* * * * *

If you’d like to honor National Chemistry Week and the chemistry bloggers who keep its spirit alive every week, you might consider kicking a few bucks into the Chembloggers general donations during the DonorsChoose Science Bloggers for Students Challenge.

In which too much grading plus Mel Brooks leads me to ponder the nature of crowd reactions at scientific presentations.

Fair warning: I have been grading for the last several days, and grading makes me silly. This post may give you a sense of just how silly.

Last night, during a brief break in grading, I caught the last half of Young Frankenstein on TV.

Dr. Frankenstein’s presentation of the Creature to the public, under the auspices of the Transylvania Neurological Society, is one of my favorite parts of the movie, not least because Dr. Frankenstein is so very quotable. “Please! Remain in your seats, I beg you! We are not children here, we are scientists!” and “For safety’s sake, don’t humiliate him!” are just two exhortations that I can imagine getting some good use in scientific presentations.

Also, when Dr. Frankenstein’s presentation of the Creature goes off the rails, members of the audience start pelting both scientist and monster with what look to be cabbages.

Which led me to notice that there are not too many scientific presentations nowadays at which audience members throw fruit or vegetables at the presenters.

Possibly this is a reflection of the current direction of scientific work — focused on findings so unsurprising (at least in a global sense) as to be unlikely to elicit strong reactions from those hearing them. Or maybe scientists are confining their disbelief and outrage to private channels, say, by fuming about presentations in lab meetings once they’ve returned from the conferences at which those presentations were given, or saving the worst of their aggressive outbursts for when they are the third reviewer.

On the other hand, maybe it reflects the limited supply of fruits and vegetables available at most venues for scientific presentations.

Your better complimentary continental breakfast spreads can be counted on for apples, bananas, and oranges, but not so much for cabbages or overripe tomatoes. And, some conference venues (like the San Diego Convention Center) don’t really have free food so much as places to buy snacks — snacks which tend to be pretzels or muffins or cookies, items not traditionally hurled to register one’s disagreement with a research presentation.

Are warm pretzels too delicious an item to hurl at one’s fellow scientist to register one’s disbelief? Do muffins not fly well enough, or fail to generate sufficient force at impact? Or is it primarily a matter of the cost of these items that makes them unappealing as instruments of peer review?

Maybe this calls out for an economic analysis?

In the event that you had a cabbage handy, given the relative scarcity of cabbages at scientific meetings, would you tend to keep it rather than throwing it just in case the next presentation turned out to be even worse? And wouldn’t there be something like an opportunity cost associated with holding onto the cabbage, given how much room it would take up in the conference tote bag?
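
Since an economic analysis has been called for, here is a deliberately silly back-of-the-envelope sketch of the hold-or-hurl decision problem (every number in it is invented, in keeping with the spirit of the exercise):

```python
# Tongue-in-cheek expected-value sketch of the hold-or-hurl decision.
# All quantities are invented for illustration.

P_WORSE_TALK_LATER = 0.3   # assumed chance a later talk is even worse
SATISFACTION_NOW = 1.0     # payoff of hurling the cabbage at this talk
SATISFACTION_LATER = 2.5   # payoff of hurling it at an even worse talk
TOTE_BAG_COST = 0.4        # disutility of lugging a cabbage around all day

def expected_value_of_holding():
    # Hold the cabbage: pay the tote-bag cost no matter what, and cash in
    # only if a worse talk actually materializes.
    return P_WORSE_TALK_LATER * SATISFACTION_LATER - TOTE_BAG_COST

def value_of_throwing_now():
    return SATISFACTION_NOW

if expected_value_of_holding() > value_of_throwing_now():
    print("Economically rational to save the cabbage for a worse talk.")
else:
    print("Hurl away: the opportunity cost of holding is too high.")
```

With these numbers, hurling now wins: the tote-bag cost and the uncertainty that a worse talk will materialize eat up the option value of holding the cabbage.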

Really, someone should investigate this. But not me, because I still have grading to do.

Scientific authorship: guests, courtesy, contributions, and harms.

DrugMonkey asks, where’s the harm in adding a “courtesy author” (also known as a “guest author”) to the author line of a scientific paper?

I think this question has interesting ethical dimensions, but before we get into those, we need to say a little bit about what’s going on with authorship of scientific papers.

I suppose there are possible worlds in which who is responsible for what in a scientific paper might not matter. In the world we live in now, however, it’s useful to know who designed the experimental apparatus and got the reaction to work (so you can email that person your questions when you want to set up a similar system), who did the data analysis (so you can share your concerns about the methodology), who made the figures (so you can raise concerns about digital fudging of the images), etc. Part of the reason people put their names on scientific papers is so we know who stands behind the research — who is willing to stake their reputation on it.

The other reason people put their names on scientific papers is to claim credit for their hard work and their insights, their contribution to the larger project of scientific knowledge-building. If you made a contribution, the scientific community ought to know about it so they can give you props (and funding, and tenure, and the occasional Nobel Prize).

But, we aren’t in a position to make accurate assignments of credit or responsibility if we have no good information about what an author’s actual involvement in the project may have been. We don’t know who’s really in a position to vouch for the data, or who really did the heavy intellectual lifting in bringing the project to fruition. We may understand, literally, the claim, “Joe Schmoe is second author of this paper,” but we don’t know what that means, exactly.

I should note that there is not one universally recognized authorship standard for all of the Tribe of Science. Rather, different scientific disciplines (and subdisciplines) have different practices as far as what kind of contribution is recognized as worthy of inclusion as an author on a paper, and as far as what the order in which the authors are listed is supposed to communicate about the magnitude of each contribution. In some fields, authors are always listed alphabetically, no matter what they contributed. In others, being first in the list means you made the biggest contribution, followed by the second author (who made the second-biggest contribution), and so forth. It is usually the case that the principal investigator (PI) is identified as the “corresponding author” (i.e., the person to whom questions about the work should be directed), and often (but not always) the PI takes the last slot in the author line. Sometimes this is an acknowledgement that while the PI is the brains of the lab’s scientific empire, particular underlings made more immediately important intellectual contributions to the particular piece of research the paper is communicating. But authorship practices can be surprisingly local. Not only do different fields do it differently, but different research groups in the same field — at the same university — do it differently. What this means is it’s not obvious at all, from the fact that your name appears as one of the authors of a paper, what your contribution to the project was.

There have been attempts to nail down explicit standards for what kinds of contributions should count for authorship, with the ICMJE definition of authorship being one widely cited effort in this direction. Not everyone in the Tribe of Science, or even in the subset of the tribe that publishes in biomedical journals, thinks this definition draws the lines in the right places, but the fact that journal editors grapple with formulating such standards suggests at least the perception that scientists need a clear way to figure out who is responsible for the scientific work in the literature. We can have a discussion about how to make that clearer, but we have to acknowledge that at the present moment, just noting that someone is an author without some definition of what that entails doesn’t do the job.

Here’s where the issue of “guest authorship” comes up. A “guest author” is someone whose name appears in a scientific paper’s author line even though she has not made a contribution that is enough (under whatever set of standards one recognizes for proper authorship) to qualify her as an author of the paper.

A guest is someone who is visiting. She doesn’t really live here, but stays because of the courtesy and forbearance of the host. She eats your food, sleeps under your roof, uses your hot water, watches your TV — in short, she avails herself of the amenities the host provides. She doesn’t pay the rent or the water bill, though; that would transform her from a guest to a tenant.

To my way of thinking, a guest author is someone who is “just visiting” the project being written up. Rather than doing the heavy lifting in that project, she is availing herself of the amenities offered by association (in print) with that project, and doing so because of the courtesy and forbearance of the “host” author.

The people who are actually a part of the project will generally be able to recognize the guest author as a “guest” (as opposed to an actual participant). The people receiving the manuscript will not. In other words, the main amenity the guest author partakes in is credit for the labors of the actual participants. Even if all the participants agreed to this (and didn’t feel the least bit put out at the free-rider whose “authorship” might be diluting his or her own share of credit), this makes it impossible for those outside the group to determine what the guest author’s actual contribution was (or, in this case, was not). Indeed, if people outside the arrangement could tell that the guest author was a free-rider, there wouldn’t be any point in guest authorship.

Science strives to be a fact-based enterprise. Truthful communication is essential, and the ability to connect bits of knowledge to the people who contributed is part of how the community does quality control on that knowledge base. Ambiguity about who made the knowledge may lead to ambiguity about what we know. Also, developing too casual a relationship with the truth seems like a dangerous habit for a scientist to get into.

Coming back to DrugMonkey’s question about whether courtesy authorship is a problem, it looks to me like maybe we can draw a line between two kinds of “guests”: one who contributes nothing at all to the actual design, execution, evaluation, or communication of the research, and one who contributes something, just less than what the conventions require for proper authorship. If these characters were listed as authors on a paper, I’d be inclined to call the first one a “guest author” and the second a “courtesy author” in an attempt to keep them straight; the cases with which DrugMonkey seems most concerned are the “courtesy authors” in my taxonomy. In actual usage, however, the two labels seem to be more or less interchangeable. Naturally, this makes it harder to distinguish who actually did what — but it strikes me that this is just the kind of ambiguity people are counting on when they include a “guest author” or “courtesy author” in the first place.

What’s the harm?

Consider a case where the PI of a research group insists on giving authorship of a paper to a postdoc who hasn’t gotten his experimental system to work at all and is almost out of funding. The PI gives the justification that “He needs some first-author papers or his time here will have been a total waste.” As it happens, giving this postdoc authorship bumps the graduate student who did all the experimental work (and the conceptual work, and data analysis, and drafting of the manuscript) out of first author slot — maybe even off the paper entirely.

There is real harm here, to multiple parties. In this case, someone got robbed of appropriate credit, and the person identified as most responsible for the published work will be a not-very-useful person to contact with deeper questions about the work (since he didn’t do any of it or at best participated on the periphery of the project).

Consider another kind of case, where authorship is given to a well-known scientist with a lot of credibility in his field, but who didn’t make a significant intellectual contribution to the work (at least, not one that rises to the level of meriting authorship under the recognized standards). This is the kind of courtesy authorship that was extended to Gerald Schatten in a 2005 paper in Science whose authors also included Hwang Woo Suk. This paper had 25 authors listed, with Schatten identified as the senior author. Ultimately, the paper was revealed to be fraudulent, at which point Schatten claimed mostly to have participated in writing the paper in good English — a contribution recognized as less than what one would expect from an author (especially the senior author).

Here, including Schatten as an author seemed calculated to give the appearance (to the journal editors considering the manuscript, and to the larger scientific community consuming the published work) that the work was more important and/or credible, because of the big name associated with it. But this would only work because listing that big name in the author line amounts to claiming the big name was actually involved in the work. When the paper fell apart, Schatten swiftly disavowed responsibility — but such a disavowal was only necessary because of what was communicated by the author line, and I think it’s naïve to imagine that this “ambiguity” or “miscommunication” was accidental.

In cases like this, I think it’s fair to say courtesy authorship does harm, undermining the baseline of trust in the scientific community. It’s hard to engage in efficient knowledge-building with people you think are trying to put one over on you.

The cases where DrugMonkey suggests courtesy authorship might be innocuous strike me as interestingly different. They are cases where someone has actually made a real contribution of some sort to the work, but where that contribution may be judged (under whatever you take to be the accepted standards of your scientific discipline) as not quite rising to the level of authorship. Here, courtesy authorship could be viewed as inflating the value of the actual contribution (by listing the person who made it in the author line, rather than the acknowledgements), or alternatively as challenging where the accepted standards of your discipline draw the line between a contribution that qualifies you as an author and one that does not. For example, DrugMonkey writes:

First, the exclusion of those who “merely” collect data is stupid to me. I’m not going to go into the chapter and verse but in my lab, anyway, there is a LOT of ongoing trouble shooting and refining of the methods in any study. It is very rare that I would have a paper’s worth of data generated by my techs or trainees and that they would have zero intellectual contribution. Given this, the asymmetry in the BMJ position is unfair. In essence it permits a lab head to be an author using data which s/he did not collect and maybe could not collect but excludes the technician who didn’t happen to contribute to the drafting of the manuscript. That doesn’t make sense to me. The paper wouldn’t have happened without both of the contributions.

I agree with DrugMonkey that there’s often a serious intellectual contribution involved in conducting the experiments, not just in designing them (and that without the data, all we have are interesting hunches, not actual scientific knowledge, to report). Existing authorship standards like those from ICMJE or BMJ can unfairly exclude those who do the experimental labor from authorship by failing to recognize this as an intellectual contribution. Pushing to have these real contributions recognized with appropriate career credit is important. As well, being explicit about who made these contributions to the research being reported in the paper makes it much easier for other scientists following up on the published work (e.g., comparing it to their own results in related experiments, or trying to use some of the techniques described in the paper to set up new experiments) to actually get in touch with the people most likely to be able to answer their questions.

Changing how much weight experimental prowess is given in the career scorekeeping may be an uphill battle, especially when the folks distributing the rewards for the top scores are administrators (focused on the money the people they’re scoring can bring to an institution) and PIs (who frequently devote more of their working hours to the conception and design of projects for their underlings than to the intellectual labor of making those projects work, and to writing the proposals that bring in the grant money and the manuscripts that report the happy conclusion of the projects funded by such grants). That doesn’t mean it’s not a fight worth having.

But, I worry that using courtesy authorship as a way around this unfair setting of the authorship bar actually amounts to avoiding the fight rather than addressing these issues and changing accepted practices.

DrugMonkey also writes:

Assuming that we are not talking about pushing someone else meaningfully* out of deserved credit, where lies the harm even if it is a total gift?

Who is hurt? How are they damaged?
__
*by pushing them off the paper entirely or out of first-author or last-author position. Adding a 7th in the middle of the authorship list doesn’t affect jack squat folks.

Here, I wonder: if dropping in a courtesy author as the seventh author of a paper can’t hurt, how can we expect it to help the person to whom this “courtesy” is extended?

Is it the case that no one actually expects that the seventh author made anything like a significant contribution, so no one is being misled in judging the guest in the number seven slot as having made a comparable contribution to the scientist who earned her seventh-author position in another paper? If listing your seventh-author paper on your CV is automatically viewed as not contributing any points in your career scorekeeping, why even list it? And why doesn’t it count for anything? Is it because the seventh author never makes a contribution worth career points … or is it because, for all we know, the seventh author may be a courtesy author, there for other reasons entirely?

If a seventh-author paper is actually meaningless for career credit, wouldn’t it be more help to the person to whom you might extend such a “courtesy” if you actually engaged her in the project in such a way that she could make an intellectual contribution recognized as worthy of career credit?

In other words, maybe the real problem with such courtesy authorship is that it gives the appearance of help without actually being helpful.

(Cross-posted at Doing Good Science)

Harvard Psych department may have a job opening.

… because Marc Hauser has resigned his faculty position, effective August 1.

You may recall, from our earlier discussions of Hauser (here, here, here, and here), that some of his papers were retracted because they drew conclusions that weren’t supported by the data … and then it emerged that maybe the data didn’t support the conclusions on account of scientific misconduct (rather than honest mistakes). Harvard mounted an inquiry. Hauser took a leave of absence from his position while the inquiry was ongoing. Harvard found Hauser “solely responsible, after a thorough investigation by a faculty member investigating committee, for eight instances of scientific misconduct under FAS standards.” In February, Hauser’s colleagues in the Psychology Department voted against allowing him to return to the classroom in the Fall. Meanwhile, since Hauser’s research was supported by grants from federal funding agencies, the Office of Research Integrity is thought to be in the midst of its own investigation of Hauser’s scientific conduct.

So perhaps Hauser’s resignation was to be expected (although it’s not too hard to come up with examples of faculty who were at least very close to scientific fraudsters — close enough to be enabling the fraud — who are still happily ensconced in their Ivy League institutions).

From Carolyn Y. Johnson at the Boston Globe:

“While on leave over the past year, I have begun doing some extremely interesting and rewarding work focusing on the educational needs of at-risk teenagers. I have also been offered some exciting opportunities in the private sector,” Hauser wrote in a resignation letter to the dean, dated July 7. “While I may return to teaching and research in the years to come, I look forward to focusing my energies in the coming year on these new and interesting challenges.”

Hauser did not respond to e-mail or voicemail messages today.

His resignation brings some resolution to the turmoil on campus, but it still leaves the scientific community trying to sort out what findings, within his large body of work, they should trust. Three published papers led by Hauser were thrown into question by the investigation — one was retracted and two were corrected. Problems were also found in five additional studies that were either not published or corrected prior to publication.

“What it does do is it provides some sort of closure for people at Harvard. … They were in a state of limbo,” said Gerry Altmann, editor of the journal Cognition, who, based on information provided to him by Harvard last year, said the only plausible conclusion he could draw was that some of the data had been fabricated in a study published in his journal in 2002 and retracted last year. “There’s just been this cloud hanging over the department. … It has no real impact on the field more broadly.”

Maybe it’s just me, but there seems to be a mixed message in those last two paragraphs. Either this is the story of one bad apple who indulged in fabrication and brought shame to his university, or this is the story of a trusted member of the scientific community who contributed many, many articles to the literature in his field and now turns out not to be so trustworthy. If it’s the latter, then we’re talking about potential impacts that are much bigger than Harvard’s reputation. We’re talking about a body of scientific literature that suddenly looks less solid — a body of scientific literature that other researchers had trusted, used as the basis for new studies of their own, perhaps even taken as the knowledge base with which other new findings would need to be reconciled to be credible.

And, it’s not like there’s no one suggesting that Marc Hauser is a good guy who has made important (and presumably trustworthy) contributions to science. For example:

“I’m deeply saddened by the whole events of the last year,” Steven Pinker, a psychology professor at Harvard, said today. “Marc is a scientist of enormous creativity, energy, and talent.”

Meanwhile, if the data from the Harvard investigation best supports the conclusion that Hauser’s recent work was marred by scientific misconduct characterized by “problems involving data acquisition, data analysis, data retention, and the reporting of research methodologies and results,” this seems to count against Hauser’s credibility (and his judgment). And, although we might make the case that teaching involves a different set of competencies than research, his colleagues may have decided that the hit to his credibility as a knowledge-builder would also do damage to his credibility as a teacher. The Boston Globe article notes:

Another researcher in the field, Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology, said today that Hauser’s departure was not unexpected. “Once they didn’t let him teach — and there are severe restrictions in his ability to do research — you come to office and what do you do all day?” he said. “People in the field, we’re just wondering — this doesn’t change anything. We’re still where we were before about the [other] studies.”

What could Hauser do at work all day if not teach and conduct research? Some might suggest a full slate of committee work.

Others would view that as cruel and unusual punishment, even for the perpetrator of scientific misconduct.

Blogospheric navel-gazing: where’s the chemistry communication?

The launch last week of the new Scientific American Blog Network* prompted a round of blogospheric soul searching (for example here, here, and here): Within the ecosystem of networked science blogs, where are all the chem-bloggers?

Those linked discussions do a better job with the question and its ramifications than I could, so as they say, click through and read them. But the fact that these discussions are so recent is an interesting coincidence in light of the document I want to consider in this post.

I greeted with interest the release of a recent publication from the National Academy of Sciences titled Chemistry in Primetime and Online: Communicating Chemistry in Informal Environments (PDF available for free here). The document aims to present a summary of a one-and-a-half day workshop, organized by the Chemical Sciences Roundtable and held in May 2010.

Of course, I flipped right to the section that took up the issue of blogs.

The speaker invited to the workshop to talk about chemistry on blogs was Joy Moore, representing Seed Media Group.

She actually started by exploring how much chemistry coverage there was in Seed magazine and professed surprise that there wasn’t much:

When she talked to one of her editors about why, what he told her was very similar to what others had mentioned previously in the workshop. He said, “part of the reason behind the apparent dearth of chemistry content is that chemistry is so easily subsumed by other fields and bigger questions, so it is about the ‘why’ rather than the ‘how.’” For example, using chemistry to create a new clinical drug is often not reported or treated as a story about chemistry. Instead, it will be a story about health and medicine. Elucidating the processes by which carbon compounds form in interstellar space is typically not treated as a chemistry story either; it will be an astronomy-space story.

The Seed editor said that in his experience most pure research in chemistry is not very easy to cover or talk about in a compelling and interesting way for general audiences, for several reasons: the very long and easily confused names of many organic molecules and compounds, the frequent necessity for use of arcane and very specific nomenclature, and the tendency for most potential applications to boil down to an incremental increase in quality of a particular consumer product. Thus, from a science journalist point of view, chemistry is a real challenge to cover, but he said, “That doesn’t mean that there aren’t a lot of opportunities.” (24)

A bit grumpily, I will note that this editor’s impression of chemistry and what it contains is quite a distance from my own. Perhaps it’s because I was a physical chemist rather than an organic chemist (so I mostly dodged the organic nomenclature issue in my own research), and because the research I did had no clear applications to any consumer products (and many of my friends in chemistry were in the same boat), and because the lot of us learned how to explain what was interesting and important and cool to each other (and to our friends who weren’t in chemistry, or even in school) without jargon. It can be done. It’s part of this communication strategy called “knowing your audience and meeting them where they are.”

Anyway, after explaining why Seed didn’t have much chemistry, Moore shifted her focus to ScienceBlogs and its chemistry coverage. Here again, the pickings seemed slim:

Moore said there is no specific channel in ScienceBlogs dedicated to chemistry, but there are a number of bloggers who use chemistry in their work.

Two chemistry-related blogs were highlighted by Moore. The first one, called Speakeasy Science, is by a new blogger Deborah Bloom [sic]. Bloom is not a scientist, but chemistry informs her writing, especially her new book on the birth of forensic toxicology. Moore also showed a new public health blog from Seed called the Pump Handle. Seed has also focused more on chemistry, in particular environmental toxins. Moore added, “So again, as we go through we can find the chemistry as the supporting characters, but maybe not as the star of the show.” (26)

While I love both The Pump Handle and Speakeasy Science (which has since relocated to PLoS Blogs), Moore didn’t mention a bunch of blogs at ScienceBlogs that could be counted on for chemistry content in a starring role. These included Molecule of the Day, Terra Sigillata (which has since moved to CENtral Science), and surely the pharmacology posts on Neurotopia. That’s just three off the top of my head. Indeed, even my not-really-a-chemistry-blog had a “Chemistry” category populated with posts that really focused on chemistry.

And, of course, I shouldn’t have to point out that ScienceBlogs is not now, and was not then, the entirety of the science blogosphere. There have always been seriously awesome chem-bloggers writing entertaining, accessible stuff outside the bounds of the Borg.

Ignoring their work (and their readership) is more than a little lazy. (Maybe a search engine would help?)

Anyway, Moore also told the workshop about Research Blogging:

Moore said that Research Blogging is a tagging and aggregating tool for bloggers who write about journal articles. Bloggers who occasionally discuss journal articles on their blog sites can join the Seed Research Blogging community. Seed provides the blogger with some code to put into blog posts that allows Seed to pick up those blog posts and aggregate them. Seed then offers the blogger on its website Researchvolume.org [sic]. This allows people to search across the blog posts within these blogs. Moore said that bloggers can also syndicate comments through the various Seed feeds, widgets, and other websites. It basically brings together blog posts about peer-reviewed research. At the same time, Seed gives a direct link back to the journal article, so that people can read the original source.

“Who are these bloggers?” Moore asked. She said the blog posts take many different forms. Sometimes someone is simply pointing out an interesting article or picking a topic and citing two or three articles to preface it. Other bloggers almost do a mini-review. These are much more in-depth analyses or criticisms of papers. (26)

Moore also noted some research on the chemistry posts aggregated by ResearchBlogging that found:

the blog coverage of the chemistry literature was more efficient than the traditional citation process. The science blogs were found to be faster in terms of reporting on important articles, and they also did a better job of putting the material in context within different areas of chemistry. (26)

The issues raised by the other workshop participants here were the predictable ones.

One, from John Miller of the Department of Energy, was whether online venues like ResearchBlogging might replace traditional peer review for journal articles. Joy Moore said she saw it as a possibility. Of course, this might rather undercut the idea that what is being aggregated is blog posts on peer reviewed research — the peer review that happens before publication, I take it, is enhanced, not replaced, by the post-publication “peer review” happening in these online discussions of the research.

Another comment, from Bill Carroll, had to do with the perceived tone of the blogosphere:

“One of the things I find discouraging about reading many blogs or various comments is that it very quickly goes from one point of view to another point of view to ‘you are a jerk.’ My question is, How do you keep [the blog] generating light and not heat.” (26)

Moore’s answer allowed as how some blog readers are interested in being entertained by fisticuffs.

Here again, it strikes me that there’s a danger in drawing sweeping conclusions from too few data points. There exist science blogs that don’t get shouty and personal in the posts or the comment threads. Many of these are really good reads with engaging discussions happening between bloggers and readers.

Sometimes too, the heat (or at least, some kind of passion) may be part of how a blogger conveys to readers what about chemistry is interesting, or puzzling, or important in contexts beyond the laboratory or the journal pages. Chemistry is cool enough or significant enough that it can get us riled up. I doubt that insisting on Vulcan-grade detachment is a great way to convince readers who aren’t already sold on the importance of chemistry that they ought to care about it.

And, can we please get past this suggestion that the blogosphere is the source of incivility in exchanges about science?

I suspect that people who blame the medium (of blogs) for the tone (of some blogs or of the exchanges in their comments) haven’t been to a department seminar or a group meeting lately. Those face-to-face exchanges can get not only contentious but also shouty and personal. (True story: When I was a chemistry graduate student shopping for a research group, I was a guest at a group meeting where the PI, who knew I was there to see how I liked the research group, spent a full five minutes tearing one of his senior grad students a new one. And then, he was disappointed that I did not join the research group.)

Now, maybe the worry is that blogs about chemistry might give the larger public a peek at chemists being contentious and personal and shouty, something that otherwise would be safely hidden from view behind the walls of university lecture halls and laboratory spaces. If that’s the worry, one possible response is that chemists playing in the blogosphere should maybe pay attention to the broader reach the internet affords them and behave themselves in the way they want the public to see them behaving.

If, instead, the worry is that chemists ought not ever to behave in certain ways toward each other (e.g., attacking the person rather than the methods or the results or the conclusions), then there’s plenty of call for peer pressure within the chemistry community to head off these behaviors before we even start talking about blogs.

There are a few things that complicate discussions like this about the nature of communication about chemistry on blogs. One is that the people taking up the issue are sometimes unclear about what kind of communication it is they’re interested in — for example, chemist to non-chemist or chemist-to-chemist. Another is that they sometimes have very different ideas about what kinds of chemical issues ought to be communicated (basic concepts, cutting edge research, issues to do with chemical education or chemical workplaces, chemistry in everyday products or in highly charged political debates, etc., etc.). And, as mentioned already, the chemistry blogosphere, like chemistry as a discipline, contains multitudes. There is so much going on, in so many sub-specialities, that it’s hard to draw too many useful generalizations.

For the reader, this diversity of chemistry blogging is a good thing, not a bad thing — at least if the reader is brave enough to venture beyond networks which don’t always have lots of blogs devoted to chemistry. Some good places to look:

Blogs about chemistry indexed by ScienceSeeker

CENtral Science (which is a blog network, but one devoted to chemistry by design)

Many excellent chemistry blogs are linked in this post at ScienceGeist. Indeed, ScienceGeist is an excellent chemistry blog.

Have you been reading the Scientopia Guest Blog lately? If so, you’ve had a chance to read Dr. Rubidium’s engaging discussions of chemistry that pops up in the context of sex and drugs. I’m sure rock ‘n’ roll is on deck.

Finally, David Kroll’s blogroll has more fine chemistry-related blogs than you can shake a graduated cylinder at.

If there are other blogospheric communicators of chemistry you’d like to single out, please tell us about them in the comments.
______
*Yes, I have a new blog there, but this blog isn’t going anywhere.

Limits of ethical recycling.

In the “Ethics in Science” course I regularly teach, we spend some time discussing case studies to explore some of the situations students may encounter in their scientific training or careers where they will want to be able to make good ethical decisions.

A couple of these cases touch on the question of “recycling” pieces of old grant proposals or journal articles — say, the background and literature review.

There seem to be cases where the right thing to do is pretty straightforward. For example, helping yourself to the background section someone else had written for her own grant proposal would be wrong. This would amount to misappropriating someone else’s words and ideas without her permission and without giving her credit. (Plagiarism anyone?) Plus, it would be weaseling out of one’s own duty to actually read the relevant literature, develop a view about what it’s saying, and communicate clearly why it matters in motivating the research being proposed.

Similarly, reusing one’s own background section seems pretty clearly within the bounds of ethical behavior. You did the intellectual labor yourself, and especially in the case where you are revising and resubmitting your own proposal, there’s no compelling reason for you to reinvent that particular wheel (unless, of course, reviewer comments indicate that the background section requires serious revision, the literature cited ought to take account of important recent developments that were missing in the first round, etc.).

Between these two extremes, my students happened upon a situation that seemed less clear-cut. How acceptable is it to recycle the background section (or experimental protocol, for that matter) from an old grant proposal you wrote in collaboration with someone else? Does it make a difference whether that old grant proposal was actually funded? Does it matter whether you are “more powerful” or “less powerful” (however you want to cash that out) within the collaboration? Does it require explicit permission from the person with whom you collaborated on the original proposal? Does it require clear citation of the intellectual contribution of the person with whom you collaborated on the original proposal, even if she is not officially a collaborator on the new proposal?

And, in your experience, does this kind of recycling make more sense than just sitting down and writing something new?

A question for the trainees: How involved do you want the boss to get with your results?

This question follows on the heels of my recent discussion of the Bengü Sezen misconduct investigations, plus a conversation via Twitter that I recapped in the last post.

The background issue is that people — even scientists, who are supposed always to be following the evidence wherever it might lead — can run into trouble really scrutinizing the results of someone they trust (however that trust came about). Indeed, in the Sezen case, her graduate advisor at Columbia University, Dalibor Sames, seemed to trust Sezen and her scientific prowess so much that he discounted the results of other graduate students in his lab who could not replicate Sezen’s results (which turned out to have been faked).

Really, it’s the two faces of the PI’s trust here: trusting one trainee so much that her results couldn’t be wrong, and using that trust to ignore the empirical evidence presented by other trainees (who apparently didn’t get the same level of presumptive trust). As it played out, at least three of those other trainees whose evidence Sames chose not to trust left the graduate program before earning their degrees.

The situation suggests to me that PIs would be prudent to establish environments in their research groups where researchers don’t take scrutiny of their results, data, methods, etc., personally — and where the scrutiny is applied to each member’s results, data, methods, etc. (since anyone can make mistakes). But how do things play out when the rubber hits the road?

So, here’s the question I’d like to ask the scientific trainees. (PIs: I’ve posed the complementary question to you in the post that went up right before this one!)

In his or her capacity as PI, your advisor ties his or her scientific credibility (and likely his or her name) to all the results that come out of the research group — whether they are experimental measurements, analyses of measurements, modeling results, or whatever else it is that scientists of your stripe regard as results. Moreover, in his or her capacity as a trainer of new scientists, the boss has something like a responsibility to make sure you know how to generate reliable results — and that you know how to tell them from results that aren’t reliable. What does your PI do to ensure that the results you generate are reliable? Do you feel like it’s enough (both in terms of quality control and in terms of training you well)? Do you feel like it’s too much?

Commenting note: You may feel more comfortable commenting with a pseudonym for this particular discussion, and that’s completely fine with me. However, please pick a unique ‘nym and keep it for the duration of this discussion, so we’re not in the position of trying to sort out which “Anonymous” is which. Also, if you’re a regular commenter who wants to go pseudonymous for this discussion, you’ll probably want to enter something other than your regular email address in the commenting form — otherwise, your Gravatar may give your other identity away!