You’re not rehabilitated if you keep deceiving.

Regular readers will know that I view scientific misconduct as a serious harm to both the body of scientific knowledge and the scientific community involved in building that knowledge. I also hold out hope that at least some of the scientists who commit scientific misconduct can be rehabilitated (and I’ve noted that other members of the scientific community behave in ways that suggest that they, too, believe that rehabilitation is possible).

But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms that your actions did, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to <a href="http://blogs.scientificamerican.com/doing-good-science/2014/06/29/do-permanent-records-of-scientific-misconduct-findings-interfere-with-rehabilitation/">acknowledge</a> those harms, not minimize them or make excuses for your actions that caused the harms.

And, you need to stop behaving in the ways that caused the harms in the first place.

Among other things, this means that if you did significant harm to your scientific community, and to the students you were supposed to be training, by making up “results” rather than actually doing experiments and reporting accurate results, you need to recognize that you have acted deceptively. To stop doing harm, you need to stop acting deceptively. Indeed, you may need to be significantly more transparent and forthcoming with details than others who have not transgressed as you have. Owing to your past bad acts, you may just have to meet a higher burden of proof going forward.

That you have retracted the publications in which you deceived, or lost a degree for which (it is strongly suspected) you deceived, or lost your university post, or served your hours of court-ordered community service does not reset you to the normal baseline of presumptive trust. “Paying your debt to society” does not in itself obligate anyone to believe that you are now trustworthy. If you break trust, you need to earn it back, not demand it because you did your time.

You certainly can’t earn that trust back by engaging in deception to mount an argument that people should give you a break because you’ve served out your sentence.

These thoughts on how not to approach your own rehabilitation are prompted by the appearance of disgraced social scientist Diederik Stapel (discussed in a number of earlier posts) in the comments at Retraction Watch on a post about Diederik Stapel and his short-lived gig as an adjunct instructor for a college course. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel.

However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. They also suggest that the problem is not so much with Stapel as with the scientific enterprise. While there may be systemic features of science as currently practiced that make cheating a greater temptation than it might otherwise be, suggesting that those features made Stapel commit fraud does not convey an understanding of Stapel’s individual responsibility to navigate those temptations. Putting those assertions and excuses in someone else’s mouth makes them look less self-serving than they actually are.

Hilariously, “Paul” also urges the Retraction Watch commenters expressing doubts about Stapel’s rehabilitation and moral character to contact Stapel using their real names, first here:

I guess that if people want to write Stapel a message, they can send him a personal email, using their real name. Not “Paul” or “JatdS” or “QAQ” or “nothingifnotcritical” or “KK” or “youknowbestofall” or “whatistheworldcoming to” or “givepeaceachance”.

then here:

if you want to talk to puppeteer, as a real person, using your real name, I recommend you write Stapel a personal email message. Not zwg or neuroskeptic or what arewehiding for.

Meanwhile, behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. ! have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

I will allow as how Diederik Stapel may have some valuable lessons to teach, though. One of these is how not to make a convincing case that you’ve reformed.

Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subject research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done to Henrietta Lacks and to the family members that survived her by scientists working with HeLa cells, the National Cancer Institute and Gates Foundation funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (rather than being monsters or rogues obviously set apart from it). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subject research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.

Adjudicating “misbehavior”: how can scientists respond when they don’t get fair credit?

As I mentioned in an earlier post, I recently gave a talk at UC – Berkeley’s Science Leadership and Management (SLAM) seminar series. After the talk (titled “The grad student, the science fair, the reporter, and the lionfish: a case study of competition, credit, and communication of science to the public”), there was a discussion that I hope was at least as much fun for the audience as it was for me.

One of the questions that came up had to do with what recourse members of the scientific community have when other scientists are engaged in behavior that is problematic but that falls short of scientific misconduct.

If a scientist engages in fabrication, falsification, or plagiarism — and if you can prove that they have done so — you can at least plausibly get help from your institution, or the funder, or the federal government, in putting a stop to the bad behavior, repairing some of the damage, and making sure the wrongdoer is punished. But misconduct is a huge line to cross, so harmful to the collective project of scientific knowledge-building that, scientists hope, most scientists would never engage in it, no matter how dire the circumstances.

Other behavior that is ethically problematic in the conduct of science, however, is a lot more common. Disputes over appropriate credit for scientific contributions (which is something that came up in my talk) are sufficiently common that most people who have been in science for a while have first-hand stories they can tell you.

Denying someone fair credit for the contribution they made to a piece of research is not a good thing. But who can you turn to if someone does it to you? Can the Office of Research Integrity go after the coauthor who didn’t fully acknowledge your contribution to your joint paper (and in the process knocked you from second author to third), or will you have to suck it up?

At the heart of the question is the problem of working out what mechanisms are currently available to address this kind of problem.

Is it possible to stretch the official government definition of plagiarism — “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit” — to cover the situation where you’re being given credit but not enough?

When scientists work out who did enough to be an author on a scientific paper reporting a research finding — and how the magnitude of the various contributions should be reflected in the ordering of names in the author line — is there a clear, objective, correct answer? Are there widely accepted standards that scientists are using to assign appropriate credit? Or, do the standards vary locally, situationally? Is the lack of a clear set of shared standards the kind of thing that creates ambiguities that scientists are prepared to use to their own advantage when they can?

We’ve discussed before the absence of a single standard for authorship embraced uniformly by the Tribe of Science as a whole. Maybe making the case for such a shared standard would help scientists protect themselves from having their contributions minimized — and also help them not unintentionally minimize the contributions of others.

While we’re waiting for a shared standard to gain acceptance, however, there are a number of scientific journals that clearly spell out their own standards for who counts as an author and what kinds of contributions to research and the writing of the paper do or do not rise to the level of receiving authorship credit. If you have submitted your work to a journal with a clear policy of this sort, and if your coauthors have subverted the policy to misrepresent your contribution, you can bring the problem to the journal editors. Indeed, Retraction Watch is brimming with examples of papers that have been retracted on account of problems with who is, or is not, credited with the work that had been published.

While getting redress from a journal editor may be better than nothing, a retraction is the kind of thing that leaves a mark on a scientific reputation — and on the relationships scientists need to be able to coordinate their efforts in the project of scientific knowledge-building. I would argue, however, that not giving the other scientists you work with fair credit for their contributions is also harmful to those relationships, and to the reputations of the scientists who routinely minimize the contributions of others while inflating their own.

So maybe one of the most important things scientists can do right now, given the rules and the enforcement mechanisms that currently exist, the variance in standards and the ambiguities which they create, is to be clear in communicating about contributions and credit from the very beginning of every collaboration. As people are making contributions to the knowledge being built, explicitly identifying those contributions strikes me as a good practice that can help keep other people’s contributions from escaping our notice. Talking about how the different pieces lead to better understanding of what’s going on may also help the collaborators figure out how to make more progress on their research questions by bringing additional contributions to bear.

Of course, it may be easier to spell out what particular contributions each person in the collaboration made than to rank them in terms of which contribution was the biggest or the most important. But maybe this is a good argument for an explicit authorship standard in which authors specify the details of what they contributed and sidestep the harder question of whether experimental design was more or less important than the analysis of the data in this particular collaboration.

There’s a funny kind of irony in feeling like you have better tools to combat bad behavior that happens less frequently than you do to combat bad behavior that happens all the time. Disputes about credit may feel minor enough to be tolerable most of the time, differences of opinion that can expose power gradients in scientific communities that like to think of themselves as egalitarian. But especially for the folks on the wrong end of the power gradients, the erosion of recognition for their hard work can hurt. It may even lessen their willingness to collaborate with other scientists, impoverishing the opportunities for cooperation that help the knowledge get built efficiently. Scientists are entitled to expect better of each other. When they do — and when they give voice to those expectations (and to their disappointment when their scientific peers don’t live up to them) — maybe disputes over fair credit will become rare enough that someday most people who have been in science for a while won’t have first-hand stories they can tell you about them.

Communicating with the public, being out as a scientist.

In the previous post, I noted that scientists are not always directly engaged in the project of communicating about their scientific findings (or about the methods they used to produce those findings) to the public.

Part of this is a matter of incentives: most scientists don’t have communicating with the public as an explicit part of their job description, and they are usually better rewarded for paying attention to things that are explicit parts of their job descriptions. Part of it is training: scientists are generally taught a whole lot more about how to conduct research in their field than they are taught about effective strategies for communicating with non-scientists. Part of it is the presence of other professions (like journalists and teachers and museum curators) that are, more or less, playing the communicating-with-the-public-about-science zone. Still another part of it may be temperament: some people say that they went into science because they wanted to do research, not to deal with people. Of course, since doing research requires dealing with other people sooner or later, I’m guessing these folks are terribly bitter that scientific research did not support their preferred lifestyle of total isolation from human contact — or, that they really meant that they didn’t want to deal with people who are non-scientists.

I’d like to suggest, however, that there are very good reasons for scientists to be communicating about science with non-scientists — even if it’s not a job requirement, and there are other people playing that zone, and it doesn’t feel like it comes naturally.

The public has an interest in understanding more than it does about what science knows and how science comes to know it, about which claims are backed by evidence and which are backed by wishful thinking or outright deception. But it’s hard to engage an adult as you would a student; members of the public are frequently just not up for didactic engagement. Dropping a lecture on what you perceive as their ignorance (or their “knowledge deficit,” as the people who study scientific communication and public understanding of science would call it) probably won’t be a welcome form of engagement.

In general, non-scientists neither need nor want to be able to evaluate scientific claims and evidence with the technical rigor with which scientists evaluate them. What they need more is a read on whether the scientists whose job it is to make and evaluate these claims are the kind of people they can trust.

This seems to me like a good reason for scientists to come out as scientists to their communities, their families, their friends.

Whenever there are surveys of how many Americans can name a living scientist, a significant proportion of the people surveyed just can’t name any. But I suspect a bunch of these people know actual, living scientists who walk in their midst — they just don’t know that these folks they know as people are also scientists.

If everyone who is a scientist were to bring that identity to their other human interactions, to let it be a part of what the neighbors, or the kids whose youth soccer team they coach, or the people at the school board meeting, or the people at the gym know about them, what do you think that might do to the public’s picture of who scientists are and what scientists are like? What could letting your scientific identity ride along with the rest of you do to help your non-scientist fellow travelers get an idea of what scientists do, or of what inspires them to do science? Could being open about your ties to science help people who already have independent reasons to trust you find reasons to be less reflexively distrustful of science and scientists?

These seem to me like empirical questions. Let’s give it a try and see what we find out.

Are scientists who don’t engage with the public obliged to engage with the press?

In posts of yore, we’ve had occasion to discuss the duties scientists may have to the non-scientists with whom they share a world. One of these is the duty to share the knowledge they’ve built with the public — especially if that knowledge is essential to the public’s ability to navigate pressing problems, or if the public has put up the funds for the research in which that knowledge was built.

Even if you’re inclined to think that what we have here is something that falls short of an obligation, there are surely cases where it would have good effects — not just for the public, but also for scientists — if the public were informed of important scientific findings. After all, if not knowing a key piece of knowledge, or not understanding its implications or how certain or uncertain it is, leads the public to make worse decisions (whether at the ballot box or in their everyday lives), the impacts of those worse decisions could also harm the scientists with whom they are sharing a world.

But here’s the thing: Scientists are generally trained to communicate their knowledge through journal articles and conference presentations, seminars and grant proposals, patent applications and technical documents. Moreover, these tend to be the kind of activities in scientific careers that are rewarded by the folks making the evaluations, distributing grant money, and cutting the paychecks. Very few scientists get explicit training in how to communicate about their scientific findings, or about the processes by which the knowledge is built, with the public. Some scientists manage to be able to do a good job of this despite a lack of training, others less so. And many scientists will note that there are hardly enough hours in the day to tackle all the tasks that are recognized and rewarded in their official scientific job descriptions without adding “communicating science to the public” to the stack.

As a result, much of the job of communicating to the public about scientific research and new scientific findings falls to the press.

This raises another question for scientists: If scientists have a duty to make sure the knowledge they build is shared with the public (or at least a strong interest in its being shared), and if scientists themselves are not taking on the communicative task of sharing it (whether because they don’t have the time or they don’t have the skills to do it effectively), do scientists have an obligation to engage with the press to whom that communicative task has fallen?

Here, of course, we encounter some longstanding distrust between scientists and journalists. Scientists sometimes worry that the journalists taking on the task of making scientific findings intelligible to the public don’t themselves understand the scientific details (or scientific methodology more generally) much better than the public does. Or, they may worry about helping a science journalist who has already decided on the story they are going to tell and who will gleefully ignore or distort facts in the service of telling that story. Or, they may worry that the discovery-of-the-week model of science that journalists frequently embrace distorts the public’s understanding of the ongoing cooperative process by which a body of scientific knowledge is actually built.

To the extent that scientists believe journalists will manage to get things wrong, they may feel like they do less harm to the public’s understanding of science if they do not engage with journalists at all.

While I think this is an understandable impulse, I don’t think it necessarily minimizes the harm.

Indeed, I think it’s useful for scientists to ask themselves: What happens if I don’t engage and journalists try to tell the story anyway, without input from scientists who know this area of scientific work and why it matters?

Of course, I also think it would benefit scientists, journalists, and the public if scientists got more support here, from training in how to work with journalists, to institutional support in their interactions with journalists, to more general recognition that communicating about science with broader audiences is a good thing for scientists (and scientific institutions) to be doing. But in a world where “public outreach” falls much further down on the scientist’s list of pressing tasks than does bringing in grant money, training new lab staff, and writing up results for submission, science journalists are largely playing the zone where communication of science to the public happens. Scientists who are playing other zones should think about how they can support science journalists in covering their zone effectively.

Doing science is more than building knowledge: on professional development in graduate training.

Earlier this week, I was pleased to be an invited speaker at UC – Berkeley’s Science Leadership and Management (SLAM) seminar series. Here’s the official description of the program:

What is SLAM?

Grad school is a great place to gain scientific expertise – but that’s hardly the only thing you’ll need in your future as a PhD. Are you ready to lead a group? Manage your coworkers? Mentor budding scientists? To address the many interpersonal issues that arise in a scientific workplace, grad students from Chemistry, Physics, and MCB founded SLAM: Science Leadership and Management.

This is a seminar series focused on understanding the many interpersonal interactions critical for success in a scientific lab, as well as some practical aspects of lab management.  The target audience for this course is upper-level science graduate students with broad interests and backgrounds, and the skills discussed will be applicable to a variety of career paths. Postdocs are also welcome to attend.

Let me say for the record that I think programs like this are tremendously important, and far too few universities with Ph.D. programs have anything like them. (Stanford has offered something similar, although more explicitly focused on career trajectories in academia, in its Future Faculty Seminar.)

In their standard configuration, graduate programs can do quite a lot to help you learn how to build new knowledge in your discipline. Mostly, you master this ability by spending years working, under the supervision of your graduate advisor, to build new knowledge in your discipline. The details of this apprenticeship vary widely, owing largely to differences in advisors’ approaches: some are very hands-on mentors, others more hands-off, some inclined towards very specific task-lists for the scientific trainees in their labs, others towards letting trainees figure out their own plans of attack or even their own projects. The promise the Ph.D. training holds out, though, is that at the end of the apprenticeship you will have the skills and capacities to go forth and build more knowledge in your field.

The challenge is that most of this knowledge-building will take place in employment contexts that expect the knowledge-builders will have other relevant skills, as well. These may include mounting collaborations, or training others, or teaching, or writing for an audience of non-experts, not to mention working effectively with others (in the lab, on committees, in other contexts) and making good ethical decisions.

To the extent that graduate training focuses solely on learning how to be a knowledge-builder, it often falls down on the job of providing reasonable professional development. This is true even in the realm of teaching, where graduate students usually gain some experience as teaching assistants but they hardly ever get any training in pedagogy.

The graduate students who organize the SLAM program at Berkeley impress me as a smart, vibrant bunch, and they have a supportive faculty advisor. But it’s striking to me that such efforts at serious professional development for grad students are usually spearheaded by grad students, rather than by the grown-up members of their departments training them to be competent knowledge-builders.

One wonders if this is because it just doesn’t occur to the grown-up members of these disciplines that such professional development could be helpful to their trainees — or because graduate programs don’t feel like they owe their graduate students professional development of this sort.

If the latter, that says something about how graduate programs see their relationship with their students, especially in scientific fields. If all a program transmits to its students is how to build new knowledge, without attending to the other skills they will need to successfully apply their knowledge-building chops in a career after graduate school, it is hard not to suspect that the relationship is really one that’s all about providing relatively cheap knowledge-building labor for grad school faculty.

Apprenticeships need not be that exploitative.

Indeed, if graduate programs want to compete for the best grad-school-bound undergraduates, or for prospective students who have done something else in the interval since their undergraduate education, offering serious professional development could help them distinguish themselves from other programs. The trick here is that trainees would need to recognize, as they’re applying to graduate programs, that professional development is something they deserve. Whoever is mentoring them and providing advice on how to choose a graduate program should at least put the issue of professional development on the radar.

If you are someone who fits that description, I hope I have just put professional development on your radar.

Complacent in earthquake country.

A week ago, there was a 6.0 earthquake north of San Francisco. I didn’t feel it, because I was with my family in Santa Barbara that weekend. Even if we had been home, it’s not clear that we would have noticed it; reports are that some folks in San Jose felt some shaking but others slept through it.

Dana Hunter has a great breakdown of what to do if you find yourself in a temblor. Even for those of you nowhere near California, it’s worth a read, since we’re not the only place with fault lines or seismic activity.

But I must confess, I’ve lived in earthquake country for nearly 25 years now, and we don’t have an earthquake preparedness kit.

To be fair, we have many of the recommended items on the list, though not all in one place as an official “kit”. I even know where many of the recommended components are (like the first aid kit, which came with us to the swim league’s championship meet, and the rain gear, which comes out every year that we have a proper rainy season). But we haven’t got the preserved-with-bleach, replaced-every-six-months ration of a gallon of water per person per day. We’re in the middle of a drought right now. If we needed emergency water, how many days would we need it for?

Honestly, though, the thing that really holds me back from preparing for an earthquake is that earthquakes are so darned unpredictable.

My attitude towards earthquake preparedness is surely not helped by the fact that my very first earthquake, when I had been in California scarcely a month, was the October 1989 Loma Prieta quake, clocking in at 6.9 or 7.0, depending on who you ask. I felt that temblor, but had nothing to compare it to. At the time, it was actually almost cool: hey, that must be an earthquake! I didn’t know that it was big, or how much damage it had done, until my housemates got home and turned on the TV.

The earth shakes, but seldom for more than a minute. If after the shaking everything returns to normal, you might even go to the USGS “Did You Feel It?” page to add your data on how it felt in your location. Depending on where you are (a lab full of glassware and chemicals and students, a law library with bookcases lining the walls, a building with lots of windows, a multistory building on filled in land that used to be bay, a bridge), you may get hurt. But you may not.

Maybe you lose power for a day or two, but we survived the regular rolling blackouts when Enron was playing games with the California power grid. (That’s why I know where our flashlights and emergency candles are.) Maybe a water main breaks and you get by on juice boxes, tonic water, and skipping showers until service returns.

Since 1989, people in these parts have been pretty good about seismic retrofits. My impression is that the recession has slowed such retrofits down recently (and generally dealt a blow to keeping up infrastructure like roads and bridges), but it’s still happening. The new span on the Bay Bridge is supposed to have been engineered specifically with significant quakes in mind, although some engineers mutter their doubts.

I’d rather not be on a bridge, or a freeway, or a BART train when the big one hits. But we haven’t really got the kind of lead time it would take to ensure that — the transit trip-planners don’t include quakes the same way they do scheduled maintenance or even just-reported accidents.

There is no earthquake season. There is no earthquake weather. Earthquakes are going to happen when they happen.

So, psychologically, they are really, really hard to prepare for.

Fall semester musing on numbers.

The particular numbers on which I’m focused aren’t cool ones like pi, although I suspect they’re not entirely rational, either.

I teach at a public university in a state whose recent budget crises have been epic. That means that funding for sections of classes (and especially for the faculty who teach those sections of classes) has been tight.

My university is a teaching-focused university, which means that there has also been serious effort to ensure that the education students get at the university gives them a significant level of mastery over their major subject, helps them develop competencies and qualities of mind and skills, and so forth. How precisely to ensure this is an interesting conversation, couched in language about learning objectives and assessments and competing models of learning. But for at least some of the things our students are supposed to learn, the official judgment has been that this will require students to write (and receive meaningful feedback on) a minimum number of words, and for them to do so in classes with a relatively small maximum number of students.

In a class where students are required to write, and receive feedback on, a total of at least 6000 words, it seems absolutely reasonable that you wouldn’t want more than 25 students in the class. Do you want to grade and comment on more than 150,000 words per class section you are teaching? (At my university, it’s usually three or four sections per semester.) That’s a lot of feedback, and for it to be at all useful in assisting student learning, it’s best if you don’t go mad in the process of giving it.
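For anyone who wants to see where that figure comes from, here is a quick back-of-the-envelope sketch of the grading arithmetic, using only the numbers mentioned above (the per-semester total assumes the four-section end of the load; this is an illustration, not an official workload calculation):

```python
# Back-of-the-envelope grading load, using the figures discussed above.
words_per_student = 6000       # minimum words each student writes and gets feedback on
students_per_section = 25      # the cap on writing-intensive sections
sections_per_semester = 4      # a typical load here is three or four sections

words_per_section = words_per_student * students_per_section
words_per_semester = words_per_section * sections_per_semester

print(f"Words to read and comment on per section: {words_per_section:,}")    # 150,000
print(f"Words per semester at a four-section load: {words_per_semester:,}")  # 600,000
```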

There’s a recognition, then, that on a practical level, for courses that help students learn by way of a lot of writing, smaller class sizes are good. From the student’s point of view as well, there are arguably additional benefits to a smaller class size, whether being able to ask questions during lectures or class discussions, not feeling lost in the crowd, or what have you.

At least for a certain set of courses, the university recognizes that smaller classes are better and requires that the courses be no larger than 25.

But remember that tight funding? This means that the university has also put demands on departments, schools, and colleges within the university to maintain higher and higher student-faculty ratios.

If you make one set of courses small, to maintain the required student-faculty ratio, you must make other courses big — sometimes very, very big.
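To see why the big sections get so very big, here is a toy version of the bean-counting. The mandated average section size, the teaching load, and the number of capped sections below are all made-up numbers, not my university's actual figures; only the shape of the arithmetic is the point:

```python
# Hypothetical illustration of how capping some sections inflates the others.
required_avg_section_size = 60   # pretend the mandated student-faculty ratio works out to this
sections_per_load = 4            # pretend per-semester teaching load
capped_sections = 2              # e.g., writing-intensive courses capped at 25 students
cap = 25

total_students_required = required_avg_section_size * sections_per_load      # 240
students_in_capped_sections = capped_sections * cap                          # 50
uncapped_sections = sections_per_load - capped_sections                      # 2
size_of_uncapped_sections = (
    (total_students_required - students_in_capped_sections) / uncapped_sections
)

print(size_of_uncapped_sections)  # 95.0 students in each uncapped section
```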

But while we’re balancing numbers and counting beans, we are still a teaching-focused university. That might mean that what supports effective teaching and learning should be a constraint on our solutions to the bean-counting problems.

We’re taking as a constraint that composition, critical thinking, and chemistry lab (among others) are courses where keeping class sizes small makes for better teaching and learning.

Is there any reason (beyond budgetary expedience) to think that the courses that are made correspondingly large are also making for better teaching and learning? Is there any subject we teach to a section of 200 that we couldn’t teach better to 30? (And here, some sound empirical research would be nice, not just anecdata.)

I can’t help but wonder if there is some other way to count the beans that would better support our teaching-focused mission, and our students.

Some thoughts about the suicide of Yoshiki Sasai.

In the previous post I suggested that it’s a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions of the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying to find a more productive pattern might help scientific communities do better going forward.

But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.

The situation in a nutshell:

Yoshiki Sasai worked with Haruko Obokata at the Riken Center on “stimulus-triggered acquisition of pluripotency”, a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It’s hard to know how closely they worked together on this; on the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It’s worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata’s supervisor there.

The papers were published in a high impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the findings that had been reported in the papers couldn’t reproduce them. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated, Riken investigated, the papers were retracted, Obokata continued to defend the papers and to deny any wrongdoing.

Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:

Sasai’s colleagues at Riken said he had been receiving mental counseling since the scandal surrounding papers on STAP, or stimulus-triggered acquisition of pluripotency, cells, which was lead-authored by Obokata, came to light earlier this year.

Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”

Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.

So, what is my response to all this?

I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.

I also think it’s good when scientists can recognize the limits of that responsibility. You can give very, very good advice — and explain with great clarity why it’s good advice — but the person you’re giving it to may still choose to do something else. It can’t be your responsibility to control another autonomous person’s actions.

I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.

I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.

Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.

Bringing an important discovery with the potential to enable lots of research that could ultimately help lots of people to one’s scientific peers — and to the public — must feel amazing. Even if there weren’t a harsh judgment from the scientific community for retraction, I imagine that having to say, “We jumped the gun on the ‘discovery’ we told you about” would not feel good.

The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.

Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.

Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.

I feel like investigations of results that don’t hold up, including investigations that turn up misconduct, should grapple with the question of how we can use what we found to fix what went wrong. Instead of just asking, “Whose fault was this?” why not ask, “How can we address the harm? What can we learn that will help us avoid this problem in the future?”

I think it’s a problem when a particular work environment makes the people in it anxious all the time.

I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.

I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.

I feel like those of us who are still here have a responsibility to pay attention.

We have a responsibility to think about the impacts of the ways science is done, valued, celebrated, on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.

We have a responsibility to try to learn something from this.

I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.

I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.

Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.

I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

When focusing on individual responsibility obscures shared responsibility.

Over many years of writing about ethics in the conduct of science, I’ve had occasion to consider many cases of scientific misconduct and misbehavior, instances of honest mistakes and culpable mistakes. Discussions of these cases in the media and among scientists often make them look aberrant, singular, unconnected — the Schön case, the Hauser case, Aetogate, the Sezen-Sames case, the Hwang Woo-suk case, the Stapel case, the Van Parijs case.* They make the world of science look binary, a set of unproblematically ethical practitioners with a handful of evil interlopers who need only be identified and rooted out.

I don’t think this approach is helpful, either in preventing misconduct, misbehavior, and mistakes, or in mounting a sensible response to the people involved in them.

Indeed, despite the fact that scientific knowledge-building is inherently a cooperative activity, the tendency to focus on individual responsibility can manifest itself in assignment of individual blame on people who “should have known” that another individual was involved in misconduct or culpable mistakes. It seems that something like this view — whether imposed from without or from within — may have been a factor in the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

While there seems to be widespread suspicion that the lead author of the STAP cell papers, Haruko Obokata, may have engaged in research misconduct of some sort (something Obokata has denied), Sasai was not himself accused of research misconduct. However, in his role as an advisor to Obokata, Sasai was held responsible by Riken’s investigation for not confirming Obokata’s data. Sasai expressed shame over the problems in the retracted papers, and had been hospitalized prior to his suicide in connection with stress over the scandal.

Michael Eisen describes the similarities here to his own father’s suicide as a researcher at NIH caught up in the investigation of fraud committed by a member of his lab:

[A]s the senior scientists involved, both Sasai and my father bore the brunt of the institutional criticism, and both seem to have been far more disturbed by it than the people who actually committed the fraud.

It is impossible to know why they both responded to situations where they apparently did nothing wrong by killing themselves. But it is hard for me not to place at least part of the blame on the way the scientific community responds to scientific misconduct.

This response, Eisen notes, goes beyond rooting out the errors in the scientific record and extends to rooting out all the people connected to the misconduct event, on the assumption that fraud is caused by easily identifiable — and removable — individuals, something that can be cut out precisely like a tumor, leaving the rest of the scientific community free of the cancer. But Eisen doesn’t believe this model of the problem is accurate, and he notes the damage it can do to people like Sasai and like his own father:

Imagine what it must be like to have devoted your life to science, and then to discover that someone in your midst – someone you have some role in supervising – has committed the ultimate scientific sin. That in and of itself must be disturbing enough. Indeed I remember how upset my father was as he was trying to prove that fraud had taken place. But then imagine what it must feel like to all of a sudden become the focal point for scrutiny – to experience your colleagues and your field casting you aside. It must feel like your whole world is collapsing around you, and not everybody has the mental strength to deal with that.

Of course everyone will point out that Sasai was overreacting – just as they did with my father. Neither was accused of anything. But that is bullshit. We DO act like everyone involved in cases of fraud is responsible. We do this because when fraud happens, we want it to be a singularity. We are all so confident this could never happen to us, that it must be that somebody in a position of power was lax – the environment was flawed. It is there in the institutional response. And it is there in the whispers …

Given the horrible incentive structure we have in science today – Haruko Obokata knew that a splashy result would get a Nature paper and make her famous and secure her career if only she got that one result showing that you could create stem cells by dipping normal cells in acid – it is somewhat of a miracle that more people don’t make up results on a routine basis. It is important that we identify, and come down hard, on people who cheat (although I wish this would include the far greater number of people who overhype their results – something that is ultimately more damaging than the small number of people who out and out commit fraud).

But the next time something like this happens, I am begging you to please be careful about how you respond. Recognize that, while invariably fraud involves a failure not just of honesty but of oversight, most of the people involved are honest, decent scientists, and that witch hunts meant to pretend that this kind of thing could not happen to all of us are not just gross and unseemly – they can, and sadly do, often kill.

As I read him, Eisen is doing at least a few things here. He is suggesting that a desire on the part of scientists for fraud to be a singularity — something that happens “over there” at the hands of someone else who is bad — means that they will draw a circle around the fraud and hold everyone on the inside of that circle (and no one outside of it) accountable. He’s also arguing that the inside/outside boundary inappropriately lumps the falsifiers, fabricators, and plagiarists with those who have committed the lesser sin of not providing sufficient oversight. He is pointing out the irony that those who have erred by not providing sufficient oversight tend to carry more guilt than do those they were working with who have lied outright to their scientific peers. And he is suggesting that needed efforts to correct the scientific record and to protect the scientific community from dishonest researchers can have tragic results for people who are arguably less culpable.

Indeed, if we describe Sasai’s failure as a failure of oversight, it suggests that there is some clear benchmark for sufficient oversight in scientific research collaborations. But it can be very hard to recognize that what seemed like a reasonable level of oversight was insufficient until someone who you’re supervising or with whom you’re collaborating is caught in misbehavior or a mistake. (That amount of oversight might well have been sufficient if the person one was supervising chose to behave honestly, for example.) There are limits here. Unless you’re shadowing colleagues 24/7, oversight depends on some baseline level of trust, some presumption that one’s colleagues are behaving honestly rather than dishonestly.

Eisen’s framing of the problem, though, is still largely in terms of the individual responsibility of fraudsters (and over-hypers). This prompts arguments in response about individuals bearing responsibility for their actions and their effects (including the effects of public discussion of those actions) and about the individual scientists who are arguably victims of data fabrication and fraud. We are still in the realm of conceiving of fraudsters as “other” rather than recognizing that honest, decent scientists may be only a few bad decisions away from those they cast as monsters.

And we’re still describing the problem in terms of individual circumstances, individual choices, and individual failures.

I think Eisen is actually on the road to pointing out that a focus primarily on the individual level is unhelpful when he points to the problems of the scientific incentive structure. But I think it’s important to explicitly raise the alternate model, that fraud also flows from a collective failure of the scientific community and of the social structures it has built — what is valued, what is rewarded, what is tolerated, what is punished.

Arguably, one of the social structures implicated in scientific fraud is the “first across the finish line, first to publish in a high impact journal” model of scientific achievement. When being second to a discovery counts for exactly nothing (after lots of time, effort, and other resources have been invested), there is much incentive for haste and corner-cutting, and sometimes even outright fraud. This provides temptations for researchers — and dangers for those providing oversight to ambitious colleagues who may fall prey to such temptations. But while misconduct involves individuals making bad decisions, it happens in the context of a reward structure that exists because of collective choices and behaviors. If the structures that result from those collective choices and behaviors make some kinds of individual choices that are pathological to the shared project (building knowledge) rational choices for the individual to make under the circumstances (because they help the individual secure the reward), the community probably has an interest in examining the structures it has built.

Similarly, there are pathological individual choices (like ignoring or covering up someone else’s misconduct) that seem rational if the social structures built by the scientific community don’t enable a clear path forward within the community for scientists who have erred (whether culpably or honestly). Scientists are human. They get attached to their colleagues and tend to believe them to be capable of learning from their mistakes. Also, they notice that blowing the whistle on misconduct can lead to isolation of the whistleblower, not just the people committing the misconduct. Arguably, these are failures of the community and of the social structures it has built.

We might even go a step further and consider whether insisting on talking about scientific behavior (and misbehavior) solely in terms of individual actions and individual responsibility is part of the problem.

Seeing the scientific enterprise and things that happen in connection with it in terms of heroes and villains and innocent bystanders can seem very natural. Taking this view also makes it look like the most rational choice for scientists is to plot their individual courses within the status quo. The rules, the reward structures, are taken almost as if they were carved in granite. How could one person change them? What would be the point of opting out of publishing in the high impact journals, since it would surely only hurt the individual opting out while leaving the system intact? In a competition for individual prestige and credit for knowledge built, what could be the point of pausing to try to learn something from the culpable mistakes committed by other individuals rather than simply removing those other individuals from the competition?

But individual scientists are not working in isolation against a fixed backdrop. Treating their social structures as if they were a fixed backdrop not only obscures that these structures result from collective choices but also prevents scientists from thinking together about other ways the institutional practice of science could be.

Whether some of the alternative arrangements they could create might be better than the status quo — from the point of view of coordinating scientific efforts, improving scientists’ quality of life, or improving the quality of the body of knowledge scientists are building — is surely an empirical question. But just as surely it is an empirical question worth exploring.

______
* It’s worth noticing that failures of safety are also frequently characterized as singular events, as in the Sheri Sangji/Patrick Harran case. As I’ve discussed at length on this blog, there is no reason to imagine the conditions in Harran’s lab that led to Sangji’s death were unique, and there is plenty of reason for the community of academic researchers to try to cultivate a culture of safety rather than individually hoping their own good luck will hold.