You’re not rehabilitated if you keep deceiving.

Regular readers will know that I view scientific misconduct as a serious harm to both the body of scientific knowledge and the scientific community involved in building that knowledge. I also hold out hope that at least some of the scientists who commit scientific misconduct can be rehabilitated (and I’ve noted that other members of the scientific community behave in ways that suggest that they, too, believe that rehabilitation is possible).

But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms your actions caused, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to <a href="http://blogs.scientificamerican.com/doing-good-science/2014/06/29/do-permanent-records-of-scientific-misconduct-findings-interfere-with-rehabilitation/">acknowledge</a> those harms, not minimize them or make excuses for the actions that caused them.

And, you need to stop behaving in the ways that caused the harms in the first place.

Among other things, this means that if you did significant harm to your scientific community, and to the students you were supposed to be training, by making up “results” rather than actually doing experiments and reporting accurate results, you need to recognize that you have acted deceptively. To stop doing harm, you need to stop acting deceptively. Indeed, you may need to be significantly more transparent and forthcoming with details than others who have not transgressed as you have. Owing to your past bad acts, you may just have to meet a higher burden of proof going forward.

That you have retracted the publications in which you deceived, or lost a degree for which (it is strongly suspected) you deceived, or lost your university post, or served your hours of court-ordered community service does not reset you to the normal baseline of presumptive trust. “Paying your debt to society” does not in itself obligate anyone to believe that you are now trustworthy. If you break trust, you need to earn it back, not to demand it because you did your time.

You certainly can’t earn that trust back by engaging in deception to mount an argument that people should give you a break because you’ve served out your sentence.

These thoughts on how not to approach your own rehabilitation are prompted by the appearance of disgraced social scientist Diederik Stapel (discussed here, here, here, here, here, and here) in the comments at Retraction Watch on a post about Diederik Stapel and his short-lived gig as an adjunct instructor for a college course. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel.

However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. They also suggest that the problem is not so much with Stapel as with the scientific enterprise. While there may be systemic features of science as currently practiced that make cheating a greater temptation than it would otherwise be, suggesting that those features made Stapel commit fraud does not convey an understanding of Stapel’s individual responsibility to navigate those temptations. Putting those assertions and excuses in someone else’s mouth makes them look less self-serving than they actually are.

Hilariously, “Paul” also urges the Retraction Watch commenters expressing doubts about Stapel’s rehabilitation and moral character to contact Stapel using their real names, first here:

I guess that if people want to write Stapel a message, they can send him a personal email, using their real name. Not “Paul” or “JatdS” or “QAQ” or “nothingifnotcritical” or “KK” or “youknowbestofall” or “whatistheworldcoming to” or “givepeaceachance”.

then here:

if you want to talk to puppeteer, as a real person, using your real name, I recommend you write Stapel a personal email message. Not zwg or neuroskeptic or what arewehiding for.

Meanwhile, behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. ! have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

I will allow as how Diederik Stapel may have some valuable lessons to teach, though. One of these is how not to make a convincing case that you’ve reformed.

Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subjects research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done to Henrietta Lacks and to the family members who survived her by scientists working with HeLa cells, the National Cancer Institute- and Gates Foundation-funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (rather than being obviously marked as monsters or rogues). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subjects research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, and never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.

Adjudicating “misbehavior”: how can scientists respond when they don’t get fair credit?

As I mentioned in an earlier post, I recently gave a talk at UC Berkeley’s Science Leadership and Management (SLAM) seminar series. After the talk (titled “The grad student, the science fair, the reporter, and the lionfish: a case study of competition, credit, and communication of science to the public”), there was a discussion that I hope was at least as much fun for the audience as it was for me.

One of the questions that came up had to do with what recourse members of the scientific community have when other scientists are engaged in behavior that is problematic but that falls short of scientific misconduct.

If a scientist engages in fabrication, falsification, or plagiarism — and if you can prove that they have done so — you can at least plausibly get help from your institution, or the funder, or the federal government, in putting a stop to the bad behavior, repairing some of the damage, and making sure the wrongdoer is punished. But misconduct is a huge line to cross, so harmful to the collective project of scientific knowledge-building that, scientists hope, most scientists would never engage in it, no matter how dire the circumstances.

Other behavior that is ethically problematic in the conduct of science, however, is a lot more common. Disputes over appropriate credit for scientific contributions (which is something that came up in my talk) are sufficiently common that most people who have been in science for a while have first-hand stories they can tell you.

Denying someone fair credit for the contribution they made to a piece of research is not a good thing. But who can you turn to if someone does it to you? Can the Office of Research Integrity go after the coauthor who didn’t fully acknowledge your contribution to your joint paper (and in the process knocked you from second author to third), or will you have to suck it up?

At the heart of the question is the problem of working out what mechanisms are currently available to address this kind of problem.

Is it possible to stretch the official government definition of plagiarism — “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit” — to cover the situation where you’re being given credit but not enough?

When scientists work out who did enough to be an author on a scientific paper reporting a research finding — and how the magnitude of the various contributions should be reflected in the ordering of names in the author line — is there a clear, objective, correct answer? Are there widely accepted standards that scientists are using to assign appropriate credit? Or, do the standards vary locally, situationally? Is the lack of a clear set of shared standards the kind of thing that creates ambiguities that scientists are prepared to use to their own advantage when they can?

We’ve discussed before the absence of a single standard for authorship embraced uniformly by the Tribe of Science as a whole. Maybe making the case for such a shared standard would help scientists protect themselves from having their contributions minimized — and also help them not unintentionally minimize the contributions of others.

While we’re waiting for a shared standard to gain acceptance, however, there are a number of scientific journals that clearly spell out their own standards for who counts as an author and what kinds of contributions to research and the writing of the paper do or do not rise to the level of receiving authorship credit. If you have submitted your work to a journal with a clear policy of this sort, and if your coauthors have subverted the policy to misrepresent your contribution, you can bring the problem to the journal editors. Indeed, Retraction Watch is brimming with examples of papers that have been retracted on account of problems with who is, or is not, credited with the work that had been published.

While getting redress from a journal editor may be better than nothing, a retraction is the kind of thing that leaves a mark on a scientific reputation — and on the relationships scientists need to be able to coordinate their efforts in the project of scientific knowledge-building. I would argue, however, that not giving the other scientists you work with fair credit for their contributions is also harmful to those relationships, and to the reputations of the scientists who routinely minimize the contributions of others while inflating their own.

So maybe one of the most important things scientists can do right now, given the rules and the enforcement mechanisms that currently exist, the variance in standards and the ambiguities which they create, is to be clear in communicating about contributions and credit from the very beginning of every collaboration. As people are making contributions to the knowledge being built, explicitly identifying those contributions strikes me as a good practice that can help keep other people’s contributions from escaping our notice. Talking about how the different pieces lead to better understanding of what’s going on may also help the collaborators figure out how to make more progress on their research questions by bringing additional contributions to bear.

Of course, it may be easier to spell out what particular contributions each person in the collaboration made than to rank them in terms of which contribution was the biggest or the most important. But maybe this is a good argument for an explicit authorship standard in which authors specify the details of what they contributed and sidestep the harder question of whether experimental design was more or less important than the analysis of the data in this particular collaboration.
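
To make this concrete, a contributions statement under such a standard might read something like: “A.B. conceived the study and designed the experiments; C.D. collected the data; E.F. performed the analysis; all authors contributed to writing the manuscript.” (The names and the division of labor here are hypothetical, offered only to illustrate the kind of specificity such a standard could ask for, rather than any particular journal’s required wording.)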

There’s a funny kind of irony in feeling like you have better tools to combat bad behavior that happens less frequently than you do to combat bad behavior that happens all the time. Disputes about credit may feel minor enough to be tolerable most of the time, differences of opinion that can expose power gradients in scientific communities that like to think of themselves as egalitarian. But especially for the folks on the wrong end of the power gradients, the erosion of recognition for their hard work can hurt. It may even lessen their willingness to collaborate with other scientists, impoverishing the opportunities for cooperation that help the knowledge get built efficiently. Scientists are entitled to expect better of each other. When they do — and when they give voice to those expectations (and to their disappointment when their scientific peers don’t live up to them) — maybe disputes over fair credit will become rare enough that someday most people who have been in science for a while won’t have first-hand stories they can tell you about them.

Are scientists who don’t engage with the public obliged to engage with the press?

In posts of yore, we’ve had occasion to discuss the duties scientists may have to the non-scientists with whom they share a world. One of these is the duty to share the knowledge they’ve built with the public — especially if that knowledge is essential to the public’s ability to navigate pressing problems, or if the public has put up the funds for the research in which that knowledge was built.

Even if you’re inclined to think that what we have here is something that falls short of an obligation, there are surely cases where it would have good effects — not just for the public, but also for scientists — if the public were informed of important scientific findings. After all, if not knowing a key piece of knowledge, or not understanding its implications or how certain or uncertain it is, leads the public to make worse decisions (whether at the ballot box or in their everyday lives), the impacts of those worse decisions could also harm the scientists with whom they are sharing a world.

But here’s the thing: Scientists are generally trained to communicate their knowledge through journal articles and conference presentations, seminars and grant proposals, patent applications and technical documents. Moreover, these tend to be the kind of activities in scientific careers that are rewarded by the folks making the evaluations, distributing grant money, and cutting the paychecks. Very few scientists get explicit training in how to communicate with the public about their scientific findings, or about the processes by which the knowledge is built. Some scientists manage to do a good job of this despite the lack of training, others less so. And many scientists will note that there are hardly enough hours in the day to tackle all the tasks that are recognized and rewarded in their official scientific job descriptions without adding “communicating science to the public” to the stack.

As a result, much of the job of communicating to the public about scientific research and new scientific findings falls to the press.

This raises another question for scientists: If scientists have a duty to make sure the knowledge they build is shared with the public (or at least a strong interest in seeing that it is), and if scientists themselves are not taking on the communicative task of sharing it (whether because they don’t have the time or because they don’t have the skills to do it effectively), do they have an obligation to engage with the press to whom that communicative task has fallen?

Here, of course, we encounter some longstanding distrust between scientists and journalists. Scientists sometimes worry that the journalists taking on the task of making scientific findings intelligible to the public don’t themselves understand the scientific details (or scientific methodology more generally) much better than the public does. Or, they may worry about helping a science journalist who has already decided on the story they are going to tell and who will gleefully ignore or distort facts in the service of telling that story. Or, they may worry that the discovery-of-the-week model of science that journalists frequently embrace distorts the public’s understanding of the ongoing cooperative process by which a body of scientific knowledge is actually built.

To the extent that scientists believe journalists will manage to get things wrong, they may feel like they do less harm to the public’s understanding of science if they do not engage with journalists at all.

While I think this is an understandable impulse, I don’t think it necessarily minimizes the harm.

Indeed, I think it’s useful for scientists to ask themselves: What happens if I don’t engage and journalists try to tell the story anyway, without input from scientists who know this area of scientific work and why it matters?

Of course, I also think it would benefit scientists, journalists, and the public if scientists got more support here, from training in how to work with journalists, to institutional support in their interactions with journalists, to more general recognition that communicating about science with broader audiences is a good thing for scientists (and scientific institutions) to be doing. But in a world where “public outreach” falls much further down the scientist’s list of pressing tasks than bringing in grant money, training new lab staff, and writing up results for submission, science journalists are largely playing the zone where communication of science to the public happens. Scientists who are playing other zones should think about how they can support science journalists in covering their zone effectively.

Doing science is more than building knowledge: on professional development in graduate training.

Earlier this week, I was pleased to be an invited speaker at UC Berkeley’s Science Leadership and Management (SLAM) seminar series. Here’s the official description of the program:

What is SLAM?

Grad school is a great place to gain scientific expertise – but that’s hardly the only thing you’ll need in your future as a PhD. Are you ready to lead a group? Manage your coworkers? Mentor budding scientists? To address the many interpersonal issues that arise in a scientific workplace, grad students from Chemistry, Physics, and MCB founded SLAM: Science Leadership and Management.

This is a seminar series focused on understanding the many interpersonal interactions critical for success in a scientific lab, as well as some practical aspects of lab management.  The target audience for this course is upper-level science graduate students with broad interests and backgrounds, and the skills discussed will be applicable to a variety of career paths. Postdocs are also welcome to attend.

Let me say for the record that I think programs like this are tremendously important, and far too few universities with Ph.D. programs have anything like them. (Stanford has offered something similar, although more explicitly focused on career trajectories in academia, in its Future Faculty Seminar.)

In their standard configuration, graduate programs can do quite a lot to help you learn how to build new knowledge in your discipline. Mostly, you master this ability by spending years working, under the supervision of your graduate advisor, to build new knowledge in your discipline. The details of this apprenticeship vary widely, owing largely to differences in advisors’ approaches: some are very hands-on mentors, others more hands-off, some inclined towards very specific task-lists for the scientific trainees in their labs, others towards letting trainees figure out their own plans of attack or even their own projects. The promise the Ph.D. training holds out, though, is that at the end of the apprenticeship you will have the skills and capacities to go forth and build more knowledge in your field.

The challenge is that most of this knowledge-building will take place in employment contexts that expect the knowledge-builders will have other relevant skills, as well. These may include mounting collaborations, or training others, or teaching, or writing for an audience of non-experts, not to mention working effectively with others (in the lab, on committees, in other contexts) and making good ethical decisions.

To the extent that graduate training focuses solely on learning how to be a knowledge-builder, it often falls down on the job of providing reasonable professional development. This is true even in the realm of teaching, where graduate students usually gain some experience as teaching assistants but they hardly ever get any training in pedagogy.

The graduate students who organize the SLAM program at Berkeley impress me as a smart, vibrant bunch, and they have a supportive faculty advisor. But it’s striking to me that such efforts at serious professional development for grad students are usually spearheaded by grad students, rather than by the grown-up members of their departments training them to be competent knowledge-builders.

One wonders if this is because it just doesn’t occur to the grown-up members of these disciplines that such professional development could be helpful to their trainees — or because graduate programs don’t feel that they owe their graduate students professional development of this sort.

If the latter, that says something about how graduate programs see their relationship with their students, especially in scientific fields. If all you are transmitting to students is how to build new knowledge, rather than attending to other skills they will need to successfully apply their knowledge-building chops in a career after graduate school, it makes it hard not to suspect that the relationship is really one that’s all about providing relatively cheap knowledge-building labor for grad school faculty.

Apprenticeships need not be that exploitative.

Indeed, if graduate programs want to compete for the best grad-school-bound undergraduates, or for prospective students who have done something else in the interval since their undergraduate education, offering serious professional development could help them distinguish themselves from other programs. The trick here is that trainees would need to recognize, as they’re applying to graduate programs, that professional development is something they deserve. Whoever is mentoring them and providing advice on how to choose a graduate program should at least put the issue of professional development on their radar.

If you are someone who fits that description, I hope I have just put professional development on your radar.

Fall semester musing on numbers.

The particular numbers on which I’m focused aren’t cool ones like pi, although I suspect they’re not entirely rational, either.

I teach at a public university in a state whose recent budget crises have been epic. That means that funding for sections of classes (and especially for the faculty who teach those sections of classes) has been tight.

My university is a teaching-focused university, which means that there has also been serious effort to ensure that the education students get at the university gives them a significant level of mastery over their major subject, helps them develop competencies and qualities of mind and skills, and so forth. How precisely to ensure this is an interesting conversation, couched in language about learning objectives and assessments and competing models of learning. But for at least some of the things our students are supposed to learn, the official judgment has been that this will require students to write (and receive meaningful feedback on) a minimum number of words, and for them to do so in classes with a relatively small maximum number of students.

In a class where students are required to write, and receive feedback on, a total of at least 6000 words, it seems absolutely reasonable that you wouldn’t want more than 25 students in the class. Do you want to grade and comment on more than 150,000 words per class section you are teaching? (At my university, it’s usually three or four sections per semester.) That’s a lot of feedback, and for it to be at all useful in assisting student learning, it’s best if you don’t go mad in the process of giving it.

There’s a recognition, then, that on a practical level, for courses that help students learn by way of a lot of writing, smaller class sizes are good. From the student’s point of view as well, there are arguably additional benefits to a smaller class size, whether being able to ask questions during lectures or class discussions, not feeling lost in the crowd, or what have you.

At least for a certain set of courses, the university recognizes that smaller classes are better and requires that the courses be no larger than 25.

But remember that tight funding? This means that the university has also put demands on departments, schools, and colleges within the university to maintain higher and higher student-faculty ratios.

If you make one set of courses small, to maintain the required student-faculty ratio, you must make other courses big — sometimes very, very big.
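
If it helps to see the bean-counting spelled out, here is a minimal sketch in Python with entirely hypothetical numbers (the mandated average, the cap, and the mix of capped and uncapped sections are all made up; every real institution will have its own):

```python
# A toy version of the bean-counting, with made-up numbers.
# If a fraction `small_frac` of a department's sections is capped at
# `small_cap` students, the remaining sections must be large enough
# that the overall average still meets the mandated `target` size.

def required_large_section_size(target: float, small_cap: int, small_frac: float) -> float:
    """Average size the uncapped sections must reach to hit the overall target."""
    large_frac = 1.0 - small_frac
    return (target - small_frac * small_cap) / large_frac

# Half the sections capped at 25, with a mandated average of 35 students/section:
print(required_large_section_size(target=35, small_cap=25, small_frac=0.5))   # 45.0

# Make three quarters of the sections small and the rest balloon:
print(required_large_section_size(target=35, small_cap=25, small_frac=0.75))  # 65.0
```

The exact numbers don’t matter; the point is that every section held to 25 has to be paid for somewhere else in the ledger.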

But while we’re balancing numbers and counting beans, we are still a teaching-focused university. That might mean that what supports effective teaching and learning should be a constraint on our solutions to the bean-counting problems.

We’re taking as a constraint that composition, critical thinking, and chemistry lab (among others) are courses where keeping class sizes small makes for better teaching and learning.

Is there any reason (beyond budgetary expedience) to think that the courses that are made correspondingly large are also making for better teaching and learning? Is there any subject we teach to a section of 200 that we couldn’t teach better to 30? (And here, some sound empirical research would be nice, not just anecdata.)

I can’t help but wonder if there is some other way to count the beans that would better support our teaching-focused mission, and our students.

Some thoughts about the suicide of Yoshiki Sasai.

In the previous post I suggested that it’s a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions of the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying to find a more productive pattern might help scientific communities do better going forward.

But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.

The situation in a nutshell:

Yoshiki Sasai worked with Haruko Obokata at the Riken Center on “stimulus-triggered acquisition of pluripotency” (STAP), a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It’s hard to know how closely they worked together on this; in the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It’s worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata’s supervisor there.

The papers were published in a high impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the findings that had been reported in the papers couldn’t reproduce them. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated, Riken investigated, the papers were retracted, Obokata continued to defend the papers and to deny any wrongdoing.

Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:

Sasai’s colleagues at Riken said he had been receiving mental counseling since the scandal surrounding papers on STAP, or stimulus-triggered acquisition of pluripotency, cells, which was lead-authored by Obokata, came to light earlier this year.

Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”

Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.

So, what is my response to all this?

I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.

I also think it’s good when scientists can recognize the limits of that responsibility. You can give very, very good advice — and explain with great clarity why it’s good advice — but the person you’re giving it to may still choose to do something else. It can’t be your responsibility to control another autonomous person’s actions.

I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.

I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.

Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.

Bringing an important discovery with the potential to enable lots of research that could ultimately help lots of people to one’s scientific peers — and to the public — must feel amazing. Even if there weren’t a harsh judgment from the scientific community for retraction, I imagine that having to say, “We jumped the gun on the ‘discovery’ we told you about” would not feel good.

The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.

Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.

Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.

I feel like investigations of results that don’t hold up, including investigations that turn up misconduct, should grapple with the question of how we can use what we found to fix what went wrong. Instead of just asking, “Whose fault was this?” why not ask, “How can we address the harm? What can we learn that will help us avoid this problem in the future?”

I think it’s a problem when a particular work environment makes the people in it anxious all the time.

I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.

I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.

I feel like those of us who are still here have a responsibility to pay attention.

We have a responsibility to think about the impacts of the ways science is done, valued, celebrated, on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.

We have a responsibility to try to learn something from this.

I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.

I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.

Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.

I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

When focusing on individual responsibility obscures shared responsibility.

Over many years of writing about ethics in the conduct of science, I’ve had occasion to consider many cases of scientific misconduct and misbehavior, instances of honest mistakes and culpable mistakes. Discussions of these cases in the media and among scientists often make them look aberrant, singular, unconnected — the Schön case, the Hauser case, Aetogate, the Sezen-Sames case, the Hwang Woo-suk case, the Stapel case, the Van Parijs case.* They make the world of science look binary, a set of unproblematically ethical practitioners with a handful of evil interlopers who need only be identified and rooted out.

I don’t think this approach is helpful, either in preventing misconduct, misbehavior, and mistakes, or in mounting a sensible response to the people involved in them.

Indeed, despite the fact that scientific knowledge-building is inherently a cooperative activity, the tendency to focus on individual responsibility can manifest itself in assigning individual blame to people who “should have known” that another individual was involved in misconduct or culpable mistakes. It seems that something like this view — whether imposed from without or from within — may have been a factor in the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

While there seems to be widespread suspicion that the lead author of the STAP cell papers, Haruko Obokata, may have engaged in research misconduct of some sort (something Obokata has denied), Sasai was not himself accused of research misconduct. However, in his role as an advisor to Obokata, Sasai was held responsible by Riken’s investigation for not confirming Obokata’s data. Sasai expressed shame over the problems in the retracted papers, and had been hospitalized prior to his suicide in connection with stress over the scandal.

Michael Eisen describes the similarities between this case and his own father’s suicide; his father was a researcher at NIH who was caught up in the investigation of fraud committed by a member of his lab:

[A]s the senior scientists involved, both Sasai and my father bore the brunt of the institutional criticism, and both seem to have been far more disturbed by it than the people who actually committed the fraud.

It is impossible to know why they both responded to situations where they apparently did nothing wrong by killing themselves. But it is hard for me not to place at least part of the blame on the way the scientific community responds to scientific misconduct.

This response, Eisen notes, goes beyond rooting out the errors in the scientific record and extends to rooting out all the people connected to the misconduct event, on the assumption that fraud is caused by easily identifiable — and removable — individuals, something that can be cut out precisely like a tumor, leaving the rest of the scientific community free of the cancer. But Eisen doesn’t believe this model of the problem is accurate, and he notes the damage it can do to people like Sasai and like his own father:

Imagine what it must be like to have devoted your life to science, and then to discover that someone in your midst – someone you have some role in supervising – has committed the ultimate scientific sin. That in and of itself must be disturbing enough. Indeed I remember how upset my father was as he was trying to prove that fraud had taken place. But then imagine what it must feel like to all of a sudden become the focal point for scrutiny – to experience your colleagues and your field casting you aside. It must feel like your whole world is collapsing around you, and not everybody has the mental strength to deal with that.

Of course everyone will point out that Sasai was overreacting – just as they did with my father. Neither was accused of anything. But that is bullshit. We DO act like everyone involved in cases of fraud is responsible. We do this because when fraud happens, we want it to be a singularity. We are all so confident this could never happen to us, that it must be that somebody in a position of power was lax – the environment was flawed. It is there in the institutional response. And it is there in the whispers …

Given the horrible incentive structure we have in science today – Haruko Obokata knew that a splashy result would get a Nature paper and make her famous and secure her career if only she got that one result showing that you could create stem cells by dipping normal cells in acid – it is somewhat of a miracle that more people don’t make up results on a routine basis. It is important that we identify, and come down hard, on people who cheat (although I wish this would include the far greater number of people who overhype their results – something that is ultimately more damaging than the small number of people who out and out commit fraud).

But the next time something like this happens, I am begging you to please be careful about how you respond. Recognize that, while invariably fraud involves a failure not just of honesty but of oversight, most of the people involved are honest, decent scientists, and that witch hunts meant to pretend that this kind of thing could not happen to all of us are not just gross and unseemly – they can, and sadly do, often kill.

As I read him, Eisen is doing at least a few things here. He is suggesting that a desire on the part of scientists for fraud to be a singularity — something that happens “over there” at the hands of someone else who is bad — means that they will draw a circle around the fraud and hold everyone on the inside of that circle (and no one outside of it) accountable. He’s also arguing that the inside/outside boundary inappropriately lumps the falsifiers, fabricators, and plagiarists with those who have committed the lesser sin of not providing sufficient oversight. He is pointing out the irony that those who have erred by not providing sufficient oversight tend to carry more guilt than do those they were working with who have lied outright to their scientific peers. And he is suggesting that needed efforts to correct the scientific record and to protect the scientific community from dishonest researchers can have tragic results for people who are arguably less culpable.

Indeed, if we describe Sasai’s failure as a failure of oversight, it suggests that there is some clear benchmark for sufficient oversight in scientific research collaborations. But it can be very hard to recognize that what seemed like a reasonable level of oversight was insufficient until someone you’re supervising or collaborating with is caught in misbehavior or a mistake. (That amount of oversight might well have been sufficient if the person being supervised had chosen to behave honestly, for example.) There are limits here. Unless you’re shadowing colleagues 24/7, oversight depends on some baseline level of trust, some presumption that one’s colleagues are behaving honestly rather than dishonestly.

Eisen’s framing of the problem, though, is still largely in terms of the individual responsibility of fraudsters (and over-hypers). This prompts arguments in response about individuals bearing responsibility for their actions and their effects (including the effects of public discussion of those actions), and about the individual scientists who are arguably victims of data fabrication and fraud. We are still in the realm of conceiving of fraudsters as “other” rather than recognizing that honest, decent scientists may be only a few bad decisions away from those they cast as monsters.

And we’re still describing the problem in terms of individual circumstances, individual choices, and individual failures.

I think Eisen is actually on the road to pointing out that a focus primarily on the individual level is unhelpful when he points to the problems of the scientific incentive structure. But I think it’s important to explicitly raise the alternate model, that fraud also flows from a collective failure of the scientific community and of the social structures it has built — what is valued, what is rewarded, what is tolerated, what is punished.

Arguably, one of the social structures implicated in scientific fraud is the “first across the finish line, first to publish in a high-impact journal” model of scientific achievement. When being second to a discovery counts for exactly nothing (after lots of time, effort, and other resources have been invested), there is much incentive for haste and corner-cutting, and sometimes even outright fraud. This provides temptations for researchers — and dangers for those providing oversight to ambitious colleagues who may fall prey to such temptations. But while misconduct involves individuals making bad decisions, it happens in the context of a reward structure that exists because of collective choices and behaviors. If the structures that result from those collective choices and behaviors make some kinds of individual choices that are pathological to the shared project (building knowledge) rational for the individual under the circumstances (because they help the individual secure the reward), the community probably has an interest in examining the structures it has built.

Similarly, there are pathological individual choices (like ignoring or covering up someone else’s misconduct) that seem rational if the social structures built by the scientific community don’t enable a clear path forward within the community for scientists who have erred (whether culpably or honestly). Scientists are human. They get attached to their colleagues and tend to believe them to be capable of learning from their mistakes. Also, they notice that blowing the whistle on misconduct can lead to isolation of the whistleblower, not just the people committing the misconduct. Arguably, these are failures of the community and of the social structures it has built.

We might even go a step further and consider whether insisting on talking about scientific behavior (and misbehavior) solely in terms of individual actions and individual responsibility is part of the problem.

Seeing the scientific enterprise and the things that happen in connection with it in terms of heroes and villains and innocent bystanders can seem very natural. Taking this view also makes it look like the most rational choice for scientists is to plot their individual courses within the status quo. The rules, the reward structures, are taken almost as if they were carved in granite. How could one person change them? What would be the point of opting out of publishing in the high impact journals, since it would surely only hurt the individual opting out while leaving the system intact? In a competition for individual prestige and credit for knowledge built, what could be the point of pausing to try to learn something from the culpable mistakes committed by other individuals rather than simply removing those other individuals from the competition?

But individual scientists are not working in isolation against a fixed backdrop. Treating their social structures as if they were a fixed backdrop not only obscures that these structures result from collective choices but also prevents scientists from thinking together about other ways the institutional practice of science could be.

Whether some of the alternative arrangements they could create might be better than the status quo — from the point of view of coordinating scientific efforts, improving scientists’ quality of life, or improving the quality of the body of knowledge scientists are building — is surely an empirical question. But just as surely it is an empirical question worth exploring.

______
* It’s worth noticing that failures of safety are also frequently characterized as singular events, as in the Sheri Sangji/Patrick Harran case. As I’ve discussed at length on this blog, there is no reason to imagine the conditions in Harran’s lab that led to Sangji’s death were unique, and there is plenty of reason for the community of academic researchers to try to cultivate a culture of safety rather than individually hoping their own good luck will hold.

When your cover photo says less about the story and more about who you imagine you’re talking to.

The choice of cover of the most recent issue of Science was not good. This provoked strong reactions and, eventually, an apology from Science‘s editor-in-chief. It’s not the worst apology I’ve seen in recent days, but my reading of it suggests that there’s still a gap between the reactions to the cover and the editorial team’s grasp of those reactions.

So, in the interests of doing what I can to help close that gap, I give you the apology (in block quotes) and my response to it:

From Science Editor-in-Chief Marcia McNutt:

Science has heard from many readers expressing their opinions and concerns with the recent [11 July 2014] cover choice.

The cover showing transgender sex workers in Jarkarta was selected after much discussion by a large group

I suppose the fact that the choice of the cover was discussed by many people for a long time (as opposed to by one person with no discussion) is good. But it’s no guarantee of a good choice, as we’ve seen here. It might be useful to tell readers more about what kind of group was involved in making the decision, and what kind of discussion led to the choice of this cover over the other options that were considered.

and was not intended to offend anyone,

Imagine my relief that you did not intend what happened in response to your choice of cover. And, given how predictable the response to your cover was, imagine my estimation of your competence in the science communication arena dropping several notches. How well do you know your audience? Who exactly do you imagine that audience to be? If you’re really not interested in reaching out to people like me, can I get my AAAS dues refunded, please?

but rather to highlight the fact that there are solutions for the AIDS crisis for this forgotten but at-risk group. A few have indicated to me that the cover did exactly that,

For them. For them the cover highlighted transgender sex workers as a risk group who might get needed help from research. So, there was a segment of your audience for whom your choice succeeded, apparently.

but more have indicated the opposite reaction: that the cover was offensive because they did not have the context of the story prior to viewing it, an important piece of information that was available to those choosing the cover.

Please be careful with your causal claims here. Even with the missing context provided, a number of people still find the cover harmful. This explanation of the harm, in the context of what the scientific community (and the wider world) can be like for a trans* woman, spells it out pretty eloquently.

The problem, in other words, goes deeper than the picture not effectively conveying your intended context. Instead, the cover communicated layers of context about who you imagine as your audience — and about whose reality is not really on your radar.

The people who are using social media to explain the problems they have with this cover are sharing information about who is in your audience, about what our lives in and with science are like. We are pinging you so we will be on your radar. We are trying to help you.

I am truly sorry for any discomfort that this cover may have caused anyone,

Please do not minimize the harm your choice of cover caused by describing it as “discomfort”. Doing so suggests that you still aren’t recognizing how this isn’t an event happening in a vacuum. That’s a bad way to support AAAS members who are women and to broaden the audience for science.

and promise that we will strive to do much better in the future to be sensitive to all groups and not assume that context and intent will speak for themselves.

What’s your action plan going forward? Is there good reason to think that simply trying hard to do better will get the job done? Or are you committed enough to doing better that you’re ready to revisit your editorial processes, the diversity of your editorial team, the diversity of the people beyond that team whose advice and feedback you seek and take seriously?

I’ll repeat: We are trying to help you. We criticize this cover because we expect more from Science and AAAS. This is why people have been laboring, patiently, to spell out the problems.

Please use those patient explanations and formulate a serious plan to do better.

* * * * *
For this post, I’m not accepting comments. There is plenty of information linked here for people to read and digest, and my sense is this is a topic where thinking hard for a while is likely to be more productive than jumping in with questions that the reading, digesting, and hard thinking could themselves serve to answer.

Successful science outreach means connecting with the people you’re trying to reach.

Let’s say you think science is cool, or fun, or important to understand (or to do) in our modern world. Let’s say you want to get others who don’t (yet) see science as cool, or fun, or important, to appreciate how cool, how fun, how important it is.

Doing that, even on a small scale, is outreach.

Maybe just talking about what you find cool, fun, and important will help some others come to see science that way. But it’s also quite possible that some of the people to whom you’re reaching out will not be won over by the same explanations, the same experiences, the same exemplars of scientific achievement that won you over.

If you want your outreach to succeed, it’s not enough to know what got you engaged with science. To engage people-who-are-not-you, you probably need to find out something about them.

Find out what their experiences with science have been like — and what their experiences with scientists (and science teachers) have been like. These experiences shape what they think about science, but also what they think about who science is for.

Find out what they find interesting and what they find off-putting.

Find out what they already know and what they want to know. Don’t assume before doing this that you know where their information is gappy or what they’re really worried about. Don’t assume that filling in gaps in their knowledge is all it will take to make them science fans.

Recognize that your audience may not be as willing as you want them to be to separate their view of science from their view of scientists. A foible of a famous scientist that is no big deal to you may be a huge deal to people you’re trying to reach who have had different experiences. Your baseline level of trust for scientists and the enterprise of scientific knowledge-building may be higher than that of people in your target audience who come from communities that have been hurt by researchers or harmed by scientific claims used to justify their marginalization.

Actually reaching people means taking their experiences seriously. Telling someone how to feel is a bad outreach strategy.

Taking the people you’re trying to reach seriously also means taking seriously their capacity to understand and to make good decisions — even when their decisions are not precisely the decisions you might make. When you feel frustration because of decisions being made out of what looks to you like ignorance, resist the impulse to punch down. Instead, ask where the decisions are coming from and try to understand them before explaining, respectfully, why you’d make a different decision.

If your efforts at outreach don’t seem to be reaching people or groups you are trying hard to reach, seriously consider the possibility that what you’re doing may not be succeeding because it’s not aligned with the wants or needs of those people or groups.

If you’re serious about reaching those people or groups, ask them how your outreach efforts are coming across to them, and take their answers seriously.