Incoherent ethical claims that give philosophers a bad rap

Every now and then, in the course of a broader discussion, some philosopher will make a claim that is rightly disputed by non-philosophers. Generally, this is no big deal — philosophers have just as much capacity to be wrong as other humans. But sometimes, the philosopher’s claim, delivered with an air of authority, is not only a problem in itself but also manages to convey a wrong impression about the relation between the philosophers and non-philosophers sharing a world.

I’m going to examine the general form of one such ethical claim. If you’re interested in the specific claim, you’re invited to follow the links above. We will not be discussing the specific claim here, nor the larger debate of which it is a part.

Claim: To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences.

Let’s break that down.

“Doing X has significant ethical consequences” suggests a consequentialist view of ethics, in which doing the right thing is a matter of making sure the net good consequences (for everyone affected, whether you describe them in terms of “happiness” or something else) outweigh the net bad consequences.

To say that doing X has significant ethical consequences is then to assert that (at least in the circumstances) doing X will make a significant contribution to the happiness or unhappiness being weighed.

In the original claim, the suggestion is that the contribution of doing X to the balance of good and bad consequences is negative (or perhaps that it is negative in many circumstances), and that on this account it ought to be a “difficult and emotional step”. But does this requirement make sense?

In the circumstances in which doing X shifts the balance of good and bad consequences to a net negative, the consequentialist will say you shouldn’t do X — and this will be true regardless of your emotions. Feeling negative emotions as you are deciding to do X will add more negative consequences, but they are not necessary: a calculation of the consequences of doing X versus not doing X will still rule out doing X as an ethical option even if you have no emotions associated with it at all.

On the other hand, in the circumstances in which doing X shifts the balance of good and bad consequences to a net positive, the consequentialist will say you should do X — again, regardless of your emotions. Here, too, feeling negative emotions as you are deciding to do X will add more negative consequences. If these negative emotions are strong enough, they run the risk of reducing the net positive consequences — which makes the assertion that one should feel negative emotions (pretty clearly implied in the claim that the decision to do X should be difficult) a weird one, since these negative emotions would serve only to reduce the net good consequences of doing something that, in the circumstances, produces net good consequences.

By the way, this also suggests, perhaps perversely, a way that strong emotions could become a problem in circumstances in which doing X would otherwise clearly bring more negative consequences than positive ones: if the person contemplating doing X were to get a lot of happiness from doing X.

Now, maybe the idea is that negative feelings associated with the prospect of doing X are supposed to act as a brake in cases where doing X frequently leads to more bad consequences than good ones. But I think we have to recognize feelings as consequences — as something we need to take into account in the consequentialist calculus with which we evaluate whether doing X here is ethical or not. And that makes the claim that the feelings ought always to be negative, regardless of other features of the situation that make doing X the right thing, puzzling.
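To make the bookkeeping concrete, here is a minimal sketch of the consequentialist calculus described above, in Python. It is purely illustrative: the utility numbers are invented, and the one substantive assumption is the point just made, that the decider’s feelings enter the sum as consequences like any others.

```python
# A minimal sketch of the consequentialist bookkeeping described above.
# All utility numbers are invented; the one substantive assumption is
# that the decider's feelings count as consequences like any others.

def net_consequences(other_consequences, deciders_feelings):
    """Net value of doing X: all other consequences, plus the
    (positive or negative) value of the decider's own feelings."""
    return sum(other_consequences) + deciders_feelings

# Case 1: doing X is net negative on the other consequences alone.
# Mandated bad feelings (-2) just make a bad option worse; the verdict
# "don't do X" never depended on the feelings.
print(net_consequences([-5, +1], deciders_feelings=0))   # -4: don't do X
print(net_consequences([-5, +1], deciders_feelings=-2))  # -6: still don't

# Case 2: doing X is net positive on the other consequences alone.
# Mandated bad feelings only erode the net good of doing the right
# thing, and feelings that are strong enough flip the verdict.
print(net_consequences([+5, -1], deciders_feelings=0))   # +4: do X
print(net_consequences([+5, -1], deciders_feelings=-2))  # +2: do X, less good
print(net_consequences([+5, -1], deciders_feelings=-5))  # -1: verdict flipped

# The perverse case from the aside above: enough happiness from doing X
# can outweigh otherwise clearly negative consequences.
print(net_consequences([-5, +1], deciders_feelings=+6))  # +2: do X?!
```

However the numbers are chosen, the structure is the same: once feelings are entries in the ledger, a blanket requirement that they be negative can only subtract from whatever doing X is otherwise worth.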

You could avoid worries about weighing feelings as consequences by shifting from a consequentialist ethical framework to something else, but I don’t think that’s going to be much help here.

Kantian ethics, for example, won’t pin the ethics of doing X to the net consequences, but instead it will come down to something like whether it is your duty to do X (where your duty is to respect the rational capacity in yourself and in others, to treat people as ends in themselves rather than as mere means). Your feelings are no part of what a Kantian would consider in judging whether your action is ethical or not. Indeed, Kantians stress that ethical acts are motivated by recognizing your duty precisely because feelings can be a distraction from behaving as we should.

Virtue ethicists, on the other hand, do talk about the agent’s feelings as ethically relevant. Virtuous people take pleasure in doing the right things and feel pain at the prospect of doing the wrong thing. However, if doing X is right under the circumstances, the virtuous person will feel good about doing X, not conflicted about it — so the claim that doing X should always be difficult and emotional doesn’t make much sense here either. Moreover, virtue ethicists describe the process of becoming virtuous as one in which behaving in virtuous ways usually precedes developing the emotional disposition to take pleasure in acting virtuously.

Long story short, it’s hard to make sense of the claim “To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences” — unless what is really being claimed is that doing X is always unethical and you should always feel bad for doing X. If that’s the claim, though, the emotions are pretty secondary.

But beyond the incoherence of the claim, here’s what really bugs me about it: It seems to assert that ethicists (and philosophers more generally) are in the business of telling people how to feel. That, my friends, is nonsense. Indeed, I’m on record prioritizing changes in unethical behavior over any interference with what’s in people’s hearts. How we behave, after all, has much more impact on our success in sharing a world with each other than how we feel.

This is not to say that I don’t recognize a likely connection between what’s in people’s hearts and how they behave. For example, I’m willing to bet that improvements in our capacity for empathy would likely lead to more ethical behavior.

But it’s hard to see it as empathetic to tell people they should generally feel bad for making a choice that, under the circumstances, is an ethical one. If anything, requiring such negative emotions is a failure of empathy, and punitive to boot.

Clearly, there exist ethicists and philosophers who operate this way, but many of us try to do better. Indeed, it’s reasonable for you all to expect and demand that we do better.

Soothing jellies

One day into ScienceOnline Together 2014, my head is full of ideas and questions and hunches that weren’t there a day ago.

I’ll be posting about some of them after I’ve had some time to digest them. In the meantime, I’m looking at pictures of jellies I snapped on a recent trip to the Monterey Bay Aquarium.

In addition to being pretty interesting animals, jellies are very relaxing to look at. Which is nice.

Warty comb jellies

Sea nettles

Moon jellies

How to be ethical while getting the public involved in your science

At ScienceOnline Together later this week, Holly Menninger will be moderating a session on “Ethics, Genomics, and Public Involvement in Science”.

Because the ethical (and epistemic) dimensions of “citizen science” have been on my mind for a while now, in this post I share some very broad, pre-conference thoughts on the subject.

Ethics is a question of how we share a world with each other. Some of this is straightforward and short-term, but sometimes engaging each other ethically means taking account of long-range consequences, including possible consequences that may be difficult to foresee unless we really work to think through the possibilities ahead of time — and unless this thinking through of possibilities is informed by knowledge of some of the technologies involved and of the history of the kinds of unforeseen outcomes that have led to ethical problems before.

Ethics is more than merely meeting your current legal and regulatory requirements. Anyone taking that kind of minimalist approach to ethics is gunning to be a case study in an applied ethics class (probably within mere weeks of becoming a headline in a major news outlet).

With that said, if you’re running a project you’d describe as “citizen science” or as cultivating public involvement in science, here are some big questions I think you should be asking from the start:

1. What’s in it for the scientists?

Why are you involving members of the public in your project?

Are they in the field collecting observations that you wouldn’t have otherwise, or on their smart phones categorizing the mountains of data you’ve already collected? In these cases, the non-experts are providing labor you need for vital non-automatable tasks.

Are they sending in their biological samples (saliva, cheek swab, belly button swab, etc.)? In these cases, the non-experts are serving as human subjects, expanding the pool of samples in your study.

In both of these cases, scientists have ethical obligations to the non-scientists they are involving in their projects, although the ethical obligations are likely to be importantly different. In any case where a project involves humans as sources of biological samples, researchers ought to be consulting an Institutional Review Board, at least informally, before the project is initiated (which includes the start of anything that looks like advertising for volunteers who will provide their samples).

If volunteers are providing survey responses or interviews instead of vials of spit, there’s a chance they’re still acting as human subjects. Consult an IRB in the planning stages to be sure. (If your project is properly exempt from IRB oversight, there’s no better way to show it than an exemption letter from an IRB.)

If volunteers are providing biological samples from their pets or reports of observations of animals in the field (especially in fragile habitats), researchers ought to be consulting an Institutional Animal Care and Use Committee, at least informally, before the project is initiated. Again, it’s possible that what you’ll discover in this consultation is that the proposed research is exempt from IACUC oversight, but you want a letter from an IACUC to that effect.

Note that IRBs and IACUCs don’t exist primarily to make researchers’ lives hard! Rather, they exist to help researchers identify their ethical obligations to the humans and animals who serve as subjects of their studies, and to help find ways to conduct that research in ways that honor those obligations. A big reason to involve committees in thinking through the ethical dimensions of the research is that it’s hard for researchers to be objective in thinking through these questions about their own projects.

If you’re involving non-experts in your project in some other way, what are they contributing to the project? Are you involving them so you can check off the “broader impacts” box on your grant application, or is there some concrete way that involving members of the public is contributing to your knowledge-building? If the latter, think hard about what kinds of obligations might flow from that contribution.

2. What’s in it for the non-scientists/non-experts/members of the public involved in the project?

Why would members of the public want to participate in your project? What could they expect to get from such participation?

Maybe they enjoy being outdoors counting birds (and would be doing so even if they weren’t participating in the project), or looking at pictures of galaxies from space telescopes. Maybe they are curious about what’s in their genome or what’s in their belly-button. Maybe they want to help scientists build new knowledge enough to participate in some of the grunt-work required for that knowledge-building. Maybe they want to understand how that grunt-work fits into the knowledge-building scientists do.

It’s important to understand what the folks whose help you’re enlisting think they’re signing on for. Otherwise, they may be expecting something from the experience that you can’t give them. The best way to find out what potential participants are looking for from the experience is to ask them.

Don’t dangle potential diagnostic benefits of participation in a project when any such information is a long, long way off. Don’t promise that tracking the health of streams by screening for the presence of different kinds of bugs will be tons of fun without being clear about the conditions your volunteers will be working under to perform those screenings.

Don’t promise participants that they will be getting a feel for what it’s like to “do science” if, in fact, they are really just providing a sample rather than being part of the analysis or interpretation of that sample.

Don’t promise them that they will be involved in hypothesis-formation or conclusion-drawing if really you are treating them as fancy measuring devices.

3. What’s the relationship between the scientists and the non-scientists in this project? What consequences will this have for relationships between scientists and the public more generally?

There’s a big difference between involving members of the public in your project because it will be enriching for them personally and involving them because it’s the only conceivable way to build a particular piece of knowledge you’re trying to build.

Being clear about the relationship upfront — here’s why we need you, here’s what you can expect in return (both the potential benefits of participation and the potential risks) — is the best way to make sure everyone’s interests are well-served by the partnership and that no one is being deceived.

Things can get complicated, though, when you pull the focus back from how participants are involved in building the knowledge and consider how that knowledge might be used.

Will the new knowledge primarily benefit the scientists leading the project, adding publications to their CVs and helping them make the case for funding for further projects? Could the new knowledge contribute to our understanding (of ecosystems, or human health, for example) in ways that will drive useful interventions? Will those interventions be driven by policy-makers or commercial interests? Will the scientists be a part of this discussion of how the knowledge gets used? Will the members of the public (either those who participated in the project or members of the public more generally) be a part of this discussion — and will their views be taken seriously?

To the extent that participating in a citizen science project, whatever shape that participation may take, can influence non-scientists’ views on science and the scientific community as a whole, the interactions between scientists and volunteers in and around these projects are hugely important. They are an opportunity for people with different interests, different levels of expertise, and different values to find common ground while working together to achieve a shared goal — to communicate honestly, deal with each other fairly, and take each other seriously.

More such ethical engagement between scientists and publics would be a good thing.

But the flip-side is that engagements between scientists and publics that aren’t as honest or respectful as they should be may have serious negative impacts beyond the particular participants in a given citizen science project. They may make healthy engagement, trust, and accountability harder for scientists and publics across the board.

In other words, working hard to do it right is pretty important.

I may have more to say about this after the conference. In the meantime, you can add your questions or comments to the session discussion forum.

The line between persuasion and manipulation.

As this year’s ScienceOnline Together conference approaches, I’ve been thinking about the ethical dimensions of using empirical findings from psychological research to inform effective science communication (or really any communication). Melanie Tannenbaum will be co-facilitating a session about using such research findings to guide communication strategies, and this year’s session is nicely connected to a session Melanie led with Cara Santa Maria at last year’s conference called “Persuading the Unpersuadable: Communicating Science to Deniers, Cynics, and Trolls.”

In that session last year, the strategy of using empirical results from psychology to help achieve success in a communicative goal was fancifully described as deploying “Jedi mind tricks”. Achieving success in communication was cast in terms of getting your audience to accept your claims (or at least getting them not to reject your claims out of hand because they don’t trust you, or don’t trust the way you’re engaging with them, or whatever). But if you have the cognitive launch codes, as it were, you can short-circuit distrust, cultivate trust, help them end up where you want them to end up when you’re done communicating what you’re trying to communicate.

Jason Goldman pointed out to me that these “tricks” aren’t really that tricky — it’s not like you flash the Queen of Diamonds and suddenly the person you’re talking to votes for your ballot initiative or buys your product. As Jason put it to me via email, “From a practical perspective, we know that presenting reasons is usually ineffective, and so we wrap our reasons in narrative – because we know, from psychology research, that storytelling is an effective device for communication and behavior change.”

Still, using a “trick” to get your audience to end up where you want them to end up — even if that “trick” is simply empirical knowledge that you have and your audience doesn’t — sounds less like persuasion than manipulation. People aren’t generally happy about the prospect of being manipulated. Intuitively, manipulating someone else gets us into ethically dicey territory.

As a philosopher, I’m in a discipline whose ideal is that you persuade by presenting reasons for your interlocutor to examine, arguments whose logical structure can be assessed, premises whose truth (or at least likelihood) can be evaluated. I daresay scientists have something like the same ideal in mind when they present their findings or try to evaluate the scientific claims of others. In both cases, there’s the idea that we should be making a concerted effort not to let tempting cognitive shortcuts get in the way of reasoning well. We want to know about the tempting shortcuts (some of which are often catalogued as “informal fallacies”) so we can avoid falling into them. Generally, it’s considered sloppy argumentation (or worse) to try to tempt our audience with those shortcuts.

How much space is there between the tempting cognitive shortcuts we try to avoid in our own reasoning and the “Jedi mind tricks” offered to us to help us communicate, or persuade, or manipulate more effectively? If we’re taking advantage of cognitive shortcuts (or switches, or whatever the more accurate metaphor would be) to increase the chances that people will accept our factual claims, our recommendations, our credibility, etc., can we tell when we’ve crossed the line between persuasion and manipulation? Can we tell when it’s the cognitive switch that’s doing the work rather than the sharing of reasons?

It strikes me as even more ethically problematic if we’re using these Jedi mind tricks while concealing the fact that we’re using them from the audience we’re using them on. There’s a clear element of deception in doing that.

Now, possibly the Jedi mind tricks work equally well if we disclose to our audience that we’re using them and how they work. In that case, we might be able to use them to persuade without being deceptive — and it would be clear to our audience that we were availing ourselves of these tricks, and that our goal was to get them to end up in a particular place. It would be kind of weird, though, perhaps akin to going to see a magician knowing full well that she would be performing illusions and that your being fooled by those illusions is a likely outcome. (Wouldn’t this make us more distrustful in our communicative interactions, though? If you know about the switches and it’s still the case that they can be used against you, isn’t that the kind of thing that might make you want to block lots of communication before it can even happen?)

As a side note, I acknowledge that there might be some compelling extreme cases in which the goal of getting the audience to end up in a particular place — e.g., revealing to you the location of the ticking bomb — is so urgent that we’re prepared to swallow our qualms about manipulating the audience to get the job done. I don’t think that the normal stakes of our communications are like this, though. But there may be some cases where how high the stakes really are is one of the places we disagree. Jason suggests vaccine acceptance or refusal might be important enough that the Jedi mind tricks shouldn’t set off any ethical alarms. I’ll note that vaccine advocates using a just-the-empirical-facts approach to communication are often accused or suspected of having some undisclosed financial conflict of interest that is motivating them to try to get everyone vaccinated — that is, they’re not using the Jedi mind trick social psychologists think could help them persuade their target audience and yet that audience thinks they’re up to something sneaky. That’s a pretty weird situation.

Does our cognitive make-up as humans make it possible to get closer to exchanging and evaluating reasons rather than just pushing each other’s cognitive buttons? If so, can we achieve better communication without the Jedi mind tricks?

Maybe it would require some work to change the features of our communicative environment (or of the environment in which we learn how to reason about the world and how to communicate and otherwise interact with others) to help our minds more reliably work this way. Is there any empirical data on that? (If not, is this a research question psychologists are asking?)

Some of these questions tread dangerously close to the question of whether we humans can actually have free will — and that’s a big bucket of metaphysical worms that I’m not sure I want to dig into right now. I just want to know how to engage my fellow human beings as ethically as possible when we communicate.

These are some of the questions swirling around my head. Maybe next week at ScienceOnline some of them will be answered — although there’s a good chance some more questions will be added to the pile!

Professors, we need you to do more!

…though we can’t be bothered to notice all the work you’re already doing, to acknowledge the ways in which the explicit and implicit conditions of your employment make it extremely difficult to do it, or the ways in which other cultural forces, including the pronouncements of New York Times columnists, make the “more” we’re exhorting you to do harder by alienating the public you’re meant to help from both “academics” and “intellectuals”.

In his column in the New York Times, Nicholas Kristof asserts that most university professors “just don’t matter in today’s great debates,” claiming that instead of stepping up to be public intellectuals, academics have marginalized themselves.

Despite what you may have heard in the school-yard or the op-ed pages, most of us who become university professors (even in philosophy) don’t do so to cloister ourselves from the real world and its cares. We do not become academics to sideline ourselves from public debates nor to marginalize ourselves.

So, as you might guess, I have a few things to say to Mr. Kristof here.

Among other things, Kristof wants professors to do more to engage the public. He writes:

Professors today have a growing number of tools available to educate the public, from online courses to blogs to social media. Yet academics have been slow to cast pearls through Twitter and Facebook.

A quick examination of the work landscape of a professor might shed some light on this slowness.

Our work responsibilities — and the activities on which we are evaluated for retention, tenure, and promotion — can generally be broken into three categories:

  • Research, the building of new knowledge in a discipline as recognized by peers in that discipline (e.g., via peer-review on the way to publication in a scholarly journal).
  • Teaching, the transmission of knowledge in a discipline (including strategies for building more knowledge) to students, whether those majoring in the discipline or studying it at the graduate level in order to become knowledge-builders themselves, or others taking courses to support their general education.
  • Service, generally cast as service to the discipline or service to the university, which often amounts to committee work, journal editing, and the like.

Research — the knowledge-building that academics do — is something Kristof casts as problematic:

academics seeking tenure must encode their insights [from research] into turgid prose. As a double protection against public consumption, this gobbledygook is then sometimes hidden in obscure journals — or published by university presses whose reputations for soporifics keep readers at a distance.

This ignores the academics who strive to write clearly and accessibly even when writing for an audience of their peers (not to mention the efforts of peer-reviewers to encourage more clear and accessible writing from the authors whose manuscripts they review). It also ignores the significant number of academics involved in efforts to bring the knowledge they build from behind the paywalls of closed-access journals to the public.

And, it ignores that the current structures of retention, tenure, and promotion, of hiring, of grant-awarding, keep score with metrics like impact factors that entrench the primacy of a conversation in the pages of peer-reviewed journals while making other conversations objectively worthless — at least from the point of view of the evaluation on which one’s academic career flourishes or founders.

A bit earlier in the column, Kristof includes a quote from Middle East specialist Will McCants that makes this point:

If the sine qua non for academic success is peer-reviewed publications, then academics who “waste their time” writing for the masses will be penalized.

Yet even as Kristof notes that those trying to rebel against the reward system built into the tenure process “are too often crushed or driven away,” he seems to miss the point that exhorting academics to rebel against it anyway sounds like bad advice.

This is especially true in a world where academics lucky enough to have tenure-track jobs are keenly aware of the “excess PhDs” caught in the eternal cycle of postdoctoral appointments or conscripted in the army of adjuncts. Verily, there are throngs of people with the education, the intelligence, and the skills to be public intellectuals but who are scraping by on low pay, oppressively long hours, and the kind of deep uncertainty that comes with a job that is “temporary” by design.

If the public needs professors to share their knowledge more directly, Nicholas Kristof, please explain how they can do so without paying a high professional price. Where are the additional hours in the academic day for the “public intellectual” labor you want them to do (since they will still be expected to participate fully in the knowledge-building and discourse within their disciplinary community)? How will you encourage more professors to step up after the first wave taking your marching orders is denied tenure, or denied grants, or collapses from exhaustion?

More explicit professional recognition — professional credit — for academics engaging with the public would be a good thing. But to make it happen in a sustainable way, you need a plan. And getting buy-in from the administrators who shape and enforce the current systems of professional rewards and punishments makes more sense than exhorting the professors subject to that system to ignore the punishments they’re likely to face — especially at a moment when there are throngs of new and seasoned Ph.D.s available to replace the professors who run afoul of the system as it stands.

Kristof doesn’t say much about teaching in his column, though this is arguably a place where academics regularly do outreach to the segment of the public that shows up in the classroom. Given how few undergraduates go on to be academics themselves, this opportunity for engagement can be significant. Increasingly, though, we university teachers are micromanaged and “assessed” by administrators and committees in response to free-floating anxiety about educational quality and pressure to bring “No Child Left Behind”-style oversight and high-stakes testing to higher ed. Does this increase our ability to put knowledge and insights from our discipline into real-world contexts that matter to our students — that help them broaden their understanding of the challenges that face us individually and collectively, and of different disciplinary strategies for facing them, not just to serve their future employers’ goals, but to serve their own? In my experience, it does not.

Again, if Kristof wants better engagement between academics and the public — which, presumably, includes the students who show up in the classroom and will, in their post-college lives, be part of the public — he might get better results by casting some light on the forces that derail engagement in college teaching.

Despite all these challenges, the fact is that many academics are already engaging the public. However, Nicholas Kristof seems not to have noticed this. He writes:

Professors today have a growing number of tools available to educate the public, from online courses to blogs to social media. Yet academics have been slow to cast pearls through Twitter and Facebook.

The academics who have been regularly engaging with the public on Facebook and Twitter and G+ and YouTube and blogs and podcasts — many of us for years — would beg to differ with this assessment. Check out the #EngagedAcademics hashtag for a sampling of the response.

As well, there are academics writing for mass-circulation publications, whether online or in dead-tree form, working at science festivals and science fairs, going into elementary and secondary school classrooms, hosting or participating in local events like Café Scientifique or Socrates Café, going on radio or TV programs, writing letters to the editors of their local papers, going to town council and school board meetings.

Either all of this sort of engagement is invisible to Nicholas Kristof, or he thinks it doesn’t really count towards the work of being a public intellectual.

I wonder if this is because Kristof has in mind public intellectuals who have a huge reach and an immediate impact. If so, it would be good to ask who controls the microphone and why the academics from whom Kristof wants more aren’t invited to use it. It should be noted here that the New York Times, where Kristof has a regular column, is a pretty big microphone.

Also, it’s worth asking whether there’s good (empirical) reason to believe that one-to-many communication by academics who do have access to a big microphone serves the needs of the public better than smaller-scale communications (some of them one-to-one) in which academics are not just professing their knowledge to members of the public but actually listening to them to find out what they want to know and what they care about. Given what seems to be a persistent attitude of suspicion of and alienation from “intellectuals” among members of the public, engagement on a human level strikes me as likely to feel less manipulative — and to be less manipulative.

Maybe Nicholas Kristof has a plan to dispel the public’s reflexive distrust of academics. If so, I trust he’ll lay it out in a column in the not-so-distant future.

I don’t think Kristof is wrong that the public could benefit from engagement with professors, but asserting that we need more while ignoring the conditions that discourage such engagement — and while ignoring the work of the many academics who are engaging the public — is not particularly helpful. Moreover, it seems to put the burden on professors to step up and do more while losing sight of the fact that engagement requires active participation on both sides.

Professors cannot proclaim what they know and assume that the public will automatically absorb that knowledge and, armed with it, act accordingly. It would be somewhat horrifying (for academics and the public alike) if engagement worked that way.

Academics and members of the public are sharing a world. Having various kinds of reliable knowledge about the world is good, as is sharing that knowledge and putting it into useful context, but this is never enough to determine just what we should do with that knowledge. We need to work out, together, our shared interests and goals.

Academics must be part of this discussion, but if other members of the public aren’t willing to engage, it probably doesn’t matter if more professors come to the table.

* * * * *
It should go without saying, but I will say it here anyway, that there are plenty of people who are not professors or academics engaging the public in meaningful ways that should make us recognize them as “public intellectuals” too. My focus here has been on professors since they are the focus of Kristof’s column.

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

In the previous post in this series, we examined the question of what scientists who are trained with significant financial support from the public (which, in the U.S., means practically every scientist trained at the Ph.D. level) owe to the public providing that support. The focus there was personal: I was trained to be a physical chemist, free of charge due to the public’s investment, but I stopped making new scientific knowledge in 1994, shortly after my Ph.D. was conferred.

From a certain perspective, that makes me a deadbeat, a person who has fallen down on her obligations to society.

Maybe that perspective strikes you as perverse, but there are working scientists who seem to share it.

Consider this essay by cancer researcher Scott E. Kern raising the question of whether cancer researchers at Johns Hopkins who don’t come into the lab on a Sunday afternoon have lost sight of their obligations to people with cancer.

Kern wonders if scientists who manage to fit their laboratory research into the confines of a Monday-through-Friday work week might lack a real passion for scientific research. He muses that full weekend utilization of their modern cancer research facility might waste less money (in terms of facilities and overhead, salaries and benefits). He suggests that researchers who are not hard at work in the lab on a weekend are falling down on their moral duty to cure cancer as soon as humanly possible.

The unsupported assumptions in Kern’s piece are numerous (and far from novel). Do we know that having each research scientist spend more hours in the lab increases the rate of scientific returns? Or might there plausibly be a point of diminishing returns, past which additional lab-hours produce no appreciable return? Where’s the economic calculation of the potential damage to the scientists themselves (to their cognitive powers, their health, their personal relationships, their experience of a life outside of work, maybe even their enthusiasm for science) from putting in 80 hours a week? After all, lots of resources are invested in educating and training researchers — enough so that one wouldn’t want to damage those researchers on the basis of an (unsupported) hypothesis offered in the pages of Cancer Biology & Therapy.
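To put the diminishing-returns question in concrete terms, here is a toy model. The logarithmic output curve and every number in it are invented for illustration; it makes no empirical claim about research productivity, it only shows the shape of the worry.

```python
import math

# A toy model of diminishing returns on lab-hours. The logarithmic
# curve and all numbers are invented; only the shape matters here.

def weekly_output(hours):
    # Output grows with hours in the lab, but each extra hour adds less.
    return 10 * math.log(1 + hours)

for hours in (40, 60, 80, 100):
    gain = weekly_output(hours) - weekly_output(hours - 20)
    print(f"{hours:>3} h/week: output {weekly_output(hours):5.1f}, "
          f"gain from the last 20 h: {gain:.1f}")

# Beyond a 40-hour week, each additional 20 hours buys a smaller gain
# (4.0, then 2.8, then 2.2) -- and this model doesn't yet subtract any
# cost of fatigue, error, or burnout, which could push the marginal
# return of those extra hours to zero or below.
```

On a curve anything like this, the implicit premise that more hours buy proportionally more progress fails, and the case for demanding weekends in the lab would have to be made on evidence rather than assumed.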

And while Kern is doing economic calculations, he might want to consider the impact on facilities of research activity proceeding full-tilt, 24/7. Without some downtime, equipment and facilities might wear out faster than they would otherwise.

Nowhere here does Kern consider the option of hiring more researchers to work 40 hour weeks, instead of persuading the existing research workforce into spending 60, 80, 100 hours a week in the lab.

These researchers might still end up bringing work home (if they ever get a chance to go home).

Kern might dismiss this suggestion on purely economic grounds — organizations are more likely to want to pay for fewer employees (with benefits) who can work more hours than to pay to have the same number of hours of work done by more employees. He might also dismiss it on the basis that the people who really have the passion needed to do the research to cure cancer will not prioritize anything else in their lives above doing that research and finding that cure.

But one assumes passion of the sort Kern seems to have in mind would be the kind of thing that would drive researchers to the lab no matter what, even in the face of long hours, poor pay, grinding fatigue. If that is so, it’s not clear how the problem is solved by browbeating researchers without this passion into working more hours because they owe it to cancer patients. Indeed, Kern might consider, in light of the relative dearth of researchers with passion sufficient to fill the cancer research facilities on weekends, the necessity of making use of the research talents and efforts of people who don’t want to spend 60 hours a week in the lab. Kern’s piece suggests he’d have a preference for keeping such people out of the research ranks (despite the significant societal investment made in their scientific training), but by his own account there would hardly be enough researchers left in that case to keep research moving forward.

Might not these conditions prompt us to reconsider whether the received wisdom of scientific mentors is always so wise? Wouldn’t this be a reasonable place to reevaluate the strategy for accomplishing the grand scientific goal?

And Kern does not even consider a pertinent competing hypothesis, that people often have important insights into how to move research forward in the moments when they step back and allow their minds to wander. Perhaps less time away from one’s project means fewer of these insights — which, on its face, would be bad for the project of curing cancer.

The strong claim at the center of Kern’s essay is an ethical claim about what researchers owe cancer patients, about what cancer patients can demand from researchers (or any other members of society), and on what basis.

He writes:

During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?

Kern’s formulation of this interaction of rights and duties strikes me as odd. Essentially, he’s framing this as a question of whether people with families have a right to a career in cancer research, rather than whether cancer researchers have a right to have families (or any other parts of their lives that exist beyond their careers). Certainly, there have been those who have treated scientific careers as vocations requiring many sacrifices, who have acted as if there is a forced choice between having a scientific career and having a family (unless one has a wife to tend to that family).

We should acknowledge, however, that having a family life is just one way to “have a life.” Therefore, let’s consider the question this way: Do cancer researchers have a right to a life outside of work?

Kern’s suggestion is that this “right,” when exercised by researchers, is something that cancer patients end up paying for with their lives (unless they go into suspended animation while cancer researchers are spending time with their families or puttering around their gardens).

The big question, then, is what the researcher’s obligations are to the cancer patient — or to society in general.

If we’re to answer that question, I don’t think it’s fair to ignore the related questions: What are society’s obligations to the cancer patient? What are society’s obligations to researchers? And what are the cancer patient’s obligations in all of this?

We’ve already spent some time discussing scientists’ putative obligation to repay society’s investment in their training:

  • society has paid for the training the scientists have received (through federal funding of research projects, training programs, etc.)
  • society has pressing needs that can best (only?) be addressed if scientific research is conducted
  • those few members of society who have specialized skills that are needed to address particular societal needs have a duty to use those skills to address those needs (i.e., if you can do research and most other people can’t, then to the extent that society as a whole needs the research that you can do, you ought to do it)

Arguably, finding cures and treatments for cancer would be among those societal needs.

Once again the Spider-Man ethos rears its head: with great power comes great responsibility, and scientific researchers have great power. If cancer researchers won’t help find cures and treatments for cancer, who else can?

Here, I think we should pause to note that there is probably an ethically relevant difference between offering help and doing everything you possibly can. It’s one thing to donate a hundred bucks to charity and quite another to give all your money and sell all your worldly goods in order to donate the proceeds. It’s a different thing for a healthy person to donate one kidney than to donate both kidneys plus the heart and lungs.

In other words, there is help you can provide, but there seems also to be a level of help that it would be wrong for anyone else to demand of you. Possibly there is also a level of help that it would be wrong for you to provide even if you were willing to do so because it harms you in a fundamental and/or irreparable way.

And once we recognize that such a line exists between the maximum theoretical help you could provide and the help you are obligated to provide, I think we have to recognize that the needs of cancer patients do not — and should not — trump every other interest of other individuals or of society as a whole. If a cancer patient cannot lay claim to the heart and lungs of a cancer researcher, then neither can that cancer patient lay claim to every moment of a cancer researcher’s time.

Indeed, in this argument of duties that spring from ability, it seems fair to ask why it is not the responsibility of everyone who might get cancer to train as a cancer researcher and contribute to the search for a cure. Why should tuning out in high school science classes, or deciding to pursue a degree in engineering or business or literature, excuse one from responsibility here? (And imagine how hard it’s going to be to get kids to study for their AP Chemistry or AP Biology classes when word gets out that their success is setting them up for a career where they ought never to take a day off, go to the beach, or cultivate friendships outside the workplace. Nerds can connect the dots.)

Surely anyone willing to argue that cancer researchers owe it to cancer patients to work the kind of hours Kern seems to think would be appropriate ought to be asking what cancer patients — and the precancerous — owe here.

Does Kern think researchers owe all their waking hours to the task because there are so few of them who can do this research? Reports from job seekers over the past several years suggest that there are plenty of other trained scientists who could do this research but have not been able to secure employment as cancer researchers. Some may be employed in other research fields. Others, despite their best efforts, may not have secured research positions at all. What are their obligations here? Ought those employed in other research areas to abandon their current research to work on cancer, departments and funders be damned? Ought those who are not employed in a research field to be conducting their own cancer research anyway, without benefit of institution or facilities, research funding or remuneration?

Why would we feel scientific research skills, in particular, should make the individuals who have them so subject to the needs of others, even to the exclusion of their own needs?

Verily, if scientific researchers and the special skills they have are so very vital to providing for the needs of other members of society — vital enough that people like Kern feel it’s appropriate to criticize them for wanting any time out of the lab — doesn’t society owe it to its members to give researchers every resource they need for the task? Maybe even to create conditions in which everyone with the talent and skills to solve the scientific problems society wants solved can apply those skills and talents — and live a reasonably satisfying life while doing so?

My hunch is that most cancer patients would actually be less likely than Kern to regard cancer researchers as of merely instrumental value. I’m inclined to think that someone fighting a potentially life-threatening disease would be reluctant to deny someone else the opportunity to spend time with loved ones or to savor an experience that makes life worth living. To the extent that cancer researchers do sacrifice some aspects of the rest of their life to make progress on their work, I reckon most cancer patients appreciate these sacrifices. If more is needed for cancer patients, it seems reasonable to place this burden on society as a whole — teeming with potential cancer patients and their relatives and friends — to enable more (and more effective) cancer research to go on without drastically restricting the lives of the people qualified to conduct it, or writing off their interests in their own human flourishing.

As a group, scientists do have special capabilities with which they could help society address pressing problems. To the extent that they can help society address those problems, scientists probably should — not least because scientists are themselves part of society. But despite their special powers, scientists are still human beings with needs, desires, interests, and aspirations. A society that asks scientists to direct their skills and efforts towards solving its problems also has a duty to give scientists the same opportunities to flourish that it provides for its members who happen not to be scientists.

In the next post in this series, I’ll propose a less economic way to think about just what society might be buying when it invests in the training of scientists. My hope is that this will give us a richer and more useful picture of the obligations scientists and non-scientists have to each other as they are sharing a world.

* * * * *
Ancestors of this post first appeared on Adventures in Ethics and Science
_____

Kern, S. E. (2010). Where’s the passion? Cancer Biology & Therapy, 10(7), 655–657.
_____
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

What do I owe society for my scientific training? Obligations of scientists (part 6)

One of the dangers of thinking hard about your obligations is that you may discover one that you’ve fallen down on. As we continue our discussion of the obligations of scientists, I put myself under the microscope and invite you to consider whether I’ve incurred a debt to society that I have failed to pay back.

In the last post in this series, we discussed the claim that those in our society with scientific training have a positive duty to conduct scientific research in order to build new scientific knowledge. The source of that putative duty is two-fold. On the one hand, it’s a duty that flows from the scientist’s abilities in the face of societal needs: if people trained to build new scientific knowledge won’t build the new scientific knowledge needed to address pressing problems (like how to feed the world, or hold off climate change, or keep us all from dying from infectious diseases, or what have you), we’re in trouble. On the other hand, it’s a duty that flows from the societal investment that nurtures the development of these special scientific abilities: in the U.S., it’s essentially impossible to get scientific training at the Ph.D. level that isn’t subsidized by public funding. Public funding is used to support the training of scientists because the public expects a return on that investment in the form of grown-up scientists building knowledge which will benefit the public in some way. By this logic, people who take advantage of that heavily subsidized scientific training but don’t go on to build scientific knowledge when they are fully trained are falling down on their obligation to society.

People like me.

From September 1989 through December 1993, I was in a Ph.D. program in chemistry. (My Ph.D. was conferred January 1994.)

As part of this program, I was enrolled in graduate coursework (two chemistry courses per quarter for my first year, plus another chemistry course and three math courses, for fun, during my second year). I didn’t pay a dime for any of this coursework (beyond buying textbooks and binder paper and writing implements). Instead, tuition was fully covered by my graduate tuition stipend (which also covered “units” in research, teaching, and department seminar that weren’t really classes but appeared on our transcripts as if they were). Indeed, beyond the tuition reimbursement I was paid a monthly stipend of $1000, which seemed like a lot of money at the time (despite the fact that more than a third of it went right to rent).

I was also immersed in a research lab from January 1990 onward. Working in this lab was the heart of my training as a chemist. I was given a project to start with — a set of empirical questions to try to answer about a far-from-equilibrium chemical system that one of the recently-graduated students before me had been studying. I had to digest a significant chunk of experimental and theoretical literature to grasp why the questions mattered and what the experimental challenges in answering them might be. I had to assess the performance of the experimental equipment we had on hand, spend hours with calibrations, read a bunch of technical manuals, disassemble and reassemble pumps, write code to drive the apparatus and to collect data, identify experimental constraints that were important to control (and that, strangely, were not identified as such in the experimental papers I was working from), and, when I determined that the chemical system I had started with was much too fussy to study with the equipment the lab could afford, identify a different chemical system that I could use to answer similar questions and persuade my advisor to approve this new plan.

In short, my time in the lab had me learning how to build new knowledge (in a particular corner of physical chemistry) by actually building new knowledge. The earliest stages of my training had me juggling the immersion into research with my own coursework and with teaching undergraduate chemistry students as a lab instructor and teaching assistant. Some weeks, this meant I was learning less about how to make new scientific knowledge than I was about how to tackle my problem sets or how to explain buffers to pre-meds. Past the first year of the program, though, my waking hours were dominated by getting experiments designed, collecting loads of data, and figuring out what it meant. There were significant stretches of time during which I got into the lab by 5 AM and didn’t leave until 8 or 9 PM, and the weekend days when I didn’t go into the lab were usually consumed with coding, catching up on relevant literature, or drafting manuscripts or thesis chapters.

Once, for fun, some of us grad students did a back-of-the-envelope calculation of our hourly wages. It was remarkably close to the minimum wage I had been paid as a high school student in 1985. Still, we were getting world-class scientific training, for free! We paid with the sweat of our brows, but wouldn’t we have to put in that time and effort to learn how to make scientific knowledge anyway? Sure, we graduate students did the lion’s share of the hands-on teaching of undergraduates in our chemistry department (undergraduates who were paying a significant tuition bill), but we were learning, from some of the best scientists in the world, how to be scientists!
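For concreteness, here is roughly how that envelope math could go. The $1000-per-month stipend is the figure given above, but the 70-hour week is my assumption based on the days just described, so treat the result as a reconstruction rather than the number we actually computed.

```python
# A rough reconstruction of that back-of-the-envelope calculation.
# The $1000/month stipend is stated in the post; the 70-hour week is
# an assumed typical lab week, not the figure we actually used.

monthly_stipend = 1000.00   # dollars per month
hours_per_week = 70         # assumption: 5 AM to 8 PM days add up fast
weeks_per_month = 52 / 12   # about 4.33

hourly_wage = monthly_stipend / (hours_per_week * weeks_per_month)
print(f"${hourly_wage:.2f} per hour")  # about $3.30 per hour

# For comparison, the U.S. federal minimum wage in 1985 was $3.35/hour.
```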

Having gotten what amounts to a full-ride for that graduate training, due in significant part to public investment in scientific training at the Ph.D. level, shouldn’t I be hunkered down somewhere working to build more chemical knowledge to pay off my debt to society?

Do I have any good defense to offer for the fact that I’m not building chemical knowledge?

For the record, when I embarked on Ph.D. training in chemistry, I fully expected to be an academic chemist when I grew up. I really did imagine that I’d have a long career building chemical knowledge, training new chemists, and teaching chemistry to an audience that included some future scientists and some students who would go on to do other things but who might benefit from a better understanding of chemistry. Indeed, when I was applying to graduate programs, my chemistry professors were talking up the “critical shortage” of Ph.D. chemists. (By January of my first year in graduate school, I was reading reports that there were actually something like 30% more Ph.D. chemists than there were jobs for Ph.D. chemists, but a first-year grad student is not necessarily freaking out about the job market while she is wrestling with her experimental system.) I did not embark on a chemistry Ph.D. as a collectable. I did not set out to be a dilettante.

In the course of the research that was part of my Ph.D. training, I actually built some new knowledge and shared it with the public, at least to the extent of publishing it in journal articles (four of them, an average of one per year). It’s not clear what the balance sheet would say about this rate of return on the public’s investment in my scientific training — nor whether most taxpayers would judge the knowledge I built (about the dynamics of far-from-equilibrium chemical reactions and about ways to devise useful empirical tests of proposed reaction mechanisms) as useful knowledge.

Then again, no part of how our research was evaluated in grad school was framed in terms of societal utility. You might try to describe how your research had broader implications that someone outside your immediate subfield could appreciate if you were writing a grant to get the research funded, but solving society’s pressing scientific problems was not the sine qua non of the research agendas we were advancing for our advisors or developing for ourselves.

As my training was teaching me how to conduct serious research in physical chemistry, it was also helping me to discover that my temperament was maybe not so well suited to life as a researcher in physical chemistry. I found, as I was struggling with a grant application that asked me to describe the research agenda I expected to pursue as an academic chemist, that the questions that kept me up at night were not fundamentally questions about chemistry. I learned that no part of me was terribly interested in the amount of grant-writing and lab administration that would have been required of me as a principal investigator. Looking at the few women training me at the Ph.D. level, I surmised that I might have to delay or skip having kids altogether to survive academic chemistry — and that the competition for those faculty jobs where I’d be able to do research and build new knowledge was quite fierce.

Plausibly, had I been serious about living up to my obligation to build new knowledge by conducting research, I could have been a chemist in industry. As I was finishing up my Ph.D., the competition for industry jobs for physical chemists like me was also pretty intense. What I gathered as I researched and applied for industry jobs was that I didn’t really like the culture of industry. And, while working in industry would have been a way for me to conduct research and build new knowledge, I might have ended up spending more time solving the shareholders’ problems than solving society’s problems.

If I wasn’t going to do chemical research in an academic career and I wasn’t going to do chemical research in an industrial job, how should I pay society back for the publicly-supported scientific training I received? Should I be building new scientific knowledge on my own time, in my own garage, until I’ve built enough that the debt is settled? How much new knowledge would that take?

The fact is, none of us Ph.D. students seemed to know at the time that public money was making it possible for us to get graduate training in chemistry without paying for that training. Nor was there an explicit contract we were asked to sign as we took advantage of this public support, agreeing to work for a certain number of years upon the completion of our degrees as chemists serving the public’s interests. Rather, I think most of us saw an opportunity to pursue a subject we loved and to get the preparation we would need to become principal investigators in academia or industry if we decided to pursue those career paths. Most of us probably didn’t know enough about what those career paths would be like to have told you at the beginning of our Ph.D. training whether those career paths would suit our talents or temperaments — that was part of what we were trying to find out by pursuing graduate studies. And practically, many of us would not have been able to find out if we had had to pay the costs of our Ph.D. training ourselves.

If no one who received scientific training subsidized by the public went on to build new scientific knowledge, this would surely be a problem for society. But do we want to say that everyone who receives such subsidized training is on the hook to pay society back by building new scientific knowledge until such time as society has all the scientific knowledge it needs?

That strikes me as too strong. However, given that I’ve benefitted directly from a societal investment in Ph.D. training that, for all practical purposes, I stopped using in 1994, I’m probably not in a good position to make an objective judgment about just what I do owe society to pay back this debt. Have I paid it back already? Is society within its rights to ask more of me?

Here, I’ve thought about the scientist’s debt to society — my debt to society — in very personal terms. In the next post in the series, we’ll revisit these questions on a slightly larger scale, looking at populations of scientists interacting with the larger society and seeing what this does to our understanding of the obligations of scientists.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

On speaking up when someone in your profession behaves unethically.

On Twitter recently there was some discussion of a journalist who wrote and published a piece that arguably did serious harm to its subject.

As the conversation unfolded, Kelly Hills helpfully dropped a link to the Society of Professional Journalists Code of Ethics. Even cursory inspection of this code made it quite clear that the journalist (and editor, and publisher) involved in the harmful story weren’t just making decisions that happened to turn out badly. Rather, they were acting in ways that violate the ethical standards for the journalistic profession articulated in this code.

One take-away lesson from this is that being aware of these ethical standards and letting them guide one’s work as a journalist could head off a great deal of harm.

Something else that came up in the discussion, though, was what seemed like a relative dearth of journalists standing up to challenge the unethical conduct of the journalist (and editor, and publisher) in question. Edited to add: A significant number of journalists even used social media to give the problematic piece accolades.

I follow a lot of journalists on Twitter. A handful of them condemned the unethical behavior in this case. The rest may be busy with things offline. It is worth noting that the Society of Professional Journalists Code of Ethics includes the following:

Journalists should:

  • Clarify and explain news coverage and invite dialogue with the public over journalistic conduct.
  • Encourage the public to voice grievances against the news media.
  • Admit mistakes and correct them promptly.
  • Expose unethical practices of journalists and the news media.
  • Abide by the same high standards to which they hold others.

That fourth bullet point doesn’t quite say that journalists ought to call out bad journalistic behavior that has already been exposed by others. However, using one’s voice to condemn unethical conduct when you see it is one of the ways that people know that you’re committed to ethical conduct. (The other way people know you’re committed to ethical conduct is that you conduct yourself ethically.)

In a world where the larger public is probably going to take your professional tribe as a package deal, extending trust to the lot of you or feeling mistrust for the lot of you, reliably speaking up about problematic conduct when you see it is vital in earning the public’s trust. Moreover, criticisms from inside the professional community seem much more likely to be effective in persuading its members to embrace ethical conduct than criticisms from outside the profession. It’s just too easy for people on the inside to dismiss the critique from people on the outside with, “They just don’t understand what we do.”

There’s a connection here between what’s good for the professional community of journalists and what’s good for the professional community of scientists.

When scientists behave unethically, other scientists need to call them out — not just because the unethical behavior harms the integrity of the scientific record, or the opportunities of particular members of the scientific community to flourish, or the health or safety of patients, but because this is how members of the community teetering on the brink of questionable decisions remember that the community does not tolerate such behavior. This is how they remember that those codes of conduct are not just empty words. This is how they remember that their professional peers expect them to act with integrity every single day.

If members of a professional community are not willing to demand ethical behavior from each other in this way, how can the public be expected to trust that professional community to behave ethically?

Undoubtedly, there are situations that can make it harder to take a stand against unethical behavior in your professional community, power disparities that can make calling out the bad behavior dangerous to your own standing in the professional community. As well, shared membership in a professional community creates a situation where you’re inclined to give your fellow professional the benefit of the doubt rather than starting from a place of distrust in your engagements.

But if only a handful of voices in your professional community are raised to call out problematic behavior that the public has identified and is taking very seriously, what does that communicate to the public?

Maybe that you see the behavior, don’t think it’s problematic, but can’t be bothered to explain why it’s not problematic (because the public’s concerns just don’t matter to you).

Maybe that you see the behavior, recognize that it’s problematic, but don’t actually care that much when it happens (and if the public is concerned about it, that’s their problem, not yours).

Maybe that you’re working very hard not to see the problematic behavior (which, in this case, probably means you’re also working very hard not to hear the public voicing its concerns).

Sure, there’s a possibility that you’re working very hard within your professional community to address the problematic behavior and make sure it doesn’t happen again, but if the public doesn’t see evidence of these efforts, it’s unreasonable to expect them to know they’re happening.

It’s hard for me to see how the public’s trust in a profession is supposed to be strengthened when people in the professional community fail to speak out against unethical conduct by its members that the public already knows about. Indeed, I think a profession that only calls out bad behavior in its ranks that the public already knows about is skating on pretty thin ice.

It surely feels desperately unfair to all the members of a professional community working hard to conduct themselves ethically when the public judges the whole profession on the basis of the bad behavior of a handful of its members. One may be tempted to protest, “We’re not all like that!” That’s not really addressing the public’s complaint, though: The public sees at least one of you who’s “like that”; what are the rest of you doing about that?

If the public has good reason to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks, the public is more likely to see the bad actors as outliers.

But the public is more likely to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks when they see that happen, regularly.

Nature and trust.

Here are some things that I know:

Nature is a high-impact scientific journal that is widely read in the scientific community.

The editorial mechanisms Nature employs are meant to ensure the quality of the publication.

Reports of scientific research submitted to Nature undergo peer review (as do manuscripts submitted to other scholarly scientific journals). As well, Nature publishes items that are not peer-reviewed — for example, news pieces and letters to the editor. Nonetheless, the pieces published in Nature that don’t undergo peer review are subjected to editorial oversight.

Our human mechanisms for ensuring the quality of items that are published are not perfect. Peer reviewers sometimes get fooled. Editors sometimes make judgments that, in retrospect, they would not endorse.

The typical non-scientist who knows about journals like Nature is in the position of being generally trusting that peer review and editorial processes do the job of ensuring the high quality of the contents of these journals, or of being generally distrusting. Moreover, my guess is that the typical non-scientist, innocent of the division of labor on the vast editorial teams employed by journals like Nature, takes for granted that the various items published in such journals reflect sound science — or, at the very least, do not put forward claims that are clearly at odds with the body of existing scientific research.

Non-scientists, in other words, are trusting that the editorial processes at work in a journal like Nature produce a kind of conversation within the scientific community, one that weeds out stuff scientists would recognize as nonsense.

This trust is important because non-scientists do not have the same ability to identify and weed out nonsense. Nature is a kind of scientific gatekeeper for the larger public.

This trust is also something that can be played — for example, by a non-expert with an agenda who manages to get a letter published in a journal like Nature. While such correspondence may not impress a scientist, a “publication in Nature” of this sort may be taken as credible by non-scientists on the basis of the trust they have that such a well-known scientific journal must have editorial processes that reliably weed out nonsense.

In a world where we divide the cognitive labor this way, where non-scientists need to trust scientists to build reliable knowledge and organs of scientific communication to weed out nonsense, the stakes are very high for the scientists and the organs of scientific communication to live up to that trust — to get it right most of the time, and to be transparent enough about their processes that when they don’t get it right it’s reasonably easy to diagnose what went wrong and to fix it.

Otherwise, scientists and the organs of scientific communication risk losing the trust of non-scientists.

I’ve been thinking about this balance of trust and accountability in the context of a letter that was published in Nature asserting, essentially, that the underrepresentation of women as authors and peer reviewers in Nature is no kind of problem, because male scientists have merit and women scientists have child care obligations.

Kelly Hills has a clear and thorough explanation of what made publishing this particular letter problematic. It’s not just that the assertions of the letter writer are not supported by the research (examples of which Kelly helpfully links). It’s not just that there’s every reason to believe that the letter writer will try to spin the publication of his letter in Nature as reason to give his views more credence.

It’s also that the decision to publish this letter suggests that the question of women’s ability to do good science is a matter of legitimate debate.

In the discussion of this letter on Twitter, I saw the suggestion that the letter was selected for publication because it was representative of a view that had been communicated by many correspondents to Nature.

In a journal that the larger public takes to be a source of views that are scientifically sound, or at least scientifically plausible (rather than at odds with a growing body of empirical research), the mere fact that many people have expressed a view in letters strikes me as insufficient reason to publish it. I suspect that if a flurry of letters were to arrive asserting that the earth is stationary in the center of the universe, or that the earth is flat, the editorial staff in charge of correspondence wouldn’t feel the need to publish letters conveying these views — especially if the letters came from people without scientific training or active involvement in scientific work of some sort. I’d even be willing to make a modest bet that Nature regularly gets a significant amount of correspondence communicating crackpot theories of one sort or another. (I’m not running a major organ of scientific communication, and I regularly get a significant amount of correspondence communicating crackpot theories of one sort or another.) Yet these crackpot theories do not regularly populate Nature’s “Correspondence” page.

In response to the objections raised to the publication of this letter, the Nature Editorial staff posted this comment:

Nature has a strong history of supporting women in science and of reflecting the views of the community in our pages, including Correspondence. Our Correspondence pages do not reflect the views of the journal or its editors; they reflect the views only of the correspondents.

We do not endorse the views expressed in this Correspondence (or indeed any Correspondences unless we explicitly say so). On re-examining the letter and the process, we consider that it adds no value to the discussion and unnecessarily inflames it, that it did not receive adequate editorial attention, and that we should not have published it, for which we apologize. This note will appear online on nature.com in the notes section of the Correspondence and in the Correspondence’s pdf.

Nature’s own positive views and engagement in the issues concerning women in science are represented by our special from 2013:
www.nature.com/women
Philip Campbell, Editor-in-Chief, Nature

(Bold emphasis added.)

I think this editorial pivot is a wise one. The letter in question may have represented a view many people have, but it didn’t offer any new facts or novel insight. And it’s not like women in science don’t know that they are fighting against biases — even biases in their own heads — every single day. They didn’t need to read a letter from some guy in Nature to become aware of this bit of their professional terrain.

So, the apology is good. But it is likely insufficient.

At this point, Nature may also have some trust to rebuild with women, whether those women are members of the scientific community or members of the larger public. While it is true that Nature devoted a special issue to the challenges faced by women in science, they also gave the editorial green light to a piece of “science fiction” that reinforced, rather than challenged, the gendered assumptions that make it harder for women in science.

And yes, we understand that different editors oversee the peer-reviewed reports of scientific research and the news items, the correspondence and the short fiction. But our view of organizations — our trust of organizations — tends to bundle these separate units together. This is pretty unavoidable unless we personally know each of the editors in each of the units (and even personal acquaintance doesn’t mean our trust is indestructible).

All of which is to say: as an organization, Nature still has some work to do to win back the trust of women (and others) who cannot think of the special issue on women in science without also thinking of “Womanspace” or the letter arguing that underrepresentation of women in Nature’s pages is just evidence of a meritocracy working as it should.

It would be nice to trust that Nature’s editorial processes will go forth and get it right from here on out, but we don’t want to be played for fools. As well, we may have to do additional labor going forward, cleaning up the fallout from this letter in public discourse on women in science, when we already had plenty of work to do in that zone.

This is a moment where Nature may want women scientists to feel warmly toward the journal, to focus on the good times as representative of where Nature really stands, but trust is something that is rebuilt, or eroded, over iterated engagements every single day.

Trust can’t be demanded. Trust is earned.

Given the role Nature plays in scientific communications and in the communication of science to a broader public, I’m hopeful the editorial staff is ready to do the hard work to earn that trust — from scientists and non-scientists alike — going forward.

* * * * *
Related posts:

Hope Jahren, Why I Turned Down a Q-and-A in Nature Magazine

Anne Jefferson, Megaphones, broken records and the problem with institutional amplification of sexism and racism

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

If you’re a scientist, are there certain things you’re obligated to do for society (not just for your employer)? If so, where does this obligation come from?

This is part of the discussion we started back in September about special duties or obligations scientists might have to the non-scientists with whom they share a world. If you’re just coming to the discussion now, you might want to check out the post where we set out some groundwork for the discussion, plus the three posts on scientists’ negative duties (i.e., the things scientists have an obligation not to do): our consideration of powers that scientists have and should not misuse, our discussion of scientific misconduct, the high crimes against science that scientists should never commit, and our examination of how plagiarism is not only unfair but also hazardous to knowledge-building.

In this post, finally, we lay out some of the positive duties that scientists might have.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette gives a pretty forceful articulation of a set of positive duties for scientists. She asserts that scientists have a duty to do research, and a duty to use research findings in ways that serve the public good. Recall that these positive duties are in addition to scientists’ negative duty to ensure that the knowledge and technologies created by the research do not harm anyone.

Where do scientists’ special duties come from? Shrader-Frechette identifies a number of sources. For one thing, she says, there are obligations that arise from holding a monopoly on certain kinds of knowledge and services. Scientists are the ones in society who know how to work the electron microscopes and atom-smashers. They’re the ones who have the equipment and skills to build scientific knowledge. Such knowledge is not the kind of thing your average non-scientist could build for himself.

Scientists also have obligations that arise from the fact that they have a good chance of success (at least, better than anyone else) when it comes to educating the public about scientific matters or influencing public policy. The scientists who track the evidence that human activity leads to climate change, for example, are the ones who might be able to explain that evidence to the public and argue persuasively for measures that are predicted to slow climate change.

As well, scientists have duties that arise from the needs of the public. If the public’s pressing needs can only be met with the knowledge and technologies produced by scientific research – and if non-scientists cannot produce such knowledge and technologies themselves – then if scientists do no work to meet these needs, who can?

As we’ve noted before, there is, in all of this, that Spider-Man superhero ethos: with great power comes great responsibility. When scientists realize how much power their knowledge and skills give them relative to the non-scientists in society, they begin to see that their duties are greater than they might have thought.

Let’s turn to what I take to be Shrader-Frechette’s more controversial claim: that scientists have a positive duty to conduct research. Where does this obligation come from?

For one thing, she argues, knowledge itself is valuable, especially in democratic societies where it could presumably help us make better choices than we’d be able to make with less knowledge. Thus, those who can produce knowledge should produce it.

For another thing, Shrader-Frechette points out, society funds research projects (through various granting agencies and direct funding from governmental entities). Researchers who accept such research funding are not free to abstain from research. They can’t take the grants and put an addition on the house. Rather, they are obligated to perform the contracted research. This argument is pretty uncontroversial, I think, since asking for money to do the research that will lead to more scientific knowledge and then failing to use that money to build more scientific knowledge is deceptive.

But here’s the argument that I think will meet with more resistance, at least from scientists: In the U.S., in addition to funding particular pieces of scientific research, society pays the bill for training scientists. This is not just true for scientists trained at public colleges and universities. Even private universities get a huge chunk of their money to fund research projects, research infrastructure, and the scientific training they give their students from public sources, including but not limited to federal funding agencies like the National Science Foundation and the National Institutes of Health.

The American people are not putting up this funding out of the goodness of their hearts. Rather, the public invests in the training of scientists because it expects a return on this investment in the form of the vital knowledge those trained scientists go on to produce and share with the public. Since the public pays to train people who can build scientific knowledge, the people who receive this training have a duty to go forth and build scientific knowledge to benefit the public.

Finally, Shrader-Frechette says, scientists have a duty to do research because if they don’t do research regularly, they won’t remain knowledgeable in their field. Not only will they not be up on the most recent discoveries or what they mean, but they will start to lose the crucial experimental and analytic skills they developed when they were being trained as scientists. For the philosophy fans in the audience, this point in Shrader-Frechette’s argument is reminiscent of Immanuel Kant’s example of how the man who prefers not to cultivate his talents is falling down on his duties. If everyone in society chose not to cultivate her talents, each of us would need to be completely self-sufficient (since we could not receive aid from others exercising their talents on our behalf) – and even that would not be enough, since we would not be able to rely on our own talents, having decided not to cultivate them.

On the basis of Shrader-Frechette’s argument, it sounds like every member of society who has had the advantage of scientific training (paid for by your tax dollars and mine) should be working away in the scientific knowledge salt-mine, at least until science has built all the knowledge society needs it to build.

And here’s where I put my own neck on the line: I earned a Ph.D. in chemistry (conferred in January 1994, almost exactly 20 years ago). Like other students in U.S. Ph.D. programs in chemistry, I did not pay for that scientific training. Rather, as Shrader-Frechette points out, my scientific training was heavily subsidized by the American taxpayer. I have not built a bit of new chemical knowledge since the middle of 1994 (when I wrapped up one more project after completing my Ph.D.).

Have I fallen down on my positive duties as a trained scientist? Would it be fair for American taxpayers to try to recover the funds they invested in my scientific training?

We’ll take up these questions (among others) in the next installment of this series. Stay tuned!

_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)