How to be ethical while getting the public involved in your science

At ScienceOnline Together later this week, Holly Menninger will be moderating a session on “Ethics, Genomics, and Public Involvement in Science”.

Because the ethical (and epistemic) dimensions of “citizen science” have been on my mind for a while now, in this post I share some very broad, pre-conference thoughts on the subject.

Ethics is a question of how we share a world with each other. Some of this is straightforward and short-term, but sometimes engaging each other ethically means taking account of long-range consequences, including consequences that may be difficult to foresee unless we really work to think through the possibilities ahead of time, and unless this thinking is informed both by knowledge of the technologies involved and by the history of the kinds of unforeseen outcomes that have led to ethical problems before.

Ethics is more than merely meeting your current legal and regulatory requirements. Anyone taking that kind of minimalist approach to ethics is gunning to be a case study in an applied ethics class (probably within mere weeks of becoming a headline in a major news outlet).

With that said, if you’re running a project you’d describe as “citizen science” or as cultivating public involvement in science, here are some big questions I think you should be asking from the start:

1. What’s in it for the scientists?

Why are you involving members of the public in your project?

Are they in the field collecting observations that you wouldn’t have otherwise, or on their smartphones categorizing the mountains of data you’ve already collected? In these cases, the non-experts are providing labor you need for vital non-automatable tasks.

Are they sending in their biological samples (saliva, cheek swab, belly button swab, etc.)? In these cases, the non-experts are serving as human subjects, expanding the pool of samples in your study.

In both of these cases, scientists have ethical obligations to the non-scientists they are involving in their projects, although the ethical obligations are likely to be importantly different. In any case where a project involves humans as sources of biological samples, researchers ought to be consulting an Institutional Review Board, at least informally, before the project is initiated (which includes the start of anything that looks like advertising for volunteers who will provide their samples).

If volunteers are providing survey responses or interviews instead of vials of spit, there’s a chance they’re still acting as human subjects. Consult an IRB in the planning stages to be sure. (If your project is properly exempt from IRB oversight, there’s no better way to show it than an exemption letter from an IRB.)

If volunteers are providing biological samples from their pets or reports of observations of animals in the field (especially in fragile habitats), researchers ought to be consulting an Institutional Animal Care and Use Committee, at least informally, before the project is initiated. Again, it’s possible that what you’ll discover in this consultation is that the proposed research is exempt from IACUC oversight, but you want a letter from an IACUC to that effect.

Note that IRBs and IACUCs don’t exist primarily to make researchers’ lives hard! Rather, they exist to help researchers identify their ethical obligations to the humans and animals who serve as subjects of their studies, and to help find ways to conduct that research in ways that honor those obligations. A big reason to involve committees in thinking through the ethical dimensions of the research is that it’s hard for researchers to be objective in thinking through these questions about their own projects.

If you’re involving non-experts in your project in some other way, what are they contributing to the project? Are you involving them so you can check off the “broader impacts” box on your grant application, or is there some concrete way that involving members of the public is contributing to your knowledge-building? If the latter, think hard about what kinds of obligations might flow from that contribution.

2. What’s in it for the non-scientists/non-experts/members of the public involved in the project?

Why would members of the public want to participate in your project? What could they expect to get from such participation?

Maybe they enjoy being outdoors counting birds (and would be doing so even if they weren’t participating in the project), or looking at pictures of galaxies from space telescopes. Maybe they are curious about what’s in their genome or what’s in their belly button. Maybe they want to help scientists build new knowledge badly enough to pitch in on some of the grunt-work required for that knowledge-building. Maybe they want to understand how that grunt-work fits into the knowledge-building scientists do.

It’s important to understand what the folks whose help you’re enlisting think they’re signing on for. Otherwise, they may be expecting something from the experience that you can’t give them. The best way to find out what potential participants are looking for from the experience is to ask them.

Don’t offer potential diagnostic benefits from participation in a project for which that information is a long, long way off. Don’t promise that tracking the health of streams by screening for the presence of different kinds of bugs will be tons of fun without being clear about the conditions your volunteers will undergo to perform those screenings.

Don’t promise participants that they will be getting a feel for what it’s like to “do science” if, in fact, they are really just providing a sample rather than being part of the analysis or interpretation of that sample.

Don’t promise them that they will be involved in hypothesis-formation or conclusion-drawing if really you are treating them as fancy measuring devices.

3. What’s the relationship between the scientists and the non-scientists in this project? What consequences will it have for relationships between scientists and the public more generally?

There’s a big difference between involving members of the public in your project because it will be enriching for them personally and involving them because it’s the only conceivable way to build a particular piece of knowledge you’re trying to build.

Being clear about the relationship upfront — here’s why we need you, here’s what you can expect in return (both the potential benefits of participation and the potential risks) — is the best way to make sure everyone’s interests are well-served by the partnership and that no one is being deceived.

Things can get complicated, though, when you pull the focus back from how participants are involved in building the knowledge and consider how that knowledge might be used.

Will the new knowledge primarily benefit the scientists leading the project, adding publications to their CVs and helping them make the case for funding for further projects? Could the new knowledge contribute to our understanding (of ecosystems, or human health, for example) in ways that will drive useful interventions? Will those interventions be driven by policy-makers or commercial interests? Will the scientists be a part of this discussion of how the knowledge gets used? Will the members of the public (either those who participated in the project or members of the public more generally) be a part of this discussion — and will their views be taken seriously?

To the extent that participating in a citizen science project, whatever shape that participation may take, can influence non-scientists’ views of science and the scientific community as a whole, the interactions between scientists and volunteers in and around these projects are hugely important. They are an opportunity for people with different interests, different levels of expertise, and different values to find common ground while working together toward a shared goal — to communicate honestly, deal with each other fairly, and take each other seriously.

More such ethical engagement between scientists and publics would be a good thing.

But the flip-side is that engagements between scientists and publics that aren’t as honest or respectful as they should be may have serious negative impacts beyond the particular participants in a given citizen science project. They may make healthy engagement, trust, and accountability harder for scientists and publics across the board.

In other words, working hard to do it right is pretty important.

I may have more to say about this after the conference. In the meantime, you can add your questions or comments to the session discussion forum.

Professors, we need you to do more!

…though we can’t be bothered to notice all the work you’re already doing, to acknowledge the ways in which the explicit and implicit conditions of your employment make that work extremely difficult, or to recognize the ways in which other cultural forces, including the pronouncements of New York Times columnists, make the “more” we’re exhorting you to do even harder by alienating the public you’re meant to help from both “academics” and “intellectuals”.

In his column in the New York Times, Nicholas Kristof asserts that most university professors “just don’t matter in today’s great debates,” claiming that instead of stepping up to be public intellectuals, academics have marginalized themselves.

Despite what you may have heard in the school-yard or the op-ed pages, most of us who become university professors (even in philosophy) don’t do so to cloister ourselves from the real world and its cares. We do not become academics to sideline ourselves from public debates nor to marginalize ourselves.

So, as you might guess, I have a few things to say to Mr. Kristof here.

Among other things, Kristof wants professors to do more to engage the public. He writes:

Professors today have a growing number of tools available to educate the public, from online courses to blogs to social media. Yet academics have been slow to cast pearls through Twitter and Facebook.

A quick examination of the work landscape of a professor might shed some light on this slowness.

Our work responsibilities — and the activities on which we are evaluated for retention, tenure, and promotion — can generally be broken into three categories:

  • Research, the building of new knowledge in a discipline as recognized by peers in that discipline (e.g., via peer-review on the way to publication in a scholarly journal).
  • Teaching, the transmission of knowledge in a discipline (including strategies for building more knowledge) to students, whether those majoring in the discipline or studying it at the graduate level in order to become knowledge-builders themselves, or others taking courses to support their general education.
  • Service, generally cast as service to the discipline or service to the university, which often amounts to committee work, journal editing, and the like.

Research — the knowledge-building that academics do — is something Kristof casts as problematic:

academics seeking tenure must encode their insights [from research] into turgid prose. As a double protection against public consumption, this gobbledygook is then sometimes hidden in obscure journals — or published by university presses whose reputations for soporifics keep readers at a distance.

This ignores the academics who strive to write clearly and accessibly even when writing for an audience of their peers (not to mention the efforts of peer-reviewers to encourage more clear and accessible writing from the authors whose manuscripts they review). It also ignores the significant number of academics involved in efforts to bring the knowledge they build from behind the paywalls of closed-access journals to the public.

And it ignores that the current structures of retention, tenure, and promotion, of hiring, and of grant-awarding keep score with metrics like impact factors that entrench the primacy of a conversation in the pages of peer-reviewed journals while rendering other conversations objectively worthless — at least from the point of view of the evaluation on which one’s academic career flourishes or founders.

A bit earlier in the column, Kristof includes a quote from Middle East specialist Will McCants that makes this point:

If the sine qua non for academic success is peer-reviewed publications, then academics who “waste their time” writing for the masses will be penalized.

Yet even as Kristof notes that those trying to rebel against the reward system built into the tenure process “are too often crushed or driven away,” he seems to miss that exhorting academics to rebel against it anyway sounds like bad advice.

This is especially true in a world where academics lucky enough to have tenure-track jobs are keenly aware of the “excess PhDs” caught in the eternal cycle of postdoctoral appointments or conscripted into the army of adjuncts. Verily, there are throngs of people with the education, the intelligence, and the skills to be public intellectuals who are nonetheless scraping by on low pay, oppressively long hours, and the kind of deep uncertainty that comes with a job that is “temporary” by design.

If the public needs professors to share their knowledge more directly, Nicholas Kristof, please explain how they can do so without paying a high professional price. Where are the additional hours in the academic day for the “public intellectual” labor you want them to do (since they will still be expected to participate fully in the knowledge-building and discourse within their disciplinary communities)? How will you encourage more professors to step up after the first wave taking your marching orders is denied tenure, or denied grants, or collapses from exhaustion?

More explicit professional recognition — professional credit — for academics engaging with the public would be a good thing. But to make it happen in a sustainable way, you need a plan. And getting buy-in from the administrators who shape and enforce the current systems of professional rewards and punishments makes more sense than exhorting the professors subject to that system to ignore the punishments they’re likely to face — especially at a moment when there are throngs of new and seasoned Ph.D.s available to replace the professors who run afoul of the system as it stands.

Kristof doesn’t say much about teaching in his column, though this is arguably a place where academics regularly do outreach to the segment of the public that shows up in the classroom. Given how few undergraduates go on to be academics themselves, this opportunity for engagement can be significant. Increasingly, though, we university teachers are micromanaged and “assessed” by administrators and committees in response to free-floating anxiety about educational quality and pressure to bring “No Child Left Behind”-style oversight and high-stakes testing to higher ed. Does this increase our ability to put knowledge and insights from our discipline into real-world contexts that matter to our students — that help them broaden their understanding of the challenges that face us individually and collectively, and of different disciplinary strategies for facing them, not just to serve their future employers’ goals, but to serve their own? In my experience, it does not.

Again, if Kristof wants better engagement between academics and the public — which, presumably, includes the students who show up in the classroom and will, in their post-college lives, be part of the public — he might get better results by casting some light on the forces that derail engagement in college teaching.

Despite all these challenges, the fact is that many academics are already engaging the public. However, Nicholas Kristof seems not to have noticed this. He writes:

Professors today have a growing number of tools available to educate the public, from online courses to blogs to social media. Yet academics have been slow to cast pearls through Twitter and Facebook.

The academics who have been regularly engaging with the public on Facebook and Twitter and G+ and YouTube and blogs and podcasts — many of us for years — would beg to differ with this assessment. Check out the #EngagedAcademics hashtag for a sampling of the response.

There are also academics writing for mass-circulation publications, whether online or in dead-tree form, working at science festivals and science fairs, going into elementary and secondary school classrooms, hosting or participating in local events like Café Scientifique or Socrates Café, appearing on radio and TV programs, writing letters to the editors of their local papers, and going to town council and school board meetings.

Either all of this sort of engagement is invisible to Nicholas Kristof, or he thinks it doesn’t really count towards the work of being a public intellectual.

I wonder if this is because Kristof has in mind public intellectuals who have a huge reach and an immediate impact. If so, it would be good to ask who controls the microphone and why the academics from whom Kristof wants more aren’t invited to use it. It should be noted here that the New York Times, where Kristof has a regular column, is a pretty big microphone.

It’s also worth asking whether there’s good (empirical) reason to believe that one-to-many communication by academics who do have access to a big microphone serves the needs of the public better than smaller-scale communications (some of them one-to-one) in which academics are not just professing their knowledge to members of the public but actually listening to them to find out what they want to know and what they care about. Given what seems to be a persistent attitude of suspicion and alienation from “intellectuals” among members of the public, engagement on a human level strikes me as likely to feel less manipulative — and to be less manipulative.

Maybe Nicholas Kristof has a plan to dispel the public’s reflexive distrust of academics. If so, I trust he’ll lay it out in a column in the not-so-distant future.

I don’t think Kristof is wrong that the public could benefit from engagement with professors, but asserting that we need more while ignoring the conditions that discourage such engagement — and while ignoring the work of the many academics who are engaging the public — is not particularly helpful. Moreover, it seems to put the burden on professors to step up and do more while losing sight of the fact that engagement requires active participation on both sides.

Professors cannot simply proclaim what they know and assume that the public will automatically absorb that knowledge and, armed with it, act accordingly. It would be somewhat horrifying (for academics and the public alike) if engagement worked that way.

Academics and members of the public are sharing a world. Having various kinds of reliable knowledge about the world is good, as is sharing that knowledge and putting it into useful context, but this is never enough to determine just what we should do with that knowledge. We need to work out, together, our shared interests and goals.

Academics must be part of this discussion, but if other members of the public aren’t willing to engage, it probably doesn’t matter if more professors come to the table.

* * * * *
It should go without saying, but I will say it here anyway, that there are plenty of people who are not professors or academics engaging the public in meaningful ways that should make us recognize them as “public intellectuals” too. My focus here has been on professors since they are the focus of Kristof’s column.

What do I owe society for my scientific training? Obligations of scientists (part 6)

One of the dangers of thinking hard about your obligations is that you may discover one that you’ve fallen down on. As we continue our discussion of the obligations of scientists, I put myself under the microscope and invite you to consider whether I’ve incurred a debt to society that I have failed to pay back.

In the last post in this series, we discussed the claim that those in our society with scientific training have a positive duty to conduct scientific research in order to build new scientific knowledge. The source of that putative duty is two-fold. On the one hand, it’s a duty that flows from the scientist’s abilities in the face of societal needs: if people trained to build new scientific knowledge won’t build the new scientific knowledge needed to address pressing problems (like how to feed the world, or hold off climate change, or keep us all from dying from infectious diseases, or what have you), we’re in trouble. On the other hand, it’s a duty that flows from the societal investment that nurtures the development of these special scientific abilities: in the U.S., it’s essentially impossible to get scientific training at the Ph.D. level that isn’t subsidized by public funding. Public funding is used to support the training of scientists because the public expects a return on that investment in the form of grown-up scientists building knowledge which will benefit the public in some way. By this logic, people who take advantage of that heavily subsidized scientific training but don’t go on to build scientific knowledge when they are fully trained are falling down on their obligation to society.

People like me.

From September 1989 through December 1993, I was in a Ph.D. program in chemistry. (My Ph.D. was conferred January 1994.)

As part of this program, I was enrolled in graduate coursework (two chemistry courses per quarter for my first year, plus another chemistry course and three math courses, for fun, during my second year). I didn’t pay a dime for any of this coursework (beyond buying textbooks and binder paper and writing implements). Instead, tuition was fully covered by my graduate tuition stipend (which also covered “units” in research, teaching, and department seminar that weren’t really classes but appeared on our transcripts as if they were). Indeed, beyond the tuition reimbursement I was paid a monthly stipend of $1000, which seemed like a lot of money at the time (despite the fact that more than a third of it went right to rent).

I was also immersed in a research lab from January 1990 onward. Working in this lab was the heart of my training as a chemist. I was given a project to start with — a set of empirical questions to try to answer about a far-from-equilibrium chemical system that one of the recently-graduated students before me had been studying. I had to digest a significant chunk of experimental and theoretical literature to grasp why the questions mattered and what the experimental challenges in answering them might be. I had to assess the performance of the experimental equipment we had on hand, spend hours with calibrations, read a bunch of technical manuals, disassemble and reassemble pumps, write code to drive the apparatus and to collect data, and identify experimental constraints that were important to control (and that, strangely, were not identified as such in the experimental papers I was working from). And, when I determined that the chemical system I had started with was much too fussy to study with the equipment the lab could afford, I had to identify a different chemical system I could use to answer similar questions and persuade my advisor to approve the new plan.

In short, my time in the lab had me learning how to build new knowledge (in a particular corner of physical chemistry) by actually building new knowledge. The earliest stages of my training had me juggling the immersion into research with my own coursework and with teaching undergraduate chemistry students as a lab instructor and teaching assistant. Some weeks, this meant I was learning less about how to make new scientific knowledge than about how to tackle my problem sets or how to explain buffers to pre-meds. Past the first year of the program, though, my waking hours were dominated by getting experiments designed, collecting loads of data, and figuring out what it meant. There were significant stretches of time during which I got into the lab by 5 AM and didn’t leave until 8 or 9 PM, and the weekend days when I didn’t go into the lab were usually consumed with coding, catching up on relevant literature, or drafting manuscripts or thesis chapters.

Once, for fun, some of us grad students did a back-of-the-envelope calculation of our hourly wages. It was remarkably close to the minimum wage I had been paid as a high school student in 1985. Still, we were getting world-class scientific training, for free! We paid with the sweat of our brows, but wouldn’t we have to put in that time and effort to learn how to make scientific knowledge anyway? Sure, we graduate students did the lion’s share of the hands-on teaching of undergraduates in our chemistry department (undergraduates who were paying a significant tuition bill), but we were learning, from some of the best scientists in the world, how to be scientists!

Having gotten what amounts to a full-ride for that graduate training, due in significant part to public investment in scientific training at the Ph.D. level, shouldn’t I be hunkered down somewhere working to build more chemical knowledge to pay off my debt to society?

Do I have any good defense to offer for the fact that I’m not building chemical knowledge?

For the record, when I embarked on Ph.D. training in chemistry, I fully expected to be an academic chemist when I grew up. I really did imagine that I’d have a long career building chemical knowledge, training new chemists, and teaching chemistry to an audience that included some future scientists and some students who would go on to do other things but who might benefit from a better understanding of chemistry. Indeed, when I was applying to graduate programs, my chemistry professors were talking up the “critical shortage” of Ph.D. chemists. (By January of my first year in graduate school, I was reading reports that there were actually something like 30% more Ph.D. chemists than there were jobs for Ph.D. chemists, but a first-year grad student is not necessarily freaking out about the job market while she is wrestling with her experimental system.) I did not embark on a chemistry Ph.D. as a collectable. I did not set out to be a dilettante.

In the course of the research that was part of my Ph.D. training, I actually built some new knowledge and shared it with the public, at least to the extent of publishing it in journal articles (four of them, an average of one per year). It’s not clear what the balance sheet would say about this rate of return on the public’s investment in my scientific training, nor whether most taxpayers would judge the knowledge I built (about the dynamics of far-from-equilibrium chemical reactions and about ways to devise useful empirical tests of proposed reaction mechanisms) to be useful knowledge.

Then again, no part of how our research was evaluated in grad school was framed in terms of societal utility. You might try to describe how your research had broader implications that someone outside your immediate subfield could appreciate if you were writing a grant to get the research funded, but solving society’s pressing scientific problems was not the sine qua non of the research agendas we were advancing for our advisors or developing for ourselves.

As my training was teaching me how to conduct serious research in physical chemistry, it was also helping me to discover that my temperament was maybe not so well suited to life as a researcher in physical chemistry. I found, as I was struggling with a grant application that asked me to describe the research agenda I expected to pursue as an academic chemist, that the questions that kept me up at night were not fundamentally questions about chemistry. I learned that no part of me was terribly interested in the amount of grant-writing and lab administration that would have been required of me as a principal investigator. Looking at the few women training me at the Ph.D. level, I surmised that I might have to delay or skip having kids altogether to survive academic chemistry — and that the competition for those faculty jobs where I’d be able to do research and build new knowledge was quite fierce.

Plausibly, had I been serious about living up to my obligation to build new knowledge by conducting research, I could have been a chemist in industry. As I was finishing up my Ph.D., the competition for industry jobs for physical chemists like me was also pretty intense. What I gathered as I researched and applied for industry jobs was that I didn’t really like the culture of industry. And, while working in industry would have been a way for me to conduct research and build new knowledge, I might have ended up spending more time solving the shareholders’ problems than solving society’s problems.

If I wasn’t going to do chemical research in an academic career and I wasn’t going to do chemical research in an industrial job, how should I pay society back for the publicly-supported scientific training I received? Should I be building new scientific knowledge on my own time, in my own garage, until I’ve built enough that the debt is settled? How much new knowledge would that take?

The fact is, none of us Ph.D. students seemed to know at the time that public money was making it possible for us to get graduate training in chemistry without paying for it. Nor was there an explicit contract we were asked to sign as we took advantage of this public support, agreeing to work, upon completion of our degrees, for a certain number of years as chemists serving the public’s interests. Rather, I think most of us saw an opportunity to pursue a subject we loved and to get the preparation we would need to become principal investigators in academia or industry if we decided to pursue those career paths. Most of us probably didn’t know enough about what those career paths would be like to have told you at the beginning of our Ph.D. training whether they would suit our talents or temperaments — that was part of what we were trying to find out by pursuing graduate studies. And practically speaking, many of us would not have been able to find out if we had had to pay the costs of our Ph.D. training ourselves.

If no one who received scientific training subsidized by the public went on to build new scientific knowledge, this would surely be a problem for society. But, do we want to say that everyone who receives such subsidized training is on the hook to pay society back by building new scientific knowledge until such time as society has all the scientific knowledge it needs?

That strikes me as too strong. However, given that I’ve benefitted directly from a societal investment in Ph.D. training that, for all practical purposes, I stopped using in 1994, I’m probably not in a good position to make an objective judgment about just what I do owe society to pay back this debt. Have I paid it back already? Is society within its rights to ask more of me?

Here, I’ve thought about the scientist’s debt to society — my debt to society — in very personal terms. In the next post in the series, we’ll revisit these questions on a slightly larger scale, looking at populations of scientists interacting with the larger society and seeing what this does to our understanding of the obligations of scientists.
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

On speaking up when someone in your profession behaves unethically.

On Twitter recently there was some discussion of a journalist who wrote and published a piece that arguably did serious harm to its subject.

As the conversation unfolded, Kelly Hills helpfully dropped a link to the Society of Professional Journalists Code of Ethics. Even cursory inspection of this code made it quite clear that the journalist (and editor, and publisher) involved in the harmful story weren’t just making decisions that happened to turn out badly. Rather, they were acting in ways that violate the ethical standards for the journalistic profession articulated in this code.

One take-away lesson from this is that being aware of these ethical standards and letting them guide one’s work as a journalist could head off a great deal of harm.

Something else that came up in the discussion, though, was what seemed like a relative dearth of journalists standing up to challenge the unethical conduct of the journalist (and editor, and publisher) in question. Edited to add: A significant number of journalists even used social media to give the problematic piece accolades.

I follow a lot of journalists on Twitter. A handful of them condemned the unethical behavior in this case. The rest may be busy with things offline. It is worth noting that the Society of Professional Journalists Code of Ethics includes the following:

Journalists should:

  • Clarify and explain news coverage and invite dialogue with the public over journalistic conduct.
  • Encourage the public to voice grievances against the news media.
  • Admit mistakes and correct them promptly.
  • Expose unethical practices of journalists and the news media.
  • Abide by the same high standards to which they hold others.

That fourth bullet-point doesn’t quite say that journalists ought to call out bad journalistic behavior that has already been exposed by others. However, using one’s voice to condemn unethical conduct when you see it is one of the ways that people know that you’re committed to ethical conduct. (The other way people know you’re committed to ethical conduct is that you conduct yourself ethically.)

In a world where the larger public is probably going to take your professional tribe as a package deal, extending trust to the lot of you or feeling mistrust for the lot of you, reliably speaking up about problematic conduct when you see it is vital in earning the public’s trust. Moreover, criticisms from inside the professional community seem much more likely to be effective in persuading its members to embrace ethical conduct than criticisms from outside the profession. It’s just too easy for people on the inside to dismiss the critique from people on the outside with, “They just don’t understand what we do.”

There’s a connection here between what’s good for the professional community of journalists and what’s good for the professional community of scientists.

When scientists behave unethically, other scientists need to call them out — not just because the unethical behavior harms the integrity of the scientific record or the opportunities of particular members of the scientific community to flourish, or the health or safety of patients, but because this is how members of the community teetering on the brink of questionable decisions remember that the community does not tolerate such behavior. This is how they remember that those codes of conduct are not just empty words. This is how they remember that their professional peers expect them to act with integrity every single day.

If members of a professional community are not willing to demand ethical behavior from each other in this way, how can the public be expected to trust that professional community to behave ethically?

Undoubtedly, there are situations that can make it harder to take a stand against unethical behavior in your professional community, power disparities that can make calling out the bad behavior dangerous to your own standing in the professional community. As well, shared membership in a professional community creates a situation where you’re inclined to give your fellow professional the benefit of the doubt rather than starting from a place of distrust in your engagements.

But if only a handful of voices in your professional community are raised to call out problematic behavior that the public has identified and is taking very seriously, what does that communicate to the public?

Maybe that you see the behavior, don’t think it’s problematic, but can’t be bothered to explain why it’s not problematic (because the public’s concerns just don’t matter to you).

Maybe that you see the behavior, recognize that it’s problematic, but don’t actually care that much when it happens (and if the public is concerned about it, that’s their problem, not yours).

Maybe that you’re working very hard not to see the problematic behavior (which, in this case, probably means you’re also working very hard not to hear the public voicing its concerns).

Sure, there’s a possibility that you’re working very hard within your professional community to address the problematic behavior and make sure it doesn’t happen again, but if the public doesn’t see evidence of these efforts, it’s unreasonable to expect them to know they’re happening.

It’s hard for me to see how the public’s trust in a profession is supposed to be strengthened by people in the professional community not speaking out against unethical conduct of members of that professional community that the public already knows about. Indeed, I think a profession that only calls out bad behavior in its ranks that the public already knows about is skating on pretty thin ice.

It surely feels desperately unfair to all the members of a professional community working hard to conduct themselves ethically when the public judges the whole profession on the basis of the bad behavior of a handful of its members. One may be tempted to protest, “We’re not all like that!” That’s not really addressing the public’s complaint, though: The public sees at least one of you who’s “like that”; what are the rest of you doing about that?

If the public has good reason to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks, the public is more likely to see the bad actors as outliers.

But the public is more likely to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks when they see that happen, regularly.

Nature and trust.

Here are some things that I know:

Nature is a high-impact scientific journal that is widely read in the scientific community.

The editorial mechanisms Nature employs are meant to ensure the quality of the publication.

Reports of scientific research submitted to Nature undergo peer review (as do manuscripts submitted to other scholarly scientific journals). As well, Nature publishes items that are not peer-reviewed — for example, news pieces and letters to the editor. Nonetheless, the pieces published in Nature that don’t undergo peer review are subjected to editorial oversight.

Our human mechanisms for ensuring the quality of items that are published are not perfect. Peer reviewers sometimes get fooled. Editors sometimes make judgments that, in retrospect, they would not endorse.

The typical non-scientist who knows about journals like Nature is in the position of being generally trusting that peer review and editorial processes do the job of ensuring the high quality of the contents of these journals, or of being generally distrusting. Moreover, my guess is that the typical non-scientist, innocent of the division of labor on the vast editorial teams employed by journals like Nature, takes for granted that the various items published in such journals reflect sound science — or, at the very least, do not put forward claims that are clearly at odds with the body of existing scientific research.

Non-scientists, in other words, are trusting that the editorial processes at work in a journal like Nature produce a kind of conversation within the scientific community, one that weeds out stuff scientists would recognize as nonsense.

This trust is important because non-scientists do not have the same ability to identify and weed out nonsense. Nature is a kind of scientific gatekeeper for the larger public.

This trust is also something that can be played — for example, by a non-expert with an agenda who manages to get a letter published in a journal like Nature. While such correspondence may not impress a scientist, a “publication in Nature” of this sort may be taken as credible by non-scientists on the basis of the trust they have that such a well-known scientific journal must have editorial processes that reliably weed out nonsense.

In a world where we divide the cognitive labor this way, where non-scientists need to trust scientists to build reliable knowledge and organs of scientific communication to weed out nonsense, the stakes are very high for the scientists and the organs of scientific communication to live up to that trust — to get it right most of the time, and to be transparent enough about their processes that when they don’t get it right it’s reasonably easy to diagnose what went wrong and to fix it.

Otherwise, scientists and the organs of scientific communication risk losing the trust of non-scientists.

I’ve been thinking about this balance of trust and accountability in the context of a letter that was published in Nature asserting, essentially, that the underrepresentation of women as authors and peer reviewers in Nature is no kind of problem, because male scientists have merit and women scientists have child care obligations.

Kelly Hills has a clear and thorough explanation of what made publishing this particular letter problematic. It’s not just that the assertions of the letter writer are not supported by the research (examples of which Kelly helpfully links). It’s not just that there’s every reason to believe that the letter writer will try to spin the publication of his letter in Nature as reason to give his views more credence.

It’s also that the decision to publish this letter suggests the question of women’s ability to do good science is a matter of legitimate debate.

In the discussion of this letter on Twitter, I saw the suggestion that the letter was selected for publication because it was representative of a view that had been communicated by many correspondents to Nature.

In a journal that the larger public takes to be a source of views that are scientifically sound, or at least scientifically plausible (rather than at odds with a growing body of empirical research), the mere fact that many people have expressed a view in letters strikes me as insufficient reason to publish it. I suspect that if a flurry of letters were to arrive asserting that the earth is stationary in the center of the universe, or that the earth is flat, the editorial staff in charge of correspondence wouldn’t feel the need to publish letters conveying these views — especially if the letters came from people without scientific training or active involvement in scientific work of some sort. I’d even be willing to make a modest bet that Nature regularly gets a significant amount of correspondence communicating crackpot theories of one sort or another. (I’m not running a major organ of scientific communication and I regularly get a significant amount of correspondence communicating crackpot theories of one sort or another.) Yet these crackpot theories do not regularly populate Nature’s “Correspondence” page.

In response to the objections raised to the publication of this letter, the Nature Editorial staff posted this comment:

Nature has a strong history of supporting women in science and of reflecting the views of the community in our pages, including Correspondence. Our Correspondence pages do not reflect the views of the journal or its editors; they reflect the views only of the correspondents.

We do not endorse the views expressed in this Correspondence (or indeed any Correspondences unless we explicitly say so). On re-examining the letter and the process, we consider that it adds no value to the discussion and unnecessarily inflames it, that it did not receive adequate editorial attention, and that we should not have published it, for which we apologize. This note will appear online in the notes section of the Correspondence and in the Correspondence’s pdf.

Nature’s own positive views and engagement in the issues concerning women in science are represented by our special issue from 2013:
Philip Campbell, Editor-in-Chief, Nature


I think this editorial pivot is a wise one. The letter in question may have represented a view many people have, but it didn’t offer any new facts or novel insight. And it’s not like women in science don’t know that they are fighting against biases — even biases in their own heads — every single day. They didn’t need to read a letter from some guy in Nature to become aware of this bit of their professional terrain.

So, the apology is good. But it is likely insufficient.

At this point, Nature may also have some trust to rebuild with women, whether those women are members of the scientific community or members of the larger public. While it is true that Nature devoted a special issue to challenges faced by women in science, it also gave the editorial green light to a piece of “science fiction” that reinforced, rather than challenged, the gendered assumptions that make it harder for women in science.

And yes, we understand that different editors oversee the peer-reviewed reports of scientific research and the news items, the correspondence and the short fiction. But our view of organizations — our trust of organizations — tends to bundle these separate units together. This is pretty unavoidable unless we personally know each of the editors in each of the units (and even personal acquaintance doesn’t mean our trust is indestructible).

All of which is to say: as an organization, Nature still has some work to do to win back the trust of women (and others) who cannot think of the special issue on women in science without also thinking of “Womanspace” or the letter arguing that underrepresentation of women in Nature’s pages is just evidence of a meritocracy working as it should.

It would be nice to trust that Nature’s editorial processes will go forth and get it right from here on out, but we don’t want to be played for fools. As well, we may have to do additional labor going forward cleaning up the fallout from this letter in public discourses on women in science when we already had plenty of work to do in that zone.

This is a moment where Nature may want women scientists to feel warmly toward the journal, to focus on the good times as representative of where Nature really stands, but trust is something that is rebuilt, or eroded, over iterated engagements every single day.

Trust can’t be demanded. Trust is earned.

Given the role Nature plays in scientific communications and in the communication of science to a broader public, I’m hopeful the editorial staff is ready to do the hard work to earn that trust — from scientists and non-scientists alike — going forward.

* * * * *
Related posts:

Hope Jahren, Why I Turned Down a Q-and-A in Nature Magazine

Anne Jefferson, Megaphones, broken records and the problem with institutional amplification of sexism and racism

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

If you’re a scientist, are there certain things you’re obligated to do for society (not just for your employer)? If so, where does this obligation come from?

This is part of the discussion we started back in September about special duties or obligations scientists might have to the non-scientists with whom they share a world. If you’re just coming to the discussion now, you might want to check out the post where we set out some groundwork for the discussion, plus the three posts on scientists’ negative duties (i.e., the things scientists have an obligation not to do): our consideration of powers that scientists have and should not misuse, our discussion of scientific misconduct, the high crimes against science that scientists should never commit, and our examination of how plagiarism is not only unfair but also hazardous to knowledge-building.

In this post, finally, we lay out some of the positive duties that scientists might have.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette gives a pretty forceful articulation of a set of positive duties for scientists. She asserts that scientists have a duty to do research, and a duty to use research findings in ways that serve the public good. Recall that these positive duties are in addition to scientists’ negative duty to ensure that the knowledge and technologies created by the research do not harm anyone.

Where do scientists’ special duties come from? Shrader-Frechette identifies a number of sources. For one thing, she says, there are obligations that arise from holding a monopoly on certain kinds of knowledge and services. Scientists are the ones in society who know how to work the electron microscopes and atom-smashers. They’re the ones who have the equipment and skills to build scientific knowledge. Such knowledge is not the kind of thing your average non-scientist could build for himself.

Scientists also have obligations that arise from the fact that they have a good chance of success (at least, better than anyone else) when it comes to educating the public about scientific matters or influencing public policy. The scientists who track the evidence that human activity leads to climate change, for example, are the ones who might be able to explain that evidence to the public and argue persuasively for measures that are predicted to slow climate change.

As well, scientists have duties that arise from the needs of the public. If the public’s pressing needs can only be met with the knowledge and technologies produced by scientific research – and if non-scientists cannot produce such knowledge and technologies themselves – then if scientists do no work to meet these needs, who can?

As we’ve noted before, there is, in all of this, that Spiderman superhero ethos: with great power comes great responsibility. When scientists realize how much power their knowledge and skills give them relative to the non-scientists in society, they begin to see that their duties are greater than they might have thought.

Let’s turn to what I take to be Shrader-Frechette’s more controversial claim: that scientists have a positive duty to conduct research. Where does this obligation come from?

For one thing, she argues, knowledge itself is valuable, especially in democratic societies where it could presumably help us make better choices than we’d be able to make with less knowledge. Thus, those who can produce knowledge should produce it.

For another thing, Shrader-Frechette points out, society funds research projects (through various granting agencies and direct funding from governmental entities). Researchers who accept such research funding are not free to abstain from research. They can’t take the grants and put an addition on the house. Rather, they are obligated to perform the contracted research. This argument is pretty uncontroversial, I think, since asking for money to do the research that will lead to more scientific knowledge and then failing to use that money to build more scientific knowledge is deceptive.

But here’s the argument that I think will meet with more resistance, at least from scientists: In the U.S., in addition to funding particular pieces of scientific research, society pays the bill for training scientists. This is not just true for scientists trained at public colleges and universities. Even private universities get a huge chunk of their money to fund research projects, research infrastructure, and the scientific training they give their students from public sources, including but not limited to federal funding agencies like the National Science Foundation and the National Institutes of Health.

The American people are not putting up this funding out of the goodness of their hearts. Rather, the public invests in the training of scientists because it expects a return on this investment in the form of the vital knowledge those trained scientists go on to produce and share with the public. Since the public pays to train people who can build scientific knowledge, the people who receive this training have a duty to go forth and build scientific knowledge to benefit the public.

Finally, Shrader-Frechette says, scientists have a duty to do research because if they don’t do research regularly, they won’t remain knowledgeable in their field. Not only will they not be up on the most recent discoveries or what they mean, but they will start to lose the crucial experimental and analytic skills they developed when they were being trained as scientists. For the philosophy fans in the audience, this point in Shrader-Frechette’s argument is reminiscent of Immanuel Kant’s example of how the man who prefers not to cultivate his talents is falling down on his duties. If everyone in society chose not to cultivate her talents, each of us would need to be completely self-sufficient (since we could not receive aid from others exercising their talents on our behalf) – and even that would not be enough, since we would not be able to rely on our own talents, having decided not to cultivate them.

On the basis of Shrader-Frechette’s argument, it sounds like every member of society who has had the advantage of scientific training (paid for by your tax dollars and mine) should be working away in the scientific knowledge salt-mine, at least until science has built all the knowledge society needs it to build.

And here’s where I put my own neck on the line: I earned a Ph.D. in chemistry (conferred in January 1994, almost exactly 20 years ago). Like other students in U.S. Ph.D. programs in chemistry, I did not pay for that scientific training. Rather, as Shrader-Frechette points out, my scientific training was heavily subsidized by the American taxpayer. I have not built a bit of new chemical knowledge since the middle of 1994 (since I wrapped up one more project after completing my Ph.D.).

Have I fallen down on my positive duties as a trained scientist? Would it be fair for American tax payers to try to recover the funds they invested in my scientific training?

We’ll take up these questions (among others) in the next installment of this series. Stay tuned!

Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

In the last post, we discussed why fabrication and falsification are harmful to scientific knowledge-building. The short version is that if you’re trying to build a body of reliable knowledge about the world, making stuff up (rather than, say, making careful observations of that world and reporting those observations accurately) tends not to get you closer to that goal.

Along with fabrication and falsification, plagiarism is widely recognized as a high crime against the project of science, but the explanations for why it’s harmful generally make it look like a different kind of crime than fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

Pimple is claiming that plagiarism is not an offense that undermines the knowledge-building project of science per se. Rather, the crime is in depriving other scientists of the reward they are due for participating in this knowledge-building project. In other words, Pimple says that plagiarism is problematic not because it is dishonest, but rather because it is unfair.

While I think Pimple is right to identify an additional component of responsible conduct of science besides honesty, namely, a certain kind of fairness to one’s fellow scientists, I also think this analysis of plagiarism misses an important way in which misrepresenting the source of words, ideas, methods, or results can undermine the knowledge-building project of science.

On the surface, plagiarism, while potentially nasty to the person whose report is being stolen, might seem not to undermine the scientific community’s evaluation of the phenomena. We are still, after all, bringing together and comparing a number of different observation reports to determine the stable features of our experience of the phenomenon. But this comparison often involves a dialogue as well. As part of the knowledge-building project, from the earliest planning of their experiments to well after results are published, scientists are engaged in asking and answering questions about the details of the experience and of the conditions under which the phenomenon was observed.

Misrepresenting someone else’s honest observation report as one’s own strips the report of accurate information for such a dialogue. It’s hard to answer questions about the little, seemingly insignificant experimental details of an experiment you didn’t actually do, or to refine a description of an experience someone else had. Moreover, such a misrepresentation further undermines the process of building more objective knowledge by failing to contribute the actual insight of the scientist who appears to be contributing his own view but is actually contributing someone else’s. And while it may appear that a significant number of scientists are marshaling their resources to understand a particular phenomenon, if some of those scientists are plagiarists, there are fewer scientists actually grappling with the problem than it would appear.

In such circumstances, we know less than we think we do.

Given the intersubjective route to objective knowledge, failing to really weigh in to the dialogue may end up leaving certain of the subjective biases of others in place in the collective “knowledge” that results.

Objective knowledge is produced when the scientific community’s members work with each other to screen out subjective biases. This means the sort of honesty required for good science goes beyond the accurate reporting of what has been observed and under what conditions. Because each individual report is shaped by the individual’s perspective, objective scientific knowledge also depends on honesty about the individual agency actually involved in making the observations. Thus, plagiarism, which often strikes scientists as less of a threat to scientific knowledge (and more of an instance of “being a jerk”), may pose just as much of a threat to the project of producing objective scientific knowledge as outright fabrication.

What I’m arguing here is that plagiarism is a species of dishonesty that can undermine the knowledge-building project of science in a direct way. Even if what has been lifted by the plagiarist is “accurate” from the point of view of the person who actually collected or analyzed the data or drew conclusions from it, separating this contribution from its true author means it doesn’t function the same way in the ongoing scientific dialogue.

In the next post, we’ll continue our discussion of the duties of scientists by looking at what the positive duties of scientists might be, and by examining the sources of these duties.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was ambiguous enough that it shouldn’t be part of an official misconduct definition. Maybe, the panel worried, “serious deviations from accepted research practices” might be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances where authorities operating under definitions of scientific misconduct containing an “other serious deviations” clause could be argued to have taken advantage of the ambiguity of the clause to go after a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General in the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to receive the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was for them to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of “accepted practices” is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar discussion on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

Buzzelli, D. E. (1993). The definition of misconduct in science: a view from NSF. Science, 259(5095), 584-585, 647-648.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.

Join Virtually Speaking Science for a conversation about sexism in science and science journalism.

Today at 5 P.M. Eastern/2 P.M. Pacific, I’ll be on Virtually Speaking Science with Maryn McKenna and Tom Levenson to discuss sexual harassment, gender bias, and related issues in the world of science, science journalism, and online science communication. Listen live online or, if you have other stuff to do in that bit of spacetime, you can check out the archived recording later. If you do the Second Life thing, you can join us there at the Exploratorium and text in questions for us.

Tom has a nice post with some background to orient our conversation.

Here, I’m going to give you a few links that give you a taste of what I’ve been thinking about in preparation for this conversation, and then I’ll say a little about what I hope will come out of the conversation.

Geek Feminism Wiki Timeline of incidents from 2013 (includes tech and science blogosphere)

Danielle Lee’s story about the “urban whore” incident and Scientific American’s response to it.

Kate Clancy’s post on how Danielle Lee’s story and the revelations about former Scientific American blog editor Bora Zivkovic are connected to the rape-y Einstein bobble head video incident (with useful discussion of productive strategies for community response)

Andrew David Thaler’s post “On being an ally and being called out on your privilege”

A post I wrote with a link to research on implicit gender bias among science faculty at universities, wherein I point out that the empirical findings have some ethical implications if we’re committed to reducing gender bias

A short film exploring the pipeline problem for women in chemistry, “A Chemical Imbalance” (Transcript)

The most recent of Zuska’s excellent posts on the pipeline problem, “Rethinking the Normality of Attrition”

As far as I’m concerned, the point of our conversation is not to say science, or science journalism, or online science communication, has a bigger problem with sexual harassment or sexism or gender disparities than other professional communities or than the broader societies from which members of these professional communities are drawn. The issue, as far as I can tell, is that these smaller communities reproduce these problems from the broader society — but, they don’t need to. Recognizing that the problem exists — that we think we have merit-driven institutions, or that we’re better at being objective than the average Jo(e), but that the evidence indicates we’re not — is a crucial step on the way to fixing it.

I’m hopeful that we’ll be able to talk about more than individual incidents of sexism or harassment in our discussion. The individual incidents matter, but they don’t emerge fully formed from the hearts, minds, mouths, and hands of evil-doers. They are reflections of cultural influences we’re soaking in, of systems we have built.

Among other things, this suggests to me that any real change will require thinking hard about how to change systems rather than keeping our focus at the level of individuals. Recognizing that it will take more than good intentions and individual efforts to overcome things like unconscious bias in human interactions in the professional sphere (including but not limited to hiring decisions) would be a huge step forward.

Such progress will surely be hard, but I don’t think it’s impossible, and I suspect the effort would be worth it.

If you can, do listen (and watch). I’ll be sure to link the archived broadcast once that link is available.

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

In this post, we’re returning to a discussion we started back in September about whether scientists have special duties or obligations to society (or, if the notion of “society” seems too fuzzy and ill-defined to you, to the other people who are not scientists with whom they share a world) in virtue of being scientists.

You may recall that, in the post where we set out some groundwork for the discussion, I offered one reason you might think that scientists have duties that are importantly different from the duties of non-scientists:

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility.

What kind of special powers are we talking about? The power to build reliable knowledge about the world – and in particular, about phenomena and mechanisms in the world that are not so transparent to our everyday powers of observation and the everyday tools non-scientists have at their disposal for probing features of their world. On account of their training and experience, scientists are more likely to be able to set up experiments or conditions for observation that will help them figure out the cause of an outbreak of illness, or the robust patterns in global surface temperatures and the strength of their correlation with CO2 outputs from factories and farms, or whether a particular plan for energy generation is thermodynamically plausible. In addition, working scientists are more likely to have access to chemical reagents and modern lab equipment, to beamtimes at particle accelerators, to purpose-bred experimental animals, to populations of human subjects and institutional review boards for well-regulated clinical trials.

Scientists can build specialist knowledge that the rest of us (including scientists in other fields) cannot, and many of them have access to materials, tools, and social arrangements for use in their knowledge-building that the rest of us do not. That may fall short of a superpower, but we shouldn’t kid ourselves that this doesn’t represent significant power in our world.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette argues that these special abilities give rise to obligations for scientists. We can separate these into positive duties and negative duties. A positive duty is an obligation to actually do something (e.g., a duty to care for the hungry, a duty to tell the truth), while a negative duty is an obligation to refrain from doing something (e.g., a duty not to lie, a duty not to steal, a duty not to kill). There may well be context sensitivity in some of these duties (e.g., if it’s a matter of self-defense, your duty not to kill may be weakened), but you get the basic difference between the two flavors of duties.

Let’s start with ways scientists ought not to use their scientific powers. Since scientists have to share a world with everyone else, Shrader-Frechette argues that this puts some limits on the research they can do. She says that scientists shouldn’t do research that causes unjustified risks to people. Nor should they do research that violates informed consent of the human subjects who participate in the research. They should not do research that unjustly converts public resources to private profits. Nor should they do research that seriously jeopardizes environmental welfare. Finally, scientists should not do biased research.

One common theme in these prohibitions is the idea that knowledge in itself is not more important than the welfare of people. Given how focused scientific activity is on knowledge-building, this may be something about which scientists need to be reminded. For the people with whom scientists share a world, knowledge is valuable instrumentally – because people in society can benefit from it. What this means is that scientific knowledge-building that harms people more than it helps them, or that harms shared resources like the environment, is on balance a bad thing, not a good thing. This is not to say that the knowledge scientists are seeking should not be built at all. Rather, scientists need to find a way to build it without inflicting those harms – because it is their duty to avoid inflicting those harms.

Shrader-Frechette makes the observation that for research to be valuable at all to the broader public, it must be research that produces reliable knowledge. This is a big reason scientists should avoid conducting biased research. And, she notes that not doing certain research can also pose a risk to the public.

There’s another way scientists might use their powers against non-scientists that’s suggested by the Mertonian norm of disinterestedness, an “ought” scientists are supposed to feel pulling at them because of how they’ve been socialized as members of their scientific tribe. Because the scientific expert has knowledge and knowledge-building powers that the non-scientist does not, she could exploit the non-scientist’s ignorance or his tendency to trust the judgment of the expert. The scientist, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold — and arguably, this is the kind of thing that scientists ought to refrain from doing in their interactions with non-scientists.

The overall duties of the scientist, as Shrader-Frechette describes them, also include positive duties to do research and to use research findings in ways that serve the public good, as well as to ensure that the knowledge and technologies created by the research do not harm anyone. We’ll take up these positive duties in the next post in the series.
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.