Faith in rehabilitation (but not in official channels): how unethical behavior in science goes unreported.

Can a scientist who has behaved unethically be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?

In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with the offenders — especially when the offender turns out to be a close colleague or a trainee.

In the case of a hard line — one ethical strike and you’re out — what kind of decision does this force on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose behavior you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter over to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or do you try to address the transgression yourself, as a mentor, working through the actual circumstances of the ethical blunder, the better options your trainee should have recognized and pursued, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of escalating misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware of the earlier infractions, and perhaps even intervening in ways they hoped would be effective.

It’s the escalation of ethical violations over time that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest patterns of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

As it happens, administrative sanctions imposed by the Office of Research Integrity hardly ever include permanent removal from the community. Findings of scientific misconduct are much more likely to be punished with exclusion from federal funding for three, five, or ten years. Still, in an extremely competitive environment, with multitudes of scientists competing for scarce grant dollars and permanent jobs, even a three-year debarment may be enough to seriously derail a scientific career. The mentor making the call about whether to report a trainee’s unethical behavior may judge the likely fallout as enough to end the trainee’s career.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may go forward reading their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this private handling exacerbates the problem of unreported misconduct, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

_____________
This post is an updated version of an ancestor post on my other blog.

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

In the previous post in this series, we examined the question of what scientists who are trained with significant financial support from the public (which, in the U.S., means practically every scientist trained at the Ph.D. level) owe to the public providing that support. The focus there was personal: I was trained to be a physical chemist, free of charge due to the public’s investment, but I stopped making new scientific knowledge in 1994, shortly after my Ph.D. was conferred.

From a certain perspective, that makes me a deadbeat, a person who has fallen down on her obligations to society.

Maybe that perspective strikes you as perverse, but there are working scientists who seem to share it.

Consider this essay by cancer researcher Scott E. Kern raising the question of whether cancer researchers at Johns Hopkins who don’t come into the lab on a Sunday afternoon have lost sight of their obligations to people with cancer.

Kern wonders if scientists who manage to fit their laboratory research into the confines of a Monday-through-Friday work week might lack a real passion for scientific research. He muses that full weekend utilization of their modern cancer research facility might waste less money (in terms of facilities and overhead, salaries and benefits). He suggests that researchers who are not hard at work in the lab on a weekend are falling down on their moral duty to cure cancer as soon as humanly possible.

The unsupported assumptions in Kern’s piece are numerous (and far from novel). Do we know that having each research scientist spend more hours in the lab increases the rate of scientific returns? Or might there plausibly be a point of diminishing returns, where additional lab-hours produce no appreciable return? Where’s the economic calculation to consider the potential damage to the scientists from putting in 80 hours a week (to their cognitive powers, their health, their personal relationships, their experience of a life outside of work, maybe even their enthusiasm for science)? After all, lots of resources are invested in educating and training researchers — enough so that one wouldn’t want to damage those researchers on the basis of an (unsupported) hypothesis offered in the pages of Cancer Biology & Therapy.

And while Kern is doing economic calculations, he might want to consider the impact on facilities of research activity proceeding full-tilt, 24/7. Without some downtime, equipment and facilities might wear out faster than they would otherwise.

Nowhere does Kern consider the option of hiring more researchers to work 40-hour weeks, instead of pressing the existing research workforce into spending 60, 80, or 100 hours a week in the lab.

These researchers might still end up bringing work home (if they ever get a chance to go home).

Kern might dismiss this suggestion on purely economic grounds — organizations are more likely to want to pay for fewer employees (with benefits) who can work more hours than to pay to have the same number of hours of work done by more employees. He might also dismiss it on the basis that the people who really have the passion needed to do the research to cure cancer will not prioritize anything else in their lives above doing that research and finding that cure.

But one assumes passion of the sort Kern seems to have in mind would be the kind of thing that would drive researchers to the lab no matter what, even in the face of long hours, poor pay, grinding fatigue. If that is so, it’s not clear how the problem is solved by browbeating researchers without this passion into working more hours because they owe it to cancer patients. Indeed, Kern might consider, in light of the relative dearth of researchers with passion sufficient to fill the cancer research facilities on weekends, the necessity of making use of the research talents and efforts of people who don’t want to spend 60 hours a week in the lab. Kern’s piece suggests he’d have a preference for keeping such people out of the research ranks (despite the significant societal investment made in their scientific training), but by his own account there would hardly be enough researchers left in that case to keep research moving forward.

Might not these conditions prompt us to reconsider whether the received wisdom of scientific mentors is always so wise? Wouldn’t this be a reasonable place to reevaluate the strategy for accomplishing the grand scientific goal?

And Kern does not even consider a pertinent competing hypothesis, that people often have important insights into how to move research forward in the moments when they step back and allow their minds to wander. Perhaps less time away from one’s project means fewer of these insights — which, on its face, would be bad for the project of curing cancer.

The strong claim at the center of Kern’s essay is an ethical claim about what researchers owe cancer patients, about what cancer patients can demand from researchers (or any other members of society), and on what basis.

He writes:

During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?

Kern’s formulation of this interaction of rights and duties strikes me as odd. Essentially, he’s framing this as a question of whether people with families have a right to a career in cancer research, rather than whether cancer researchers have a right to have families (or any other parts of their lives that exist beyond their careers). Certainly, there have been those who have treated scientific careers as vocations requiring many sacrifices, who have acted as if there is a forced choice between having a scientific career and having a family (unless one has a wife to tend to that family).

We should acknowledge, however, that having a family life is just one way to “have a life.” Therefore, let’s consider the question this way: Do cancer researchers have a right to a life outside of work?

Kern’s suggestion is that this “right,” when exercised by researchers, is something that cancer patients end up paying for with their lives (unless they go into suspended animation while cancer researchers are spending time with their families or puttering around their gardens).

The big question, then, is what the researcher’s obligations are to the cancer patient — or to society in general.

If we’re to answer that question, I don’t think it’s fair to ignore the related questions: What are society’s obligations to the cancer patient? What are society’s obligations to researchers? And what are the cancer patient’s obligations in all of this?

We’ve already spent some time discussing scientists’ putative obligation to repay society’s investment in their training:

  • society has paid for the training the scientists have received (through federal funding of research projects, training programs, etc.)
  • society has pressing needs that can best (only?) be addressed if scientific research is conducted
  • those few members of society who have specialized skills that are needed to address particular societal needs have a duty to use those skills to address those needs (i.e., if you can do research and most other people can’t, then to the extent that society as a whole needs the research that you can do, you ought to do it)

Arguably, finding cures and treatments for cancer would be among those societal needs.

Once again the Spider-Man ethos rears its head: with great power comes great responsibility, and scientific researchers have great power. If cancer researchers won’t help find cures and treatments for cancer, who else can?

Here, I think we should pause to note that there is probably an ethically relevant difference between offering help and doing everything you possibly can. It’s one thing to donate a hundred bucks to charity and quite another to give all your money and sell all your worldly goods in order to donate the proceeds. It’s a different thing for a healthy person to donate one kidney than to donate both kidneys plus the heart and lungs.

In other words, there is help you can provide, but there seems also to be a level of help that it would be wrong for anyone else to demand of you. Possibly there is also a level of help that it would be wrong for you to provide even if you were willing to do so because it harms you in a fundamental and/or irreparable way.

And once we recognize that such a line exists between the maximum theoretical help you could provide and the help you are obligated to provide, I think we have to recognize that the needs of cancer patients do not — and should not — trump every other interest of other individuals or of society as a whole. If a cancer patient cannot lay claim to the heart and lungs of a cancer researcher, then neither can that cancer patient lay claim to every moment of a cancer researcher’s time.

Indeed, in this argument of duties that spring from ability, it seems fair to ask why it is not the responsibility of everyone who might get cancer to train as a cancer researcher and contribute to the search for a cure. Why should tuning out in high school science classes, or deciding to pursue a degree in engineering or business or literature, excuse one from responsibility here? (And imagine how hard it’s going to be to get kids to study for their AP Chemistry or AP Biology classes when word gets out that their success is setting them up for a career where they ought never to take a day off, go to the beach, or cultivate friendships outside the workplace. Nerds can connect the dots.)

Surely anyone willing to argue that cancer researchers owe it to cancer patients to work the kind of hours Kern seems to think would be appropriate ought to be asking what cancer patients — and the precancerous — owe here.

Does Kern think researchers owe all their waking hours to the task because there are so few of them who can do this research? Reports from job seekers over the past several years suggest that there are plenty of other trained scientists who could do this research but have not been able to secure employment as cancer researchers. Some may be employed in other research fields. Others, despite their best efforts, may not have secured research positions at all. What are their obligations here? Ought those employed in other research areas to abandon their current research to work on cancer, departments and funders be damned? Ought those who are not employed in a research field to be conducting their own cancer research anyway, without benefit of institution or facilities, research funding or remuneration?

Why would we feel scientific research skills, in particular, should make the individuals who have them so subject to the needs of others, even to the exclusion of their own needs?

Verily, if scientific researchers and the special skills they have are so very vital to providing for the needs of other members of society — vital enough that people like Kern feel it’s appropriate to criticize them for wanting any time out of the lab — doesn’t society owe it to its members to give researchers every resource they need for the task? Maybe even to create conditions in which everyone with the talent and skills to solve the scientific problems society wants solved can apply those skills and talents — and live a reasonably satisfying life while doing so?

My hunch is that most cancer patients would actually be less likely than Kern to regard cancer researchers as of merely instrumental value. I’m inclined to think that someone fighting a potentially life-threatening disease would be reluctant to deny someone else the opportunity to spend time with loved ones or to savor an experience that makes life worth living. To the extent that cancer researchers do sacrifice some aspects of the rest of their life to make progress on their work, I reckon most cancer patients appreciate these sacrifices. If more is needed for cancer patients, it seems reasonable to place this burden on society as a whole — teeming with potential cancer patients and their relatives and friends — to enable more (and more effective) cancer research to go on without drastically restricting the lives of the people qualified to conduct it, or writing off their interests in their own human flourishing.

As a group, scientists do have special capabilities with which they could help society address pressing problems. To the extent that they can help society address those problems, scientists probably should — not least because scientists are themselves part of society. But despite their special powers, scientists are still human beings with needs, desires, interests, and aspirations. A society that asks scientists to direct their skills and efforts towards solving its problems also has a duty to give scientists the same opportunities to flourish that it provides for its members who happen not to be scientists.

In the next post in this series, I’ll propose a less economic way to think about just what society might be buying when it invests in the training of scientists. My hope is that this will give us a richer and more useful picture of the obligations scientists and non-scientists have to each other as they are sharing a world.

* * * * *
Ancestors of this post first appeared on Adventures in Ethics and Science
_____

Kern, S. E. (2010). Where’s the passion? Cancer Biology & Therapy, 10(7), 655-657.
_____
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

What do I owe society for my scientific training? Obligations of scientists (part 6)

One of the dangers of thinking hard about your obligations is that you may discover one that you’ve fallen down on. As we continue our discussion of the obligations of scientists, I put myself under the microscope and invite you to consider whether I’ve incurred a debt to society that I have failed to pay back.

In the last post in this series, we discussed the claim that those in our society with scientific training have a positive duty to conduct scientific research in order to build new scientific knowledge. The source of that putative duty is two-fold. On the one hand, it’s a duty that flows from the scientist’s abilities in the face of societal needs: if people trained to build new scientific knowledge won’t build the new scientific knowledge needed to address pressing problems (like how to feed the world, or hold off climate change, or keep us all from dying from infectious diseases, or what have you), we’re in trouble. On the other hand, it’s a duty that flows from the societal investment that nurtures the development of these special scientific abilities: in the U.S., it’s essentially impossible to get scientific training at the Ph.D. level that isn’t subsidized by public funding. Public funding is used to support the training of scientists because the public expects a return on that investment in the form of grown-up scientists building knowledge which will benefit the public in some way. By this logic, people who take advantage of that heavily subsidized scientific training but don’t go on to build scientific knowledge when they are fully trained are falling down on their obligation to society.

People like me.

From September 1989 through December 1993, I was in a Ph.D. program in chemistry. (My Ph.D. was conferred January 1994.)

As part of this program, I was enrolled in graduate coursework (two chemistry courses per quarter for my first year, plus another chemistry course and three math courses, for fun, during my second year). I didn’t pay a dime for any of this coursework (beyond buying textbooks and binder paper and writing implements). Instead, tuition was fully covered by my graduate tuition stipend (which also covered “units” in research, teaching, and department seminar that weren’t really classes but appeared on our transcripts as if they were). Indeed, beyond the tuition reimbursement I was paid a monthly stipend of $1000, which seemed like a lot of money at the time (despite the fact that more than a third of it went right to rent).

I was also immersed in a research lab from January 1990 onward. Working in this lab was the heart of my training as a chemist. I was given a project to start with — a set of empirical questions to try to answer about a far-from-equilibrium chemical system that one of the recently-graduated students before me had been studying. I had to digest a significant chunk of experimental and theoretical literature to grasp why the questions mattered and what the experimental challenges in answering them might be. I had to assess the performance of the experimental equipment we had on hand, spend hours with calibrations, read a bunch of technical manuals, disassemble and reassemble pumps, write code to drive the apparatus and to collect data, identify experimental constraints that were important to control (and that, strangely, were not identified as such in the experimental papers I was working from), and also, when I determined that the chemical system I had started with was much too fussy to study with the equipment the lab could afford, to identify a different chemical system that I could use to answer similar questions and persuade my advisor to approve this new plan.

In short, my time in the lab had me learning how to build new knowledge (in a particular corner of physical chemistry) by actually building new knowledge. The earliest stages of my training had me juggling the immersion into research with my own coursework and with teaching undergraduate chemistry students as a lab instructor and teaching assistant. Some weeks, this meant I was learning less about how to make new scientific knowledge than I was about how to tackle my problem sets or how to explain buffers to pre-meds. Past the first year of the program, though, my waking hours were dominated by getting experiments designed, collecting loads of data, and figuring out what it meant. There were significant stretches of time during which I got into the lab by 5 AM and didn’t leave until 8 or 9 PM, and the weekend days when I didn’t go into the lab were usually consumed with coding, catching up on relevant literature, or drafting manuscripts or thesis chapters.

Once, for fun, some of us grad students did a back-of-the-envelope calculation of our hourly wages. It was remarkably close to the minimum wage I had been paid as a high school student in 1985. Still, we were getting world class scientific training, for free! We paid with the sweat of our brows, but wouldn’t we have to put in that time and effort to learn how to make scientific knowledge anyway? Sure, we graduate students did the lion’s share of the hands-on teaching of undergraduates in our chemistry department (undergraduates who were paying a significant tuition bill), but we were learning, from some of the best scientists in the world, how to be scientists!
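For the curious, the arithmetic behind that envelope is easy to reproduce. Here is a minimal sketch in Python, using the $1000 monthly stipend mentioned above and an assumed 70-hour work week (an illustrative figure of mine, not a number from the original calculation):

```python
# Back-of-the-envelope hourly "wage" for a graduate stipend.
# The $1,000/month stipend comes from the post; the 70-hour week
# is an assumed, illustrative figure.
monthly_stipend = 1000.00   # dollars per month
hours_per_week = 70         # assumed average of weekday and weekend lab time
weeks_per_month = 52 / 12   # about 4.33

implied_hourly_wage = monthly_stipend / (hours_per_week * weeks_per_month)
print(f"Implied hourly wage: ${implied_hourly_wage:.2f}")  # -> $3.30
```

With inputs like these, the implied wage lands right around the 1985 U.S. federal minimum of $3.35 an hour.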

Having gotten what amounts to a full ride for that graduate training, due in significant part to public investment in scientific training at the Ph.D. level, shouldn’t I be hunkered down somewhere working to build more chemical knowledge to pay off my debt to society?

Do I have any good defense to offer for the fact that I’m not building chemical knowledge?

For the record, when I embarked on Ph.D. training in chemistry, I fully expected to be an academic chemist when I grew up. I really did imagine that I’d have a long career building chemical knowledge, training new chemists, and teaching chemistry to an audience that included some future scientists and some students who would go on to do other things but who might benefit from a better understanding of chemistry. Indeed, when I was applying to graduate programs, my chemistry professors were talking up the “critical shortage” of Ph.D. chemists. (By January of my first year in graduate school, I was reading reports that there were actually something like 30% more Ph.D. chemists than there were jobs for Ph.D. chemists, but a first-year grad student is not necessarily freaking out about the job market while she is wrestling with her experimental system.) I did not embark on a chemistry Ph.D. as a collectable. I did not set out to be a dilettante.

In the course of the research that was part of my Ph.D. training, I actually built some new knowledge and shared it with the public, at least to the extent of publishing it in journal articles (four of them, an average of one per year). It’s not clear what the balance sheet would say about this rate of return on the public’s investment in my scientific training — nor whether most taxpayers would judge the knowledge I built (about the dynamics of far-from-equilibrium chemical reactions and about ways to devise useful empirical tests of proposed reaction mechanisms) as useful knowledge.

Then again, no part of how our research was evaluated in grad school was framed in terms of societal utility. You might try to describe how your research had broader implications that someone outside your immediate subfield could appreciate if you were writing a grant to get the research funded, but solving society’s pressing scientific problems was not the sine qua non of the research agendas we were advancing for our advisors or developing for ourselves.

As my training was teaching me how to conduct serious research in physical chemistry, it was also helping me to discover that my temperament was maybe not so well suited to life as a researcher in physical chemistry. I found, as I was struggling with a grant application that asked me to describe the research agenda I expected to pursue as an academic chemist, that the questions that kept me up at night were not fundamentally questions about chemistry. I learned that no part of me was terribly interested in the amount of grant-writing and lab administration that would have been required of me as a principal investigator. Looking at the few women training me at the Ph.D. level, I surmised that I might have to delay or skip having kids altogether to survive academic chemistry — and that the competition for those faculty jobs where I’d be able to do research and build new knowledge was quite fierce.

Plausibly, had I been serious about living up to my obligation to build new knowledge by conducting research, I could have been a chemist in industry. As I was finishing up my Ph.D., the competition for industry jobs for physical chemists like me was also pretty intense. What I gathered as I researched and applied for industry jobs was that I didn’t really like the culture of industry. And, while working in industry would have been a way for me to conduct research and build new knowledge, I might have ended up spending more time solving the shareholders’ problems than solving society’s problems.

If I wasn’t going to do chemical research in an academic career and I wasn’t going to do chemical research in an industrial job, how should I pay society back for the publicly-supported scientific training I received? Should I be building new scientific knowledge on my own time, in my own garage, until I’ve built enough that the debt is settled? How much new knowledge would that take?

The fact is, none of us Ph.D. students seemed to know at the time that public money was making it possible for us to get graduate training in chemistry without paying for that training. Nor was there an explicit contract we were asked to sign as we took advantage of this public support, agreeing to work for a certain number of years upon the completion of our degrees as chemists serving the public’s interests. Rather, I think most of us saw an opportunity to pursue a subject we loved and to get the preparation we would need to become principal investigators in academia or industry if we decided to pursue those career paths. Most of us probably didn’t know enough about what those career paths would be like to have told you at the beginning of our Ph.D. training whether those career paths would suit our talents or temperaments — that was part of what we were trying to find out by pursuing graduate studies. And practically, many of us would not have been able to find out if we had had to pay the costs of our Ph.D. training ourselves.

If no one who received scientific training subsidized by the public went on to build new scientific knowledge, this would surely be a problem for society. But, do we want to say that everyone who receives such subsidized training is on the hook to pay society back by building new scientific knowledge until such time as society has all the scientific knowledge it needs?

That strikes me as too strong. However, given that I’ve benefitted directly from a societal investment in Ph.D. training that, for all practical purposes, I stopped using in 1994, I’m probably not in a good position to make an objective judgment about just what I do owe society to pay back this debt. Have I paid it back already? Is society within its rights to ask more of me?

Here, I’ve thought about the scientist’s debt to society — my debt to society — in very personal terms. In the next post in the series, we’ll revisit these questions on a slightly larger scale, looking at populations of scientists interacting with the larger society and seeing what this does to our understanding of the obligations of scientists.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

On speaking up when someone in your profession behaves unethically.

On Twitter recently there was some discussion of a journalist who wrote and published a piece that arguably did serious harm to its subject.

As the conversation unfolded, Kelly Hills helpfully dropped a link to the Society of Professional Journalists Code of Ethics. Even cursory inspection of this code made it quite clear that the journalist (and editor, and publisher) involved in the harmful story weren’t just making decisions that happened to turn out badly. Rather, they were acting in ways that violate the ethical standards for the journalistic profession articulated in this code.

One take-away lesson from this is that being aware of these ethical standards and letting them guide one’s work as a journalist could head off a great deal of harm.

Something else that came up in the discussion, though, was what seemed like a relative dearth of journalists standing up to challenge the unethical conduct of the journalist (and editor, and publisher) in question. Edited to add: A significant number of journalists even used social media to give the problematic piece accolades.

I follow a lot of journalists on Twitter. A handful of them condemned the unethical behavior in this case. The rest may be busy with things offline. It is worth noting that the Society of Professional Journalists Code of Ethics includes the following:

Journalists should:

  • Clarify and explain news coverage and invite dialogue with the public over journalistic conduct.
  • Encourage the public to voice grievances against the news media.
  • Admit mistakes and correct them promptly.
  • Expose unethical practices of journalists and the news media.
  • Abide by the same high standards to which they hold others.

That fourth bullet-point doesn’t quite say that journalists ought to call out bad journalistic behavior that has already been exposed by others. However, using one’s voice to condemn unethical conduct when you see it is one of the ways that people know that you’re committed to ethical conduct. (The other way people know you’re committed to ethical conduct is that you conduct yourself ethically.)

In a world where the larger public is probably going to take your professional tribe as a package deal, extending trust to the lot of you or feeling mistrust for the lot of you, reliably speaking up about problematic conduct when you see it is vital in earning the public’s trust. Moreover, criticisms from inside the professional community seem much more likely to be effective in persuading its members to embrace ethical conduct than criticisms from outside the profession. It’s just too easy for people on the inside to dismiss the critique from people on the outside with, “They just don’t understand what we do.”

There’s a connection here between what’s good for the professional community of journalists and what’s good for the professional community of scientists.

When scientists behave unethically, other scientists need to call them out — not just because the unethical behavior harms the integrity of the scientific record or the opportunities of particular members of the scientific community to flourish, or the health or safety of patients, but because this is how members of the community teetering on the brink of questionable decisions remember that the community does not tolerate such behavior. This is how they remember that those codes of conduct are not just empty words. This is how they remember that their professional peers expect them to act with integrity every single day.

If members of a professional community are not willing to demand ethical behavior from each other in this way, how can the public be expected to trust that professional community to behave ethically?

Undoubtedly, there are situations that can make it harder to take a stand against unethical behavior in your professional community, power disparities that can make calling out the bad behavior dangerous to your own standing in the professional community. As well, shared membership in a professional community creates a situation where you’re inclined to give your fellow professional the benefit of the doubt rather than starting from a place of distrust in your engagements.

But if only a handful of voices in your professional community are raised to call out problematic behavior that the public has identified and is taking very seriously, what does that communicate to the public?

Maybe that you see the behavior, don’t think it’s problematic, but can’t be bothered to explain why it’s not problematic (because the public’s concerns just don’t matter to you).

Maybe that you see the behavior, recognize that it’s problematic, but don’t actually care that much when it happens (and if the public is concerned about it, that’s their problem, not yours).

Maybe that you’re working very hard not to see the problematic behavior (which, in this case, probably means you’re also working very hard not to hear the public voicing its concerns).

Sure, there’s a possibility that you’re working very hard within your professional community to address the problematic behavior and make sure it doesn’t happen again, but if the public doesn’t see evidence of these efforts, it’s unreasonable to expect them to know they’re happening.

It’s hard for me to see how the public’s trust in a profession is supposed to be strengthened by people in the professional community not speaking out against unethical conduct of members of that professional community that the public already knows about. Indeed, I think a profession that only calls out bad behavior in its ranks that the public already knows about is skating on pretty thin ice.

It surely feels desperately unfair to all the members of a professional community working hard to conduct themselves ethically when the public judges the whole profession on the basis of the bad behavior of a handful of its members. One may be tempted to protest, “We’re not all like that!” That’s not really addressing the public’s complaint, though: The public sees at least one of you who’s “like that”; what are the rest of you doing about that?

If the public has good reason to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks, the public is more likely to see the bad actors as outliers.

But the public is more likely to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks when they see that happen, regularly.

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

If you’re a scientist, are there certain things you’re obligated to do for society (not just for your employer)? If so, where does this obligation come from?

This is part of the discussion we started back in September about special duties or obligations scientists might have to the non-scientists with whom they share a world. If you’re just coming to the discussion now, you might want to check out the post where we set out some groundwork for the discussion, plus the three posts on scientists’ negative duties (i.e., the things scientists have an obligation not to do): our consideration of powers that scientists have and should not misuse, our discussion of scientific misconduct, the high crimes against science that scientists should never commit, and our examination of how plagiarism is not only unfair but also hazardous to knowledge-building.

In this post, finally, we lay out some of the positive duties that scientists might have.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette gives a pretty forceful articulation of a set of positive duties for scientists. She asserts that scientists have a duty to do research, and a duty to use research findings in ways that serve the public good. Recall that these positive duties are in addition to scientists’ negative duty to ensure that the knowledge and technologies created by the research do not harm anyone.

Where do scientists’ special duties come from? Shrader-Frechette identifies a number of sources. For one thing, she says, there are obligations that arise from holding a monopoly on certain kinds of knowledge and services. Scientists are the ones in society who know how to work the electron microscopes and atom-smashers. They’re the ones who have the equipment and skills to build scientific knowledge. Such knowledge is not the kind of thing your average non-scientist could build for himself.

Scientists also have obligations that arise from the fact that they have a good chance of success (at least, better than anyone else) when it comes to educating the public about scientific matters or influencing public policy. The scientists who track the evidence that human activity leads to climate change, for example, are the ones who might be able to explain that evidence to the public and argue persuasively for measures that are predicted to slow climate change.

As well, scientists have duties that arise from the needs of the public. If the public’s pressing needs can only be met with the knowledge and technologies produced by scientific research – and if non-scientists cannot produce such knowledge and technologies themselves – then if scientists do not work to meet these needs, who will?

As we’ve noted before, there is, in all of this, that Spider-Man superhero ethos: with great power comes great responsibility. When scientists realize how much power their knowledge and skills give them relative to the non-scientists in society, they begin to see that their duties are greater than they might have thought.

Let’s turn to what I take to be Shrader-Frechette’s more controversial claim: that scientists have a positive duty to conduct research. Where does this obligation come from?

For one thing, she argues, knowledge itself is valuable, especially in democratic societies where it could presumably help us make better choices than we’d be able to make with less knowledge. Thus, those who can produce knowledge should produce it.

For another thing, Shrader-Frechette points out, society funds research projects (through various granting agencies and direct funding from governmental entities). Researchers who accept such research funding are not free to abstain from research. They can’t take the grants and put an addition on the house. Rather, they are obligated to perform the contracted research. This argument is pretty uncontroversial, I think, since asking for money to do the research that will lead to more scientific knowledge and then failing to use that money to build more scientific knowledge is deceptive.

But here’s the argument that I think will meet with more resistance, at least from scientists: In the U.S., in addition to funding particular pieces of scientific research, society pays the bill for training scientists. This is not just true for scientists trained at public colleges and universities. Even private universities get a huge chunk of their money to fund research projects, research infrastructure, and the scientific training they give their students from public sources, including but not limited to federal funding agencies like the National Science Foundation and the National Institutes of Health.

The American people are not putting up this funding out of the goodness of their hearts. Rather, the public invests in the training of scientists because it expects a return on this investment in the form of the vital knowledge those trained scientists go on to produce and share with the public. Since the public pays to train people who can build scientific knowledge, the people who receive this training have a duty to go forth and build scientific knowledge to benefit the public.

Finally, Shrader-Frechette says, scientists have a duty to do research because if they don’t do research regularly, they won’t remain knowledgeable in their field. Not only will they not be up on the most recent discoveries or what they mean, but they will start to lose the crucial experimental and analytic skills they developed when they were being trained as scientists. For the philosophy fans in the audience, this point in Shrader-Frechette’s argument is reminiscent of Immanuel Kant’s example of how the man who prefers not to cultivate his talents is falling down on his duties. If everyone in society chose not to cultivate her talents, each of us would need to be completely self-sufficient (since we could not receive aid from others exercising their talents on our behalf) – and even that would not be enough, since we would not be able to rely on our own talents, having decided not to cultivate them.

On the basis of Shrader-Frechette’s argument, it sounds like every member of society who has had the advantage of scientific training (paid for by your tax dollars and mine) should be working away in the scientific knowledge salt-mine, at least until science has built all the knowledge society needs it to build.

And here’s where I put my own neck on the line: I earned a Ph.D. in chemistry (conferred in January 1994, almost exactly 20 years ago). Like other students in U.S. Ph.D. programs in chemistry, I did not pay for that scientific training. Rather, as Shrader-Frechette points out, my scientific training was heavily subsidized by the American taxpayer. I have not built a bit of new chemical knowledge since the middle of 1994 (when I wrapped up one more project after completing my Ph.D.).

Have I fallen down on my positive duties as a trained scientist? Would it be fair for American taxpayers to try to recover the funds they invested in my scientific training?

We’ll take up these questions (among others) in the next installment of this series. Stay tuned!

_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was too ambiguous to belong in an official misconduct definition. Maybe, the panel worried, “serious deviations from accepted research practices” could be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation itself would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances in which institutions could be argued to have taken advantage of the ambiguity of an “other serious deviations” clause to go after a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General at the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to get the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of “accepted practices” is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar discussion on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

_____
Buzzelli, D. E. (1993). The definition of misconduct in science: a view from NSF. Science, 259(5095), 584-648.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Careers (not just jobs) for Ph.D.s outside the academy.

A week ago I was in Boston for the 2013 annual meeting of the History of Science Society. Immediately after the session in which I was a speaker, I attended a session (Sa31 in this program) called “Happiness beyond the Professoriate — Advising and Embracing Careers Outside the Academy.” The discussion there was specifically pitched at people working in the history of science (whether earning their Ph.D.s or advising those who are), but much of it struck me as broadly applicable to people in other fields — not just fields like philosophy, but also science, technology, engineering, and mathematics (STEM) fields.

The discourse in the session was framed in terms of recognizing, and communicating, that getting a job just like your advisor’s (i.e., as a faculty member at a research university with a Ph.D. program in your field — or, loosening it slightly, as permanent faculty at a college or university, even one not primarily focused on research or on training new members of the profession at the Ph.D. level) shouldn’t be a necessary condition for maintaining your professional identity and place in the professional community. Make no mistake, people in one’s discipline (including those training new members of the profession at the Ph.D. level) frequently do discount people as no longer really members of the profession for failing to succeed in the One True Career Path, but the panel asserted that they shouldn’t.

And, they provided plenty of compelling reasons why the “One True Career Path” approach is problematic. Chief among these, at least in fields like history, is that this approach feeds the creation and growth of armies of adjunct faculty, hoping that someday they will become regular faculty, and in the meantime working for very low wages relative to the amount of work they do (and relative to their training and expertise), experiencing serious job insecurity (sometimes not finding out whether they’ll have classes to teach until the academic term is actually underway), and enduring all manner of employer shenanigans (like having their teaching loads reduced to 50% of full time so the universities employing them are not required by law to provide health care coverage). Worse, insistence on the One True Career Path fails to acknowledge that happiness is important.

Panelist Jim Grossman noted that the very language of “alternative careers” reinforces this problematic view by building in the assumption that there is a default career path. Speaking of “alternatives” instead might challenge the assumption that all options other than the default are lesser options.

Grossman identified other bits of vocabulary that ought to be excised from these discussions. He argued against speaking of “the job market” when one really means “the academic job market”. Otherwise, the suggestion is that you can’t really consider those other jobs without exiting the profession. Talking about “job placement,” he said, might have made sense back in the day when the chair of a hiring department called the chair of another department to say, “Send us your best man!” rather than conducting an actual job search. Those days are long gone.

And Grossman had lots to say about why we should stop talking about “overproduction of Ph.D.s.”

Ph.D.s, he noted, are earned by people, not produced like widgets on a factory line. Describing the number of new Ph.D.-holders each year as overproduction is claiming that there are too many — but again, this is too many relative to a specific kind of career trajectory assumed implicitly to be the only one worth pursuing. There are many sectors of the career landscape that could benefit from the talents of these Ph.D.-holders, so why are we not describing the current situation as one of “underconsumption of Ph.D.s”? Finally, the “overproduction of Ph.D.s” locution doesn’t seem helpful in a context where there appears to be no good way to stop departments from “producing” as many Ph.D.s as they want to. If market forces were enough to address this imbalance, we wouldn’t have armies of adjuncts.

Someone in the discussion pointed out that STEM fields have for some time had similar issues of Ph.D. supply and demand, suggesting that they might be ahead of the curve in developing useful responses that other disciplines could borrow. However, the situation in STEM fields differs in that industrial career paths have been treated as legitimate (and as not removing you from the profession). And, more generally, society seems to take the skills and qualities of mind developed during a STEM Ph.D. as useful and broadly applicable, while those developed during a history or philosophy Ph.D. are assumed to be hopelessly esoteric. That said, it was noted that while STEM fields don’t generate the same armies of adjuncts as humanities fields, they do have what might be described as the “endless postdoc” problem.

Given that the structural stagnation of the academic job market is real (and has been a reality for something like 40 years in the history of science), panelist Lynn Nyhart observed that it would be foolish for Ph.D. students not to consider — and prepare for — other kinds of jobs. As well, Nyhart argued that as long as faculty take on graduate students, they have a responsibility to help them find jobs.

Despite professing to be essentially clueless about career paths other than academia, advisors do have resources they can draw upon in helping their graduate students. Among these is the network of Ph.D. alumni from their graduate program, as well as the network of classmates from their own Ph.D. training. Chances are that a number of people in these networks are doing a wide range of different things with their Ph.D.s — and that they could provide valuable information and contacts. (Also, keeping in contact with these folks recognizes that they are still valued members of your professional community, rather than treating them as dead to you if they did not pursue the One True Career Path.)

Nyhart also recommended Versatilephd.com, especially the PhD Career Finder tab, as a valuable resource for exploring the different kinds of work for which Ph.D.s in various fields can serve as preparation. Some of the good stuff on the site is premium content, but if your university subscribes to the site your access to that premium content may already be paid for.

Nyhart noted that preparing Ph.D. students for a wide range of careers doesn’t require lowering discipline-specific standards, nor changing the curriculum — although, as Grossman pointed out, it might mean thinking more creatively about what skills, qualities of mind, and experiences existing courses impart. After all, skills that are good training for a career in academia — being a good teacher, an effective committee member, an excellent researcher, a persuasive writer, a productive collaborator — are skills that are portable to other kinds of careers.

David Attis, who has a Ph.D. in history of science and has been working in the private sector for about a decade, mentioned some practical skills worth cultivating for Ph.D.s pursuing private sector careers. These include having a tight two-minute explanation of your thesis geared to a non-specialist audience, being able to demonstrate your facility in approaching and solving non-academic problems, and being able to work on the timescale of business, not thesis writing (i.e., five hours to write a two-page memo is far too slow). Attis said that private sector employers are looking for people who can work well on teams and who can be flexible in contexts beyond teaching and research.

I found the discussion in this session incredibly useful, and I hope some of the important issues raised there will find their way to the graduate advisors and Ph.D. students who weren’t in the room for it, no matter what their academic discipline.

On the labor involved in being part of a community.

On Thursday of this week, registration for ScienceOnline Together 2014, the “flagship annual conference” of ScienceOnline, opened (and closed). ScienceOnline describes itself as a “global, ongoing, online community” made up of “a diverse and growing group of researchers, science writers, artists, programmers, and educators — those who conduct or communicate science online”.

On Wednesday of this week, Isis the Scientist expressed her doubts that the science communication community for which ScienceOnline functions as a nexus is actually a “community” in any meaningful sense:

The major fundamental flaw of the SciComm “community” is that it is a professional community with inconsistent common values. En face, one of its values is the idea of promoting science. Another is promoting diversity and equality in a professional setting. But, at its core, its most fundamental value are these notions of friendship, support, and togetherness. People join the community in part to talk about science, but also for social interactions with other members of the “community”.  While I’ve engaged in my fair share of drinking and shenanigans  at scientific conferences, ScienceOnline is a different beast entirely.  The years that I participated in person and virtually, there was no doubt in my mind that this was a primarily social enterprise.  It had some real hilarious parts, but it wasn’t an experience that seriously upgraded me professionally.

People in SciComm feel confident talking about “the community” as a tangible thing with values and including people in it, even when those people don’t value the social structure in the same way. People write things that are “brave” and bloviate in ways that make each other feel good and have “deep and meaningful conversations about issues” that are at the end of the day nothing more than words. It’s a “community” that gives out platters full of cookies to people who claim to be “allies” to causes without actually having to ever do anything meaningful. Without having to outreach in any tangible way, simply because they claim to be “allies.” Deeming yourself an “ally” and getting a stack of “Get Out of Jail, Free” cards is a hallmark of the “community”.

Isis notes that the value of “togetherness” in the (putative) SciComm community is often prioritized over the value of “diversity” — and that this is a pretty efficient way to undermine the community. She suggests that focusing on friendship rather than professionalism entrenches this problem and writes “I have friends in academia, but being a part of academic science is not predicated on people being my friends.”

I’m very sympathetic to Isis’s concerns here. I don’t know that I’d say there’s no SciComm community, but that might come down to a disagreement about where the line is between a dysfunctional community and a lack of community altogether. But that’s like the definitional dispute about how many hairs one needs on one’s head to shift from the category of “bald” to the category of “not-bald” — for the case we’re trying to categorize there’s still agreement that there’s a whole lot of bare skin hanging out in the wind.

The crux of the matter, whether we have a community or are trying to have one, is whether we have a set of shared values and goals that is sufficient for us to make common cause with each other and to take each other seriously — to take each other seriously even when we offer critiques of other members of the community. For if people in the community dismiss your critiques out of hand, if they have the backs of some members of the community and not others (and whose they have and whose they don’t sorts out along lines of race, gender, class, and other dimensions that the community’s shared values and goals purportedly transcend), it’s pretty easy to wonder whether you are actually a valued member of the community, whether the community is for you in any meaningful way.

I do believe there’s something like a SciComm community, albeit a dysfunctional one. I will be going to ScienceOnline Together 2014, as I went to the seven annual meetings preceding it. Personally, even though I am a full-time academic like Dr. Isis, I do find professional value in this conference. Probably this has to do with my weird interdisciplinary professional focus — something that makes it harder for me to get all the support and inspiration and engagement I need from the official professional societies that are supposed to be aligned with my professional identity. And because of the focus of my work, I am well aware of dysfunction in my own professional community and in other academic and professional communities.

While there has been a pronounced social component to ScienceOnline as a focus of the SciComm community, ScienceOnline (and its ancestor conferences) has never felt purely social to me. I have always had a more professional agenda there — learning what’s going on in different realms of practice, getting my ideas before people who can give me useful feedback on them, trying to build myself a big-picture, nuanced understanding of science engagement and how it matters.

And in recent years, my experience of the meetings has been more like work. Last year, for example, I put a lot of effort into coordinating a kid-friendly room at the conference so that attendees with small children could have some child-free time in the sessions. It was a small step towards making the conference — and the community — more accessible and welcoming to all the people who we describe as being part of the community. There’s still significant work to do on this front. If we opt out of doing that work, we are sending a pretty clear message about who we care about having in the community and who we view as peripheral, about whose voices and interests we value and whose we do not.

Paying attention to who is being left out, to whose voices are not being heard, to whose needs are not being met, takes effort. But this effort is part of the regular required maintenance for any community that is not completely homogeneous. Skipping it is a recipe for dysfunction.

And the maintenance, it seems, is required pretty much every damn day.

Friday, in the Twitter stream for the ScienceOnline hashtag #scio14, I saw a tweet from Bug Girl saying that she felt unsafe.

To find out what was making Bug Girl feel unsafe, I went back and watched Joe Hanson’s Thanksgiving video, in which Albert Einstein was portrayed as making unwelcome advances on Marie Curie, cheered on by his host, culminating in a naked assault on Curie.

Given the recent upheaval in the SciComm community around sexual harassment — with lots of discussion, because that’s how we roll — it is surprising and shocking that this video plays sexual harassment and assault for laughs, apparently with no thought to how many women are still targets of harassment, no consideration of how chilly the climate for women in science remains.

Here’s a really clear discussion of what makes the video problematic, and here’s Joe Hanson’s response to the criticisms. I’ll be honest: it looks to me like Joe still doesn’t really understand what people (myself included) took to social media to explain to him. I’m hopeful that he’ll listen and think and eventually get it better. If not, I’m hopeful that people will keep piping up to explain the problem.

But not everyone was happy that a publicly posted video (on a pretty visible platform — PBS Digital Studio — supported by taxpayers in the U.S.) was greeted with a public critique from members of our putative community.

The objections raised on Twitter — many of them raised with obvious care as far as being focused on the harm and communicated constructively — were described variously as “drama,” “infighting,” a “witch hunt” and “burning [Joe] at the stake”. (I’m not going to link the tweets because a number of the people who made those characterizations thought about it and walked them back.)

People insisted, as they do pretty much every time, that the proper thing to do was to address the problem privately — as if that’s the only ethical way to deal with a public wrong, or as if it’s the most effective way to fix the harm. Despite what some will argue, I don’t think we have good evidence for either of those claims.

So let’s come back to regular maintenance of the community and think harder about this. I’ve written before that

if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out.

This strikes me as good reason not to take all the communications to private channels. People watching and listening on the sidelines are gathering information on whether their so-called community shares their values, on whether it has their back.

Indeed, the people on the sidelines are also watching and listening to the folks dismissing critiques as drama. Operationally, “drama” seems to amount to “Stuff I’d rather you not discuss where I can see or hear it,” which itself shades quickly into “Stuff that really seems to bother other people, for whom I seem to be unable to muster any empathy, because they are not me.”

Let me pause to note what I am not claiming. I am not saying that every member of a community must be an active member of every conversation within that community. I am not saying that empathy requires you to personally step up and engage in every difficult dialogue every time it rolls around. Sometimes you have other stuff to do, or you know that the cost of being patient and calm is more than you can handle at the moment, or you know you need to listen and think for a while before you get it well enough to get into it.

But going to the trouble to speak up to convey that the conversation is a troublesome one to have happening in your community — that you wish people would stop making an issue of it, that they should just let it go for the sake of peace in the community — that’s something different. That’s telling the people expressing their hurt and disappointment and higher expectations that they should swallow it, that they should keep it to themselves.

For the sake of the community.

For the sake of the community of which they are clearly not really valued members, if they are the ones, always, who need to shut up and let their issues go for the greater good.

Arguably, if one is really serious about the good of the community, one should pay attention to how this kind of dismissal impacts the community. Now is as good a moment as any to start.

How far does the tether of your expertise extend?

Talking about science in the public sphere is tricky, even for someone with a lot of training in a science.

On the one hand, there’s a sense that it would be a very good thing if the general level of understanding of science were significantly higher than it is at present — if you could count on the people in your neighborhood to have a basic grasp of where scientific knowledge comes from, as well as of the big pieces of scientific knowledge directly relevant to the project of getting through their world safely and successfully.

But there seem to be a good many people in our neighborhood who don’t have this relationship with science. (Here, depending on your ‘druthers, you can fill in an explanation in terms of inadequately inspiring science teachers and/or curricula, or kids too distracted by TV or adolescence or whatever to engage with those teachers and/or curricula.) This means that, if these folks aren’t going to go it alone and try to evaluate putative scientific claims they encounter themselves, they need to get help from scientific experts.

But who’s an expert?

It’s well and good to say that a journalism major who never quite finished his degree is less of an authority on matters cosmological than a NASA scientist, but what should we say about engineers or medical doctors with “concerns” about evolutionary theory? Is a social scientist who spent time as an officer on a nuclear submarine an expert on nuclear power? Is an actor or talk show host with an autistic child an expert on the aetiology of autism? How important is all that specialization research scientists do? To some extent, doesn’t all science follow the same rules, thus equipping any scientist to weigh in intelligently about it?

Rather than give you a general answer to that question, I thought it best to lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.

As someone trained in a science, I am qualified:

  1. to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
  2. to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
  3. to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
  4. perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
  5. to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
  6. to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 necessarily count as support for alternative theory T2).
  7. to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
  8. to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
  9. to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).

This list of my qualifications is an expression of my comfort level more than anything else. I would argue that it’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends — and owning up to that when people look to you as an expert — is part of being a responsible scientist.

_______
An ancestor version of this post was published on my other blog.

Professional communities, barriers to inclusion, and the value of a posse.

Last week, I wrote a post about an incident connected to a professional conference. A male conference-goer wrote a column attempting to offer praise for a panel featuring four female conference-goers but managed to package this praise in a way that reinforced sexist assumptions about the value women colleagues add to a professional community.

The women panelists communicated directly with the male commentator about his problematic framing. The male commentator seemed receptive to this feedback. I blogged about it as an example of why it’s important to respond to disrespect within professional communities, even if it’s not intended as disrespect, and despite the natural inclination to let it go. And my post was praised for offering a discussion of the issue that was calm, sensitive, and measured.

But honestly? I’m unconvinced that my calm, sensitive, measured discussion will do one whit of good to reduce the incidence of such casual sexism in the future, in the community of science journalists or in any other professional community. Perhaps there were some readers who, owing to the gentle tone, were willing to examine the impact of describing colleagues who are women primarily in terms of their looks; but if a less gentle tone would have put them off from considering the potential for harm to members of their professional communities, it’s hard to believe these readers would devote much energy to combatting these harms — whether or not they were being asked nicely to do so.

Sometimes someone has to really get your attention — in a way that shakes you up and makes you deeply uncomfortable — in order for you to pay attention going forward. Maybe feeling bad about the harm to someone else is a necessary first step to developing empathy.

And certainly, laying out the problem while protecting you from what it feels like to be one of the people struggling under the effects of that problem takes some effort. If going to all that trouble doesn’t actually leave enough of an impression to keep the problem from happening some more, what’s the point?

* * * * *

What does it take to create a diverse professional community? It requires more than an absence of explicit rules or standing practices that bar certain kinds of people from membership, more even than admitting lots of different kinds of people into the “pipeline” for that profession. If you’re in the community by virtue of your educational or employment status but you’re not actually part of the discussions that define your professional community, that may help the appearance of diversity, but not the reality of it.

The chilly climate women have been talking about in a variety of male-dominated professional communities is a real thing.

Being a real member of a professional community includes being able to participate fully in venues for getting your work and insights into the community’s discussions. These venues include journals and professional meetings, as well as panels or study sections that evaluate grant proposals. Early in one’s membership in a professional community, venues like graduate seminars and department symposia are also really important.

One problem here is that usually individuals without meaningful access to participation are also without the power in the community required to effectively address particular barriers to their access. Such individuals can point out the barriers, but they are less likely to be listened to than someone else in the community without those barriers.

Everyday sexism is just one such barrier.

This barrier can take a number of particular forms.

For the students on their way into a professional community, it’s a barrier to find out that senior members of the community who you expected would help train you and eventually take you seriously as a colleague are more inclined to sexualize you or full-on sexually harass you. It’s a barrier when you see people in your community minimize that behavior, whether offhandedly or with rather more deliberation.

It’s a barrier when members of your community focus on your looks rather than your intellectual contributions, or act like it’s cute or somehow surprising that someone like you could actually make an intellectual contribution. It’s a further barrier when other members of your community advise you to ignore tangible disrespect because surely it wasn’t intentional — especially when those other members of the community make no visible effort to help address the disrespect.

It’s a barrier when students don’t see people like themselves represented among the recognized knowledge-builders in the professional community as they are being taught the core knowledge expected of members of that community. It’s also a barrier when the more senior members of the professional community are subject to implicit biases in their expert evaluations of who’s cut out to be a full contributing member of the community.

Plenty of well-meaning folks in professional communities that have a hard time fully integrating women (among others) may be puzzled as to why this is so. If they don’t personally experience the barriers, they may not even realize that they’re there. Listening to lived experiences of their female colleagues might reveal some of the barriers — but listening also assumes that the community really takes its female members seriously as part of the community, when this is precisely the problem with which the women in the community are struggling.

* * * * *

Professional meetings can be challenging terrain for women in predominantly male professional communities. Such meetings are essential venues in which to present one’s work and get career credit for doing so. They are also crucially important for networking and building relationships with people who might become collaborators, who will be called on to evaluate one’s work, and who are the peers with whom one hopes to be engaged in productive discussions over the course of one’s career.

There is also a strong social component to these meetings, an imperative to have fun with one’s people — which is to say, in this context, the people with whom one shares a professional community. Part of this, I think, is related to how strongly people identify with their professional community: the connection is not just about what people in that community do but about who they are. They have taken on the values and goals of the professional community as their own. It’s not just a job, it’s a social identity.

For some people, the social component of professional meetings has a decidedly carnal flavor. Unfortunately, rejecting a pass from someone in your professional community, especially someone with more power in that community than you, can screw with your professional relationships within the community — even assuming that the person who made the pass accepts your “no” and moves on. In other cases, folks within the professional community may be perfectly aware of power gradients and willing to use them to get what they want, applying persistent unwanted attention that can essentially deprive the target of full participation in the conference. Given the importance professional conferences have, this is a significant professional harm.

Lest you imagine that this is a merely hypothetical worry, I assure you that it is not. If you ask around you may discover that some of the members of your professional community choose which conference sessions to attend in order to avoid their harassers. That is surely a constraint on how much one can get out of a professional meeting.

Recently, a number of conferences and conventions have adopted policies against harassment, policies that are getting some use. Many of these are fan-oriented conventions or tech conferences, rather than the kind of research-oriented, academically inclined professional meetings most of us university types attend. I know of at least one scientific professional society (the American Astronomical Society) that has adopted a harassment policy for its meetings and that seems generally to be moving in a good direction from the point of view of building an inclusive community. However, when I checked the websites of three professional societies to which I belong (American Chemical Society, American Philosophical Association, and Philosophy of Science Association), I could find no sign of anti-harassment policies for their conferences. This is disappointing, but not surprising to me.

The absence of anti-harassment policies doesn’t mean that there’s no harassment happening at the meetings of these professional societies, either.

And even if a professional community has anti-harassment policies in place for its meetings, this doesn’t remove the costs — especially on a relatively junior member of the community — associated with asking that the policies be enforced. Will a professional society be willing to caution a member of the program committee for the conference? To eject the most favored grad student of a luminary in the field — or, for that matter, a luminary — who violates the policy? Shining light on over-the-line behavior at conferences is a species of whistleblowing, and is likely to be received about as warmly as other forms.

* * * * *

Despite the challenges, I don’t think the prospects for building diverse and productive professional communities are dim. Progress is being made, even if most weeks the pace of progress is agonizingly slow.

But I think things could get better faster if people who take their professional communities for granted step up and become more active in maintaining them.

In much the same way that it is not science that is self-correcting but rather individual scientists who bother to engage critically with particular contributions to the ongoing scientific conversation and keep the community honest, a healthy professional community doesn’t take care of itself — at least, not without effort on the part of individual members of the community.

Professional communities require everyday maintenance. They require tending to keep their collective actions aligned with the values members of the community say they share.

People who work very hard to be part of a professional community despite systemic barriers are people committed enough to the values of the professional community to fight their way through a lot of crap. These are people who really care about the values you purport to care about as a member of the professional community, else why would they waste their time and effort fighting through the crap?

These are the kind of people you should want as colleagues, at least if you value what you say you value. Their contributions could be huge in accomplishing your community’s shared goals and ensuring your community a vibrant future.

Even more than policies that aim to address systemic barriers to their entry to the professional community, these people need a posse. They need others in the community who are unwilling to sacrifice their values — or the well-being of less powerful people who share those values — and who will take consistent stands against behaviors that create barriers and that undermine the shared work of the community.

These stands needn’t be huge heroic gestures. It could be as simple as reliably being that guy who asks for better gender balance in planning seminars, or who reacts to casual sexist banter with, “Dude, not cool!” It could take the form of asking about policies that might lessen barriers, and taking on some of the work involved in creating or implementing them.

It could be listening to your women colleagues when they describe what it has been like for them within your professional community and assuming the default position of believing them, rather than looking for possible ways they must have misunderstood their own experiences.

If you care about your professional community, in other words, the barriers to entry in the way of people who want badly to be part of that community because they believe fiercely in its values are your problem, too. Acting like it, and doing your part to address these barriers, is sharing the regular maintenance of the professional community you count on.

_____________
While this post is focused on barriers to full participation in professional communities that flow from gender bias, there are plenty of other types of bias that throw up similar barriers, and that could benefit from similar types of response from members of the professional communities not directly targeted by these biases.