The purpose of a funding agency (and how that should affect its response to misconduct).

In the “Ethics in Science” course I regularly teach, students spend a good bit of time honing their ethical decision-making skills by writing responses to case studies. (A recent post lays out the basic strategy we take in approaching these cases.) Over the span of the semester, my students’ responses to the cases give me pretty good data about the development of their ethical decision-making.

From time to time, they also advance claims that make me say, “Hmmm …”

Here’s one such claim, recently asserted in response to a case in which the protagonist, a scientist serving on a study section for the NIH (i.e., a committee that ranks the merit of grant proposals submitted to the NIH for funding), has to make a decision about how to respond when she detects plagiarism in a proposal:

The main purpose of the NIH is to ensure that projects with merit get funded, not to punish scientists for plagiarism.

Based on this assertion, the student argued that it wasn’t clear that the study section member had to make an official report to the NIH about the plagiarism.

The claim is interesting, though I think we would do well to unpack it a little. What, for instance, counts as a project with merit?

Is it enough that the proposed research would, if successful, contribute a new piece of knowledge to our shared body of scientific knowledge? Does the anticipated knowledge that the research would generate need to be important, and if so, according to what metric? (Clearly applicable to a pressing problem? Advancing our basic understanding of some part of our world? Surprising? Resolving an ongoing scientific debate?) Does the proposal need to convey evidence that the proposers have a good chance at being successful in conducting the research (because they have the scientific skills, the institutional resources, etc.)?

Does plagiarism count as evidence against merit here?

Perhaps we answer this question differently if we think what should be evaluated is the proposal rather than the proposer. Maybe the proposed research is well-designed, likely to work, and likely to make an important contribution to knowledge in the field — even if the proposer is judged lacking in scholarly integrity (because she seems not to know how properly to cite the words or ideas of others, or not to care to do so if she knows how).

But, one of the expectations of federal funders like the NIH is that scientists whose research is funded will write up the results and share them in the scientific literature. Among other things, this means that one of the scientific skills a proposer needs in order to see a project through to completion (including publishing the results) is the ability to write without running afoul of basic standards of honest scholarship. A paper that communicates important results while also committing plagiarism will not bring glory to the NIH for funding the researcher.

More broadly, the fact that something (like detecting or punishing plagiarism) is not a primary goal does not mean it is not a goal that might support the primary goal. To the extent that certain kinds of behavior in proposing research might mark a scientist as a bad risk to carry out research responsibly, it strikes me as entirely appropriate for funding agencies to flag those behaviors when they see them — and also to share that information with other funding agencies.

As well, to the extent that an agency like the NIH might punish a scientist for plagiarism, the kind of punishment it imposes is generally barring that scientist from eligibility for funding for a finite number of years. In other words, the punishment amounts to “You don’t get our money, and you don’t get to ask us for money again for the next N years.” To me, this punishment doesn’t look like it’s disproportionate, and it doesn’t look like imposing it on a plagiarist grant proposer diverges wildly from the main goal of ensuring that projects with merit get funded.

But, as always, I’m interested in what you all think about it.

Is it worth fighting about what’s taught in high school biology class?

It is probably no surprise to my regular readers that I get a little exercised about the science wars that play out across the U.S. in various school boards and court actions. It’s probably unavoidable, given that I think about science for a living — when you’ve got a horse in the race, you end up spending a lot of time at the track.

From time to time, though, thoughtful people ask whether some of these battles are distractions from more important issues — and, specifically, whether the question of what a community decides to include in, or omit from, its high school biology curriculum ought to command so much of our energy and emotional investment.

About seven years ago, the focus was on Dover, Pennsylvania, whose school board required that the biology curriculum include the idea of an intelligent designer (not necessarily God, but … well, not necessarily not-God) as the origin of life on Earth. Parents sued, and U.S. District Judge John E. Jones III ruled that the requirement was unconstitutional. If you missed it as it was happening, there’s a very good NOVA documentary on the court case.

As much as the outcome of this trial felt like a victory to supporters of science, some expressed concerns that the battle over the Dover biology curriculum was focusing on one kind of problem but missing many bigger problems in the process — for example, this dispatch from Dover, PA by Eyal Press, printed in The Nation in November 2005.

Press describes the Dover area as it unfolded for him in a drive-along with former Dover school board member Casey Brown:

We drove out past some cornfields, a sheep farm, a meadow and a couple of barns, along the back roads of York County, a region where between 1970 and 2000, 11 percent of the manufacturing jobs disappeared, and where in the more rural areas one in five children grows up in a low-income family (in the city of York the figure is one in three). Dover isn’t dirt poor, but neither is it wealthy. It’s the kind of place where people work hard and save what they can. Looking out at the soy, wheat and dairy farms while Brown explained that lots of older people in the area can’t afford to keep up with their mortgages and end up walking away from their homes, I was struck by the thought that this was a part of the country where, a century ago, the populist movement might have made inroads by organizing small farmers against the monopolies and trusts. These days, of course, a different sort of populism prevails, infused by religion and defining itself against “outside” forces like the ACLU.

Press also went to see what the students in Dover thought of the controversy:

What do the intended beneficiaries of the Dover school board’s actions make of the intelligent design debate? A few days before meeting Casey Brown, I drove out to Dover high school to find out. It was late in the afternoon and a couple of kids were milling about outside, waiting for rides. When I asked them what they thought of the controversy, they looked at me with blank stares that suggested I could not have posed a question of less relevance to their lives. “I think you should leave us alone,” one of them said. “Everyone just sleeps through that class anyway,” said another. I approached a third kid, who was standing alone. Nobody he knew ever talked about the issue, he told me; it was no big deal.

Press suggests that this is not just a matter of teen ennui. The schools in the area may not be up to the challenge of addressing the real needs of their students:

For the most part, though, kids in Dover seem perplexed that so much attention is being paid to what happens in a single class. It is a sentiment shared by Pat Jennings, an African-American woman who runs the Lighthouse Youth Center, an organization that offers after-school programs, recreational services and parenting and Bible study classes to kids throughout York County. The center, which is privately funded, is located in a brown-brick building in downtown York, next to a church. … A deeply religious woman who describes her faith as “very important” to her, Jennings nonetheless confessed that she hasn’t paid much attention to the evolution controversy, since she’s too busy thinking about other problems the children she serves face–drugs, gangs, lack of access to opportunity, racism. “When we are in this building there are no Latinos, blacks, Caucasian children–just children,” she explained after giving me a tour of the center. “But when I go out there”–she pointed to the street–“I’m reminded that I’m different.”

“There’s a lot of kids out there looking for something,” Jennings continued. “They have questions that need answering. They’re looking for someone to trust.” I asked her if she thought schools were providing that thing. She shook her head. “I don’t know if it’s the schools or the parents or whatever, but something is wrong. The kids I see lack discipline. They lack reading skills.” Listening to her, it was hard not to view the dust-up over intelligent design as a tragic illustration of how energy that could be poured into other problems is wasted on symbolic issues of comparatively minor significance.

Why those symbolic issues have assumed such importance in America has a lot to do with the fact that, in places like Dover, the only institutions around that seem willing to address the concerns of many people are fundamentalist churches.

I take it that Press is not primarily interested in taking scientists to task. Rather, his point seems to be that folks in Dover and places like it are much less concerned about “direction” of curriculum by fundamentalist churches because those churches are perceived as taking care of social needs that no one else — including the government — seems willing or able to address in these communities. It doesn’t seem altogether irrational to bend a little to the folks keeping things together, especially if the bending involves changing the curriculum that the high school students are going to sleep through anyway, does it?

This is a variant of the ongoing debate I have at my university about what is supposed to be going on here. As it occasionally plays out with students in my “Philosophy of Science” class, it goes roughly like this:

Me: A college education should help you understand different kinds of knowledge and reasoning. My class should help you understand what’s distinctive about scientific knowledge.

Jaded Student: Dude, I really just want to sit in the chair and do the minimum I need to do to get the three units of upper division science general education credit. Don’t bug me.

Me: You’re a college student! Learning this is good for you!

Jaded Student: I’m only in college so I can get a job that pays a decent wage. If I could do that any other way, I wouldn’t be here.

Me: How will you navigate the modern world without some understanding of science?

Jaded Student: Unless understanding science gets me a better salary it ain’t gonna happen. Learning for its own sake is for suckers.

And here’s where I want to say that, although Eyal Press is right that there are very bad things that are much larger than the details of the biology curriculum happening in communities like Dover, the fight over quality public education is central rather than merely symbolic.

Whether intelligent design is presented as legitimate and empirically supported scientific theory in the classroom is one piece of delivering quality education, but it’s not the only piece. Making sure schools have the funding they need for current books, for lab supplies, for computers and internet connections is another piece. So is making sure teachers can incorporate active learning that is not completely driven by a standardized test. So is ensuring small enough classes that students can get the interaction with their teachers and their classmates that they need to learn effectively. So is finding ways to support student learning in more basic ways — say, by making sure kids get adequate nutrition so they can focus on what they’re learning rather than on gnawing hunger, and making their trips to and from school (not to mention their walks down the school corridors) safer. Each of these issues ought to be addressed. None of them strikes me as a place where it would be legitimate for us to give up rather than to fight for what kids deserve.

Education is not a dispensable luxury. Rather, it is an essential tool for people in making reasonable choices about their own lives. Education isn’t just about teaching specific skills for the workforce; it also lays a foundation with which to learn new skills to keep up with a changing economy (or, dare I say it, with one’s changing interests). Even more, education is supposed to open up a world quite apart from the world of work. The world may need ditch diggers (or repair technicians for the ditch-digging robots), but it would be a much better world if the ditch diggers (and repair technicians) not only earned a decent wage but also had enough left over to buy a few books and to think about things they wanted to think about. (Yes, I’m going on my “everyone deserves a life of the mind” rant. It happens.)

Making a better world may require choosing one’s battles. Some would suggest that the battle over science education is a high-investment, low-payoff battle. But my own sense is that the minute we decide a certain population of students don’t really need good science education, we’ve put up the white flag.

Do we help students who are in difficult socio-economic circumstances by reducing their future prospects of succeeding in further science classes or pursuing a career in science? Do we help these students when we throw them out into the world as voters and consumers without a clear understanding of how scientific knowledge is produced and of how it is different from other kinds of knowledge? Might it not reinforce the feeling that the larger society doesn’t actually care much about you or your future if you find out that people with a voice didn’t even whimper as you were subjected to an “education” these people wouldn’t have allowed their own kids to suffer through?

One of the guiding ideals of science is that it is a project in which anyone can engage — provided they have the necessary training. Scientists try to work out accounts of what’s going on in the world that are tested against and built upon observations that human beings can make regardless of their home country, their socio-economic status, their race, their gender, their age. The scientific ideal of universality ought to make science a realm of work that is open to anyone willing to put in the work to become a scientist. A career in science could be a real avenue for class mobility.

Unless, of course, we decide that public school students in less affluent communities (or more rural communities, or red states, or whatever) aren’t really entitled to the best science education we can give them. If keeping them fed and out of gangs and passing the standardized tests in reading and writing is the extent of our obligation to these students, maybe a sound science education is a luxury. But if this is the case, we probably ought to cut out the whole “American dream” story and admit to ourselves that this place is not a perfect meritocracy. Those who have the luxury of a quality education have an advantage over those who don’t, and by golly they should own up to that. Especially when budgets are being hammered out, or when elections are coming up.

Lately, of course, as public schools are trying to weather dramatic cuts in state and local budgets (and for those far from the action it keeps getting worse despite claims that the economy is showing signs of improvement), science instruction of any kind has come to be viewed as a frill, something that could be cut in favor of more focus on reading or math (the areas most important for the high-stakes standardized tests). Or perhaps science instruction will need to be cut because budgetary pressures require a shorter school day. Or maybe science instruction will end up being delivered in ever more overcrowded classrooms, with fewer materials for hands-on learning that might give students experience with something like scientific methods for inquiry. Sure, in a perfect world we might want to provide more opportunities for active learning and guided inquiry, but, we are told, we just can’t afford it.

But what does it cost us in the long run not to make this educational investment?

The kids in Dover, and Iowa, and Kansas, whose science classes have become the ground on which grown-ups play out their anxieties about science, are part of your future and mine. So are the kids in the public schools cutting back on science instruction for lack of funds. So are the kids in classrooms where teachers convey the message that one has to be really, really smart — smarter than they are, certainly — to understand anything about science. These kids are the electorate of tomorrow, the workforce of tomorrow, the people who will have to make sensible decisions in their everyday lives as consumers of scientific information.

Even if, as 15 year olds, they don’t fully appreciate the stand being taken on their behalf, I’m not willing to back down from taking it, just the same way I’m not willing to let jaded students out of my classes without some learning taking place. Valuing other members of our society means valuing their future options to set their own course and to find meaning in their own lives.

Making good science education available is not sufficient here, but my gut says it may be necessary.

Whither mentoring?

Drugmonkey takes issue with the assertion that mentoring is dead*:

Seriously? People are complaining that mentoring in academic science sucks now compared with some (unspecified) halcyon past?

Please.

What should we say about the current state of mentoring in science, as compared to scientific mentoring in days of yore? Here are some possibilities:

Maybe there has been a decline in mentoring.

This might be because mentoring is not incentivized in the same way, or to the same degree, as publishing, grant-getting, etc. (Note, though, that some programs require evidence of successful mentoring for faculty promotion. Note also that some funding mechanisms require that the early-career scientist being funded have a mentor.)

Or it might be because no one trained the people who are expected to mentor (such as PIs) in how to mentor. (In this case, though, we might take this as a clue that the mentoring these PIs received in days of yore was not so perfect after all.)

Or, it might be that mentoring seems to PIs like a risky move given that it would require too much empathetic attachment with the trainees who are also one’s primary source of cheap labor, and whose prospects for getting a job like the PI’s are perhaps nowhere near as good as the PI (or the folks running the program) have led the trainees to believe.

Or, possibly PIs are not mentoring so well because the people they are being asked to mentor are increasingly diverse and less obviously like the PIs.

Maybe mentoring is no worse than it has ever been.

Perhaps it has always been a poorly defined part of the advisor’s job duties, not to mention one for which hardly anyone gets formal training. Moreover, the fact that it may depend on inclination and personal compatibility might make it chancier than things like joining a lab or writing a dissertation.

Maybe mentoring has actually gotten better than it used to be.

It’s even possible that increased diversity in training populations might tend to improve mentoring by forcing PIs to be more conscious of their interactions (since they recognize that the people they are mentoring are not just like them). Similarly, awareness that trainees are facing a significantly different employment landscape than the one the mentor faced might help the mentor think harder about what kind of advice could actually be useful.

Here, I think that we might also want to recognize the possibility that what has changed is not the level of mentoring being delivered, but rather the expectations the trainees have for what kind of mentoring they should receive.

Pulling back from the question of whether mentoring has gotten better, worse, or stayed the same, there are two big issues that prevent us from being able to answer that question. One is whether we can get our hands on sensible empirical data to make anything like an apples-to-apples comparison of mentoring in different times (or, for that matter, in different places). The other is whether we’re all even talking about the same thing when we’re holding forth about mentoring and its putative decline.

Let’s take the second issue first. What do we have in mind when we say that trainees should have mentors? What exactly is it that they are supposed to get out of mentoring?

Vivian Weil [1], among others, points us to the literary origin of the term mentor, and the meanings this origin suggests, in the relationship between the characters Mentor and Telemachus in Homer’s epic poem, the Odyssey. Telemachus was the son of Odysseus; his father was off fighting the Trojan war, and his mother was busy fending off suitors (which involved a lot of weaving and unweaving), so the kid needed a parental surrogate to help him find his way through a confusing and sometimes dangerous world. Mentor took up that role.**

At the heart of mentoring, Weil argues, is the same kind of commitment to protect the interests of someone just entering the world of your discipline, and to help the mentee to develop skills sufficient to take care of himself or herself in this world:

All the activities of mentoring, but especially the nurturing activities, require interacting with those mentored, and so to be a mentor is to be involved in a relationship. The relationships are informal, fully voluntary for both members, but at least initially and for some time thereafter, characterized by a great disparity of experience and wisdom. … In situations where neophytes or apprentices are learning to “play the game”, mentors act on behalf of the interests of these less experienced, more vulnerable parties. (Weil, 473)

In the world of academic science, the guidance a mentor might offer would then be focused on the particular challenges the mentee is likely to face in graduate school, the period in which one is expected to make the transition from being a learner of scientific knowledge to being a maker of new knowledge:

On the traditional model, the mentoring relationship is usually thought of as gradual, evolving, long-term, and involving personal closeness. Conveying technical understanding and skills and encouraging investigative efforts, the mentor helps the mentee move through the graduate program, providing feedback needed for reaching milestones in a timely fashion. Mentors interpret the culture of the discipline for their mentees, and help them identify good practices amid the complexities of the research environment. (Weil, 474)

A mentor, in other words, is a competent grown-up member of the community in which the mentee is striving to become a grown-up. The mentor understands how things work, including what kinds of social interactions are central to conducting research, critically evaluating knowledge claims, and coordinating the efforts of members of the scientific community more generally.

Weil emphasizes that the role of mentor, understood in this way, is not perfectly congruent with the role of the advisor:

While mentors advise, and some of their other activities overlap with or supplement those of an advisor, mentors should not be confused with advisors. Advising is a structured role in graduate education. Advisors are expected to perform more formal and technical functions, such as providing information about the program and degree requirements and periodic monitoring of advisees’ progress. The advisor may also have another structured role, that of research (dissertation) director, for advisors are often principal investigators or laboratory directors for projects on which advisees are working. In the role of research director, they “may help students formulate research projects and instruct them in technical aspects of their work such as design, methodology, and the use of instrumentation.” Students sometimes refer to the research or laboratory director as “boss”, conveying an employer/employee relationship rather than a mentor/mentee relationship. It is easy to see that good advising can become mentoring and, not surprisingly, advisors sometimes become mentors. Nevertheless, it is important to distinguish the institutionalized role of advisor from the informal activities of a mentor. (Weil, 474)

Mentoring can happen in an advising relationship, but the evaluation an advisor needs to do of the advisee may be in tension with the kind of support and encouragement a mentor should give. The advisor might have to sideline an advisee in the interests of the larger research project; the mentor would try to prioritize the mentee’s interests.

Add to this that the mentoring relationship is voluntary to a greater degree than the advising relationship (where you have to be someone’s advisee to get through), and the interaction is personal rather than strictly professional.

Among other things, this suggests that good advising is not necessarily going to achieve the desired goal of providing good mentoring. It also suggests that it’s a good idea to seek out multiple mentors (e.g., so in situations where an advisor cannot be a mentor due to the conflicting duties of the advisor, another mentor without these conflicts can pick up the slack).

So far, we have a description of the spirit of the relationship between mentor and mentee, and a rough idea of how that relationship might advance the welfare of the mentee, but it’s not clear that this is precise enough that we could use it to assess mentoring “in the wild”.

And surely, if we want to do more than just argue based on subjective anecdata about how mentoring for today’s scientific trainees compares to the good old days, we need to find some way to be more precise about the mentoring we have in mind, and to measure whether it’s happening. (Absent a time machine, or some stack of data collected on mentoring in the halcyon past, we probably have to acknowledge that we just don’t know how past mentoring would have measured up.)

A faculty team from the School of Nursing at Johns Hopkins University, led by Ronald A. Berk [2], grappled with the issue of how to measure whether effective mentoring was going on. Here, the mentoring relationships in question were between more junior and more senior faculty members (rather than between graduate students and faculty members), and the impetus for developing a reliable way to measure mentoring effectiveness was the fact that evidence of successful mentoring activities was a criterion for faculty promotion.

Finding no consistent definition of mentoring in the literature on medical faculty mentoring programs, Berk et al. put forward this one:

A mentoring relationship is one that may vary along a continuum from informal/short-term to formal/long-term in which faculty with useful experience, knowledge, skills, and/or wisdom offers advice, information, guidance, support, or opportunity to another faculty member or student for that individual’s professional development. (Note: This is a voluntary relationship initiated by the mentee.) (Berk et al., 67)

Then, they spelled out central responsibilities within this relationship:

[F]aculty must commit to certain concrete responsibilities for which he or she will be held accountable by the mentees. Those concrete responsibilities are:

  • Commits to mentoring
  • Provides resources, experts, and source materials in the field
  • Offers guidance and direction regarding professional issues
  • Encourages mentee’s ideas and work
  • Provides constructive and useful critiques of the mentee’s work
  • Challenges the mentee to expand his or her abilities
  • Provides timely, clear, and comprehensive feedback to mentee’s questions
  • Respects mentee’s uniqueness and his or her contributions
  • Appropriately acknowledges contributions of mentee
  • Shares success and benefits of the products and activities with mentee

(Berk et al., 67)

These were then used to construct a “Mentorship Effectiveness Scale” that mentees could use to share their perceptions of how well their mentors did on each of these responsibilities.

Here, one might worry that there is a divergence between how effective a mentee thinks the mentor is in each of these areas and how effective the mentor actually is. Still, tracking the perceptions of mentees with the instrument developed by Berk et al. provides some kind of empirical data. In discussions about whether mentoring is getting better or worse, such data might be useful.
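For concreteness, here is a minimal sketch of how one might tally a single mentee’s responses on an instrument of this kind. To be clear, this is not Berk et al.’s actual scoring procedure: the item wording is paraphrased from the responsibilities listed above, and the 1-to-5 agreement rating is my assumption.

```python
# Toy tally of one mentee's ratings on a mentorship-effectiveness-style scale.
# Assumption: each responsibility is rated from 1 (strongly disagree) to
# 5 (strongly agree); Berk et al.'s actual instrument may differ.

RESPONSIBILITIES = [
    "commits to mentoring",
    "provides resources, experts, and source materials",
    "offers guidance on professional issues",
    "encourages the mentee's ideas and work",
    "gives constructive critiques of the mentee's work",
    "challenges the mentee to expand his or her abilities",
    "gives timely, clear, comprehensive feedback",
    "respects the mentee's uniqueness and contributions",
    "acknowledges the mentee's contributions appropriately",
    "shares success and benefits with the mentee",
]

def summarize(ratings):
    """Report the mean rating and flag the weakest area."""
    if len(ratings) != len(RESPONSIBILITIES):
        raise ValueError("one rating per responsibility")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must fall on the assumed 1-5 scale")
    mean = sum(ratings) / len(ratings)
    low_score, low_item = min(zip(ratings, RESPONSIBILITIES))
    return mean, low_score, low_item

# One mentee's (made-up) responses:
mean, low_score, low_item = summarize([5, 4, 4, 5, 3, 4, 4, 5, 2, 3])
print(f"overall mean: {mean:.1f}/5; weakest area: {low_item} ({low_score})")
```

Even a toy like this makes a worry I’ll raise below concrete: once you are computing a mean over a fixed list of items, the “mentorship” being assessed is whatever those items happen to capture.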

And, if this data isn’t enough, it should be possible to work out strategies to get the data you want: Survey PIs to see what kind of mentoring they want to provide and how this compares to what kind of mentoring they feel able to provide. (If there are gaps here, follow-up questions might explore the perceived impediments to delivering certain elements of mentoring.) Survey the people running graduate programs to see what kind of mentoring they think they are (or should be) providing and what kind of mechanisms they have in place to ensure that if it doesn’t happen informally between the student and the PI, it’s happening somewhere.

To the extent that successful mentoring is already linked to tangible career rewards in some places, being able to make a reasonable assessment of it seems appropriate.

It’s possible that making it a standard thing to evaluate mentoring and to tie it to tangible career rewards (or penalties, if one does an irredeemably bad job of it) might help focus attention on mentoring as an important thing for grown-up members of the scientific community to do. This might also lead to more effort to help people learn how to mentor effectively and to offer support and remediation for people whose mentoring skills are not up to snuff.

But, I have a worry (not a huge one, but not nanoscale either). Evaluation of effective mentoring seems to rely on breaking out particular things the mentor does for the mentee, or particular kinds of interactions that take place between the two. In other words, the assessment tracks measurable proxies for a more complicated relationship.

That’s fine, but there’s a risk that a standardized assessment might end up reducing the “mentorship” that mentors offer, and that mentees seek, to these proxies. Were this to happen, we might lose sight of the broader, richer, harder-to-evaluate thing that mentoring can be — an entanglement of interests, a transmission of wisdom, and of difficult questions, and of hopes, and of fears, in what boils down to a personal relationship based on a certain kind of care.

The thing we want the mentorship relationship to be is not something that you could force two people to be in — any more than we could force two people to be in love. We feel the outcomes are important, but we cannot compel them.

And obviously, the assessable outcomes that serve as proxies for successful mentoring are better than nothing. Still, it’s not unreasonable for us to hope for more as mentees, nor to try to offer more as mentors.

After all, having someone on the inside of the world of which you are trying to become a part, someone who knows the way and can lead you through, and someone who believes in you and your potential even a little more than you believe in it yourself, can make all the difference.

_____
*Drugmonkey must know that my “Ethics in Science” class will be discussing mentoring this coming week, or else he’s just looking for ways to distract me from grading.

**As it happened, Mentor was actually Athena, the goddess of wisdom and war, in disguise. Make of that what you will.

[1] Weil, V. (2001) Mentoring: Some Ethical Considerations. Science and Engineering Ethics. 7 (4): 471-482.

[2] Berk, R. A., Berg, J., Mortimer, R., Walton-Moss, B., and Yeo, T. P. (2005) Measuring the Effectiveness of Faculty Mentoring Relationships. Academic Medicine. 80: 66-71.

Who matters (or should) when scientists engage in ethical decision-making?

One of the courses I teach regularly at my university is “Ethics in Science,” a course that explores (among other things) what’s involved in being a good scientist in one’s interactions with the phenomena about which one is building knowledge, in one’s interactions with other scientists, and in one’s interactions with the rest of the world.

Some bits of this are pretty straightforward (e.g., don’t make up data out of whole cloth, don’t smash your competitor’s lab apparatus, don’t use your mad science skillz to engage in a campaign of super-villainy that brings Gotham City to its knees). But, there are other instances where what a scientist should or should not do is less straightforward. This is why we spend significant time and effort talking about — and practicing — ethical decision-making (working with a strategy drawn from Muriel J. Bebeau, “Developing a Well-Reasoned Response to a Moral Problem in Scientific Research”). Here’s how I described the basic approach in a post of yore:

Ethical decision-making involves more than having the right gut-feeling and acting on it. Rather, when done right, it involves moving past your gut-feeling to see who else has a stake in what you do (or don’t do); what consequences, good or bad, might flow from the various courses of action available to you; to whom you have obligations that will be satisfied or ignored by your action; and how the relevant obligations and interests pull you in different directions as you try to make the best decision. Sometimes it’s helpful to think of the competing obligations and interests as vectors, since they come with both directions and magnitudes — which is to say, in some cases where they may be pulling you in opposite directions, it’s still obvious which way you should go because the magnitude of one of the obligations is so much bigger than of the others.
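To make that vector talk concrete, here is a toy sketch of the picture. It is my illustration rather than part of the course materials, and both the considerations and their magnitudes are invented.

```python
# Toy version of the "obligations as vectors" metaphor: each consideration
# pulls toward acting (+) or against acting (-) with some magnitude.
# The considerations and weights below are invented for illustration only.

considerations = {
    "duty to keep the scientific record honest": +0.9,
    "fairness to those who played by the rules": +0.7,
    "reluctance to harm a colleague's career": -0.4,
    "hassle of making an official report": -0.1,
}

net = sum(considerations.values())
decision = "act" if net > 0 else "hold off"
print(f"net pull: {net:+.2f} -> {decision}")
```

The point of the metaphor survives the toy: even when the pulls conflict, the net direction can be obvious because one obligation’s magnitude dwarfs the rest.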

We practice this basic strategy by using it to look at a lot of case studies. Basically, the cases describe a situation where the protagonist is trying to figure out what to do, giving you a bunch of details that seem salient to the protagonist and leaving some interesting gaps where the protagonist maybe doesn’t have some crucial information, or hasn’t looked for it, or hasn’t thought to look for it. Then we look at the interested parties, the potential consequences, the protagonist’s obligations, and the big conflicts between obligations and interests to try to work out what we think the protagonist should do.

Recently, one of my students objected to how we approach these cases.

Specifically, the student argued that we should radically restrict our consideration of interested parties — probably to no more than the actual people identified by name in the case study. Considering the interests of a university department, or of a federal funder, or of the scientific community, the student asserted, made the protagonist responsible to so many entities that the explicit information in the case study was not sufficient to identify the correct course of action.*

And, the student argued, one interested party that it was utterly inappropriate for a scientist to include in thinking through an ethical decision is the public.

Of course, I reminded the student of some reasons you might think the public would have an interest in what scientists decide to do. Members of the public share a world with scientists, and scientific discoveries and scientific activities can have impacts on things like our environment, the safety of our buildings, what our health care providers know and what treatments they are able to offer us, and so forth. Moreover, at least in the U.S., public funds play an essential role in supporting both scientific research and the training of new scientists (even at private universities) — which means that it’s hard to find an ethical decision-making situation in a scientific training environment that is completely isolated from something the public paid for.

My student was not moved by the suggestion that financial involvement should buy the public any special consideration as a scientist was trying to decide the right thing to do.

Indeed, central to the student’s argument was the idea that the interests of the public, whether with respect to science or anything else, are just too heterogeneous. Members of the public want lots of different things. Taking these interests into account could only be a distraction.

As well, the student asserted, too small a proportion of the public actually cares about what scientists are up to for the public, even if it were more homogeneous, to deserve consideration from scientists grappling with their own ethical quandaries. Even worse, the student ventured, those who do care what scientists are up to are not necessarily well-informed.

I’m not unsympathetic to the objection to the extreme case here: if a scientist felt required to somehow take into account the actual particular interests of each individual member of the public, that would make it well nigh impossible to actually make an ethical decision without the use of modeling methods and supercomputers (and even then, maybe not). However, it strikes me that it shouldn’t be totally impossible to anticipate some reasonable range of interests non-scientists have that might be impacted by the consequences of a scientist’s decision in various ways. Which is to say, the lack of total fine-grained information about the public, or of complete predictability of the public’s reactions, would surely make it more challenging to make optimal ethical decisions, but these challenges don’t seem to warrant ignoring the public altogether just so the problem you’re trying to solve becomes more tractable.

In any case, I figure that there’s a good chance some members of the public** may be reading this post. To you, I pose the following questions:

  1. Do you feel like you have an interest in what science and scientists are up to? If so, how would you describe that interest? If not, why not?
  2. Do you think scientists should treat “the public” as an interested party when they try to make ethical decisions? Why or why not?
  3. If you think scientists should treat “the public” as an interested party when they try to make ethical decisions, what should scientists be doing to get an accurate read on the public’s interests?
  4. And, for the sake of symmetry, do you think members of the public ought to take account of the interests of science or scientists when they try to make ethical decisions? Why or why not?

If, for some reason, you feel like chiming in on these questions in the comments would expose you to unwanted blowback, you can also email me your responses (dr dot freeride at gmail dot com) for me to anonymize and post on your behalf.

Thanks in advance for sharing your view on this!

_____
*Here I should note that I view the ambiguities within the case studies as a feature, not a bug. In real life, we have to make good ethical decisions despite uncertainties about what consequences will actually follow our actions, for example. Those are the breaks.

**Officially, scientists are also members of the public — even if you’re stuck in the lab most of the time!

What does a Ph.D. in chemistry get you?

A few weeks back, Chemjobber had an interesting post looking at the pros and cons of a Ph.D. program in chemistry at a time when job prospects for Ph.D. chemists are grim. The post was itself a response to a piece in the Chronicle of Higher Education by a neuroscience graduate student named Jon Bardin, which advocated strongly that senior grad students look to non-traditional career pathways in order to have both their Ph.D.s and permanent jobs that might sustain them. Bardin also suggested that graduate students “learn to approach their education as a series of learning opportunities rather than a five-year-long job interview,” recognizing the relative luxury of having a “safe environment” in which to learn skills that are reasonably portable and useful in a wide range of career trajectories — all while taking home a salary (albeit a graduate-stipend sized one).

Chemjobber replied:

Here’s what I think Mr. Bardin’s essay elides: cost. His Ph.D. education (and mine) were paid for by the US taxpayer. Is this the best deal that the taxpayer can get? As I’ve said in the past, I think society gets a pretty good deal: they get 5+ years of cheap labor in science, (hopefully) contributions to greater knowledge and, at the end of the process, they get a trained scientist. Usually, that trained scientist can go on to generate new innovations in their independent career in industry or academia. It’s long been my supposition that the latter will pay (directly and indirectly) for the former. If that’s not the case, is this a bargain that society should continue to support? 

Mr. Bardin also shows a great deal of insouciance about the costs to himself: what else could he have done, if he hadn’t gone to graduate school? When we talk about the costs of getting a Ph.D., I believe that we don’t talk enough about the sheer length of time (5+ years) and what other training might have been taken during that time. Opportunity costs matter! An apprenticeship at a microbrewery (likely at a similar (if not higher) pay scale as a graduate student) or a 1 or 2 year teaching certification process easily fits in the half-decade that most of us seem to spend in graduate school. Are the communications skills and the problem-solving skills that he gained worth the time and the (opportunity) cost? Could he have obtained those skills somewhere else for a lower cost? 
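Chemjobber’s opportunity-cost point lends itself to a back-of-the-envelope comparison. Here is a minimal sketch; every figure in it is invented for illustration rather than taken from Chemjobber’s post or from actual stipend and salary data.

```python
# Back-of-the-envelope opportunity cost of a chemistry Ph.D.
# All numbers are invented for illustration; plug in your own.

YEARS = 5.5                  # assumed time to degree
STIPEND = 26_000             # assumed annual graduate stipend
ALT_SALARY = 40_000          # assumed salary on an alternative path
ALT_TRAINING_YEARS = 1.5     # e.g., a teaching-certification detour
ALT_TRAINING_COST = 10_000   # assumed tuition for that detour

phd_earnings = YEARS * STIPEND
alt_earnings = (YEARS - ALT_TRAINING_YEARS) * ALT_SALARY - ALT_TRAINING_COST
print(f"foregone earnings over {YEARS} years: "
      f"${alt_earnings - phd_earnings:,.0f}")
```

The particular numbers matter less than the exercise: the comparison can be made explicit before anyone signs up for the half-decade.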

Chemjobber also notes that while a Ph.D. in chemistry may provide tools for a range of careers, actually having a Ph.D. in chemistry on your resume is not necessarily advantageous in securing a job in one of those careers.

As you might imagine this is an issue to which I have given some thought. After all, I have a Ph.D. in chemistry and am not currently employed in a job that is at all traditional for a Ph.D. in chemistry. However, given that it has been nearly two decades since I last dipped a toe into the job market for chemistry Ph.D.s, my observations should be taken with a large grain of sodium chloride.

First off, how should one think of a Ph.D. program in chemistry? There are many reasons you might value a Ph.D. program. A Ph.D. program may be something you value primarily because it prepares you for a career of a certain sort. It may also be something you value for what it teaches you, whether about your own fortitude in facing challenges, or about how the knowledge is built. Indeed, it is possible — maybe even common — to value your Ph.D. program for more than one of these reasons at a time. And some weeks, you may value it primarily because it seemed like the path of least resistance compared to landing a “real job” right out of college.

I certainly don’t think it’s the case that valuing one of these aspects of a Ph.D. program over the others is right or wrong. But …

Economic forces in the world beyond your graduate program might be such that there aren’t as many jobs suited to your Ph.D. chemist skills as there are Ph.D. chemists competing for those jobs. Among other things, this means that earning a Ph.D. in chemistry does not guarantee you a job in chemistry on the other end.

To which, as the proud holder of a Ph.D. in philosophy, I am tempted to respond: join the club! Indeed, I daresay that recent college graduates in many, many majors have found themselves in a world where a bachelor’s degree guarantees little except that the student loans will still need to be repaid.

To be fair, my sense is that the mismatch between supply of Ph.D. chemists and demand for Ph.D. chemists in the workplace is not new. I have a vivid memory of being an undergraduate chemistry major, circa 1988 or 1989, and being told that the world needed more Ph.D. chemists. I have an equally vivid memory of being a first-year chemistry graduate student, in early 1990, and picking up a copy of Chemical & Engineering News in which I read that something like 30% too many Ph.D. chemists were being produced given the number of available jobs for Ph.D. chemists. Had the memo not reached my undergraduate chemistry professors? Or had I not understood the business model inherent in the production of new chemists?

Here, I’m not interested in putting forward a conspiracy theory about how this situation came to be. My point is that even back in the last millennium, those in the know had no reason to believe that making it through a Ph.D. program in chemistry would guarantee your employment as a chemist.

So, what should we say about this situation?

One response to this situation might be to throttle production of Ph.D. chemists.

This might result in a landscape where there is a better chance of getting a Ph.D. chemist job with your Ph.D. in chemistry. But, the market could shift suddenly (up or down). Were this to happen, it would take time to adjust the Ph.D. throughput in response. As well, current PIs would have to adjust to having fewer graduate students to crank out their data. Instead, they might have to pay more technicians and postdocs. Indeed, the number of available postdocs would likely drop once the number of Ph.D.s being produced more closely matched the number of permanent jobs for holders of those Ph.D.s.

Needless to say, this might be a move that the current generation of chemists with permanent positions at the research institutions that train new chemists would find unduly burdensome.

We might also worry about whether the thinning of the herd of chemists ought to happen on the basis of bachelor’s-level training. Being a successful chemistry major tends to reflect your ability to learn scientific knowledge, but it’s not clear to me that this is a great predictor of how good you would be at the project of making new scientific knowledge.

In fact, the thinning of the herd wherever it happens seems to put a weird spin on the process of graduate-level education. Education, after all, tends to aim for something bigger, deeper, and broader than a particular set of job skills. This is not to say that developing skills is not an important part of an education — it is! But in addition to these skills, one might want an understanding of the field in which one is being educated and its workings. I think this is connected to how being a chemist becomes linked to our identity, a matter of who we are rather than just of what we do.

Looked at this way, we might actually wonder about who could be harmed by throttling Ph.D. program enrollments.

Shouldn’t someone who’s up for the challenge have that experience open to her, even if there’s no guarantee of a job at the other end? As long as people have accurate information with which to form reasonable expectations about their employment prospects, do we want to be paternalistic and tell them they can’t?

(There are limits here, of course. There are not unlimited resources for the training of Ph.D. chemists, nor unlimited slots in graduate programs, nor in the academic labs where graduate students might participate meaningfully in research. The point is that maybe these limits are the ones that ought to determine how many people who want to learn how to be chemists get to do that.)

Believe it or not, we had a similar conversation in a graduate seminar filled with first and second year students in my philosophy Ph.D. program. Even philosophy graduate students have an interest in someday finding stable employment, the better to eat regularly and live indoors. Yet my sense was that even the best graduate students in my philosophy Ph.D. program recognized that employment in a job tailor-made for a philosophy Ph.D. was a chancy thing. Certainly, there were opportunity costs to being there. Certainly, there was a chance that one might end up applying for jobs where having a Ph.D. would be viewed as a disadvantage. But the graduate students in my philosophy program had, upon weighing the risks, decided to take the gamble.

How exactly are chemistry graduate students presumed to be different here? Maybe they are placing their bets at a table with higher payoffs, and where the game is more likely to pay off in the first place. But this is still not a situation in which one should expect that everyone is always going to win. Sometimes the house will win instead.

(Who’s the house in this metaphor? Is it the PIs who depend on cheap grad-student labor? Universities with hordes of pre-meds who need chemistry TAs and lab instructors? The public that gets a screaming deal on knowledge production when you break it down in terms of price per publishable unit? A public that includes somewhat more members with a clearer idea of how scientific knowledge is built? Specifying the identity of the house is left as an exercise for the reader.)

Maybe the relevant difference between taking a gamble on a philosophy Ph.D. and taking a gamble on a chemistry Ph.D. is that the players in the latter have, purposely or accidentally, not been given accurate information about the odds of the game.

I think it’s fair for chemistry graduate students to be angry and cynical about having been misled as far as likely prospects for employment. But given that it’s been going on for at least a couple decades (and maybe more), how the hell is it that people in Ph.D. programs haven’t already figured out the score? Is it that they expect that they will be the ones awesome enough to get those scarce jobs? Have they really not thought far enough ahead to seek information (maybe even from a disinterested source) about how plausible their life plans are before they turn up at grad school? Could it be that they have decided that they want to be chemists when they grow up without doing sensible things like reading the blogs of chemists at various stages of careers and training?

Presumably, prospective chemistry grad students might want to get ahold of the relevant facts and take account of them in their decision-making. Why this isn’t happening is somewhat mysterious to me, but for those who regard their Ph.D. training in chemistry as a means to a career end, it’s absolutely crucial — and trusting the people who stand to benefit from your labors as a graduate student to hook you up with those facts seems not to be the best strategy ever.

And, as I noted in comments on Chemjobber’s post, the whole discussion suggests to me that the very best reason to pursue a Ph.D. in chemistry is because you want to learn what it is like to build new knowledge in chemistry, in an academic setting. Since being plugged into a particular kind of career (or even job) on the other end is a crap-shoot, if you don’t want to learn about this knowledge-building process — and want it enough to put up with long hours, crummy pay, unrewarding piles of grading, and the like — then possibly a Ph.D. program is not the best way to spend 5+ years of your life.

Who profits from killing Pluto?

You may recall (as I and my offspring do) the controversy about six years ago around the demotion of Pluto. There seemed to me to be reasonable arguments on both sides, and indeed, my household included pro-Pluto partisans and partisans for a new, clear definition of “planet” that might end up leaving Pluto on the ex-planet side of the line.

At the time, Neil deGrasse Tyson was probably the most recognizable advocate of the anti-Pluto position, and since then he has not been shy about reaffirming his position. I had taken this vocal (even gleeful) advocacy as just an instance of a scientist working to do effective public outreach, but recently, I’ve been made aware of reasons to believe that there may be more going on with Neil deGrasse Tyson here.

You may be familiar with the phenomenon of offshore banking, which involves depositors stashing their assets in bank accounts in countries with much lower taxes than the jurisdictions in which the depositors actually reside. Indeed, residents of the U.S. have occasionally used offshore bank accounts (and bank secrecy policies) to hide their money from the prying (and tax-assessing) eyes of the Internal Revenue Service.

Officially, those who are subject to U.S. income tax are required to declare any offshore bank accounts they might have. However, since the offshore banks themselves have generally not been required by law to report interest income on their accounts to the U.S. tax authorities, lots of account holders have kept mum about it, too.

Recently, however, the U.S. government has been more vigorous in its efforts to track down this taxable offshore income, and has put more pressure on the offshore bankers not to aid their depositors in hiding assets. International pressure seems to be pushing banks in the direction of more transparency and accountability.

What does any of this have to do with Neil deGrasse Tyson, or with Pluto?

You may recall, back when the International Astronomical Union (IAU) was formally considering the question of Pluto’s status, that Neil deGrasse Tyson was a vocal proponent of demoting Pluto from planethood. Despite his position at the Hayden Planetarium, a position in which he had rather more contact with school children and other interested non-scientists making heartfelt arguments in support of Pluto’s planethood, Neil deGrasse Tyson was utterly unmoved.

Steely in his determination to get Pluto reclassified. And forward looking. Add to that remarkably well-dressed (seriously, have you seen his vests?) for a Ph.D. astrophysicist who has spent most of his career working for museums.

The only way it makes sense is if Neil deGrasse Tyson has been stashing money someplace it can earn interest without being taxed. Given his connections, this can only mean off-world banking.

But again, what does this have to do with Pluto?

Pluto killer though he may be, Neil deGrasse Tyson is law abiding. There have so far been no legal requirements to report interest income earned in banks on other planets. But Neil deGrasse Tyson, as a forward looking kind of guy, undoubtedly recognizes that regulators are rapidly moving in the direction of requiring those subject to U.S. income tax to declare their bank accounts on other planets.

The regulators, however, seem uninterested in making any such requirements for those with assets in off-world banks that are not on planets. Which means that while Pluto is less than 1/5 the mass of Earth’s Moon, as a non-planet, it will remain a convenient place for Neil deGrasse Tyson to benefit from compound interest without increasing his tax liability.

It kind of casts his stance on Pluto in a different light, doesn’t it?

[More details in this story from the Associated Press.]

Reading “White Coat, Black Hat” and discovering that ethicists might be black hats.

During one of my trips this spring, I had the opportunity to read Carl Elliott’s book White Coat, Black Hat: Adventures on the Dark Side of Medicine. It is not always the case that reading I do for my job also works as riveting reading for air travel, but this book holds its own against any of the appealing options at the airport bookstore. (I actually pounded through the entire thing before cracking open the other book I had with me, The Girl Who Kicked the Hornet’s Nest, in case you were wondering.)

Elliott takes up a number of topics of importance in our current understanding of biomedical research and how to do it ethically. He considers the role of human subjects for hire, of ghostwriters in the production of medical papers, of physicians who act as consultants and spokespeople for pharmaceutical companies, and of salespeople for the pharmaceutical companies who interact with scientists and physicians. There are lots of important issues here, engagingly presented and followed to some provocative conclusions. But the chapter of the book that gave me the most to think about, perhaps not surprisingly, is the chapter called “The Ethicists”.

You might think, since Elliott is writing a book that points out lots of ways that biomedical research could be more ethical, that he would present a picture where ethicists rush in and solve the problems created by unwitting research scientists, well-meaning physicians, and profit-driven pharmaceutical companies. However, Elliott presents instead reasons to worry that professional ethicists will contribute to the ethical tangles of the biomedical world rather than sorting them out. Indeed, Elliott identifies what seem to be special vulnerabilities in the psyche of the professional ethicist. For example, he writes, “There is no better way to enlist bioethicists in the cause of consumer capitalism than to convince them they are working for social justice.” (139-140) Who, after all, could be against social justice? Yet, when efforts on behalf of social justice take the form of debates on television news programs about fair access to new pharmaceuticals, the big result seems to be free advertising for the companies making those pharmaceuticals. Should bioethicists be accountable for these unforeseen results? This chapter suggests that careful bioethicists ought to foresee them, and to take responsibility.

There is an irony here: professionals who see part of their job as pointing out conflicts of interest to others may be placing themselves right in the path of equally overwhelming conflicts of interest. Some of these have to do with the practical problem of how to fund their professional work. Universities these days are struggling with reduced budgets, which means they are encouraging their faculty to be more entrepreneurial — including by cultivating relationships that might lead to donations from the private sector. To the extent that bioethics is seen as relevant to pharmaceutical development, pharmaceutical companies, which have deeper pockets than do universities, are seen as attractive targets for fundraising.

As Elliott notes, bioethicists have seen a great deal of success in this endeavor. He writes,

For the last three decades bioethics has been vigorously generating new centers, new commissions, new journals, and new graduate programs, not to mention a highly politicized role in American public life. In the same way that sociologists saw their fortunes climb during the 1960s as the public eye turned towards social issues like poverty, crime, and education, bioethics started to ascend when medical care and scientific research began generating social questions of their own. As the field grows more prominent, bioethicists are considering a funding model familiar to the realm of business ethics, one that embraces partnership and collaboration with corporate sponsors as long as outright conflict of interest can be managed. …

Corporate funding presents a public relations challenge, of course. It looks unseemly for an ethicist to share in the profits of arms dealers, industrial polluters, or multinationals that exploit the developing world. Credibility is also a concern. Bioethicists teach about pharmaceutical company issues in university classrooms, write about those issues in books and articles, and comment on them in the press. Many bioethicists evaluate industry policies and practices for professional boards, government bodies, and research ethics committees. To critics, this raises legitimate questions about the field of bioethics itself. Where does the authority of ethicists come from, and why are corporations so willing to fund them? (140-141)

That comparison of bioethics to business, by the way, is the kind of thing that gets my attention; one of the spaces frequently assigned for “Business and Professional Ethics” courses at my university is the Arthur Andersen Conference Room. Perhaps this is a permanent teachable moment, but I can’t help worrying that the real lesson has to do with the vulnerability of the idealistic academic partner in the academic-corporate partnership.

Where does the authority of ethicists come from? I have scrawled in the margin something about appropriate academic credentials and good arguments. But connect this first question to Elliott’s second question: why are corporations so willing to fund them? Here, we need to consider the possibility that their credibility and professional status are, in a pragmatic sense, directly linked to corporations paying bioethicists for their labors. What, exactly, are those corporations paying for?

Let’s put that last question aside for a moment.

Arguably, the ethicist has some skills and training that render her a potentially useful partner for people trying to work out how to be ethical in the world. One hopes what she says would be informed by some amount of ethical education, serious scholarship, and decision-making strategies grounded in a real academic discipline.

Elliott notes that “[s]ome scholars have recoiled, emphatically rejecting the notion that their voices should count more than others’ on ethical affairs.” (142) Here, I agree if the claim is, in essence, that the interests of the bioethicists are no more important than others’. Surely the perspectives of others who are not ethicists matter, but one might reasonably expect that ethicists can add value, drawing on their experience in taking those interests, and the interests of other stakeholders, into account to make reasonable ethical decisions.

Maybe, though, those of us who do ethics for a living just tell ourselves we are engaged in a more or less objective decision-making process. Maybe the job we are doing is less like accounting and more like interpreting pictures in inkblots. As Elliott writes,

But ethical analysis does not really resemble a financial audit. If a company is cooking its books and the accountant closes his eyes to this fact in his audit, the accountant’s wrongdoing can be reliably detected and verified by outside monitors. It is not so easy with an ethics consultant. Ethicists have widely divergent views. They come from different religious standpoints, use different theoretical frameworks, and profess different political philosophies. They are also free to change their minds at any point. How do you tell the difference between an ethics consultant who has changed her mind for legitimate reasons and one who has changed her mind for money? (144)

This impression of the fundamental squishiness of the ethicist’s stock in trade seems to be reinforced in a quote Elliott takes from biologist-entrepreneur Michael West: “In the field of ethics, there are no ground rules, so it’s just one ethicist’s opinion versus another ethicist’s opinion. You’re not getting whether someone is right or wrong, because it all depends on who you pick.” (144-145)

Here, it will probably not surprise you to learn that I think these claims are only true when the ethicists are doing it wrong.

What, then, would be involved in doing it right? To start with, what one should ask from an ethicist should be more than just an opinion. One should also ask for an argument to support that opinion, an argument that makes reference to important details like interested parties, potential consequences of the various options for action on the table, the obligations the party making the decision has to the stakeholders, and so forth — not to mention consideration of possible objections to this argument. It is fair, moreover, to ask the ethicist whether the recommended plan of action is compatible with more than one ethical theory — or, for example, whether it only works in a world we share solely with other Kantians.

This would not make auditing the ethical books as easy as auditing the financial statements, but I think it would demonstrate something like rigor and lend itself to meaningful inspection by others. Along the same lines, I think it would be completely reasonable, in the case that an ethicist has gone on record as changing her mind, to ask for the argument that brought her from one position to the other. It would also be fair to ask, what argument or evidence might bring you back again?

Of course, all of this assumes an ethicist arguing in good faith. It’s not clear that what I’ve described as crucial features of sound ethical reasoning couldn’t be mimicked by someone who wanted to appear to be a good ethicist without going to the trouble of actually being one.

And if there’s someone offering you money — maybe a lot of money — for something that looks like good ethical reasoning, is there a chance you could turn from an ethicist arguing in good faith to one who just looks like she is, perhaps without even being aware of it herself?

Elliott pushes us to examine the dangers that may lurk when private-sector interests are willing to put up money for your ethical insight. Have they made a point of asking for your take primarily because your paper trail of prior ethical argumentation lines up really well with what they would like an ethicist to say to give them cover to do what they already want to do — not because it’s ethical, necessarily, but because it’s profitable or otherwise convenient? You may think your ethical stances are stable because they are well-reasoned (or maybe even right). But how can you be sure that the stability of your stance is not influenced by the size of your consultation paycheck? How can you tell that you have actually been solicited for an honest ethical assessment — one that, potentially, could be at odds with what the corporation soliciting it wants to hear? If you tell that corporation that a certain course of action would be unethical, do you have any power to prevent them from pursuing that course of action? Do you have an incentive to tell the corporation what it wants to hear, not just to pick up your consulting fee, but to keep a seat at the table where you might hope to have a chance of nudging its behavior in a more ethical direction, even if only incrementally?

None of these are easy questions to answer objectively if you’re the ethicist in the scenario.

Indeed, even if money were not part of the equation, the very fact that people at the corporations — or researchers, or physicians, or whoever it is seeking the ethicists’ expertise — are reaching out to ethicists and identifying them as experts with something worthwhile to contribute might itself make it harder for the ethicists to deliver what they think they should. As Elliott argues, the personal relationships may end up creating conflicts of interest that are at least as hard to manage as those that occur when money changes hands. These people asking for our ethical input seem like good folks, motivated at least in part by goals (like helping people with disease) that are noble. We want them to succeed. And we kind of dig that they seem interested in what we have to say. Because we end up liking them as people, we may find it hard to tell them things they don’t want to hear.

And ultimately, Elliott is arguing, barriers to delivering news that people don’t want to hear — whether those barriers come from financial dependence, the professional prestige that comes when your talents are in demand, or developing personal relationships with the people you’re advising — are barriers to being a credible ethicist. Bioethics becomes “the public relations division of modern medicine” (151) rather than carrying on the tradition of gadflies like Socrates. If bioethicists were being Socratic gadflies and telling truth to power, Elliott suggests, we would surely be able to find at least a few examples of bioethicists who were punished for their candor. Instead, we see the ties between ethicists and the entities they advise growing closer.

This strikes close to home for me, as I aspire to do work in ethics that can have real impacts on the practice of scientific knowledge-building, the training of new scientists, and the interaction of scientists with the rest of the world. On the one hand, it helps for me to understand the details of scientific activity and the concerns of scientists and scientific trainees. But, if I “go native” in the tribe of science, Elliott seems to be saying that I could end up dropping the ball on making the kind of contribution a proper ethicist should:

Bioethicists have gained recognition largely by carving out roles as trusted advisers. But embracing the role of trusted adviser means forgoing other potential roles, such as that of the critic. It means giving up on pressuring institutions from the outside, in the manner of investigative reporters. As bioethicists seek to become trusted advisers, rather than gadflies or watchdogs, it will not be surprising if they slowly come to resemble the people they are trusted to advise. And when that happens, moral compromise will be unnecessary, because there will be little left to compromise. (170)

This is strong stuff — the kind of stuff which, if taken seriously, I hope can keep me on track to offer honest advice even when it’s not what the people or institutions to whom I’m offering it want to hear. Heeding the warnings of a gadfly like Carl Elliott might just help an ethicist do what she has to do to be able to trust herself.

Crime, punishment, and the way forward: in the wake of Sheri Sangji’s death, what should happen to Patrick Harran?

When bad things happen in an academic laboratory, what should happen to people who bear responsibility for those bad things — even if they didn’t mean for them to happen?

This is the broad question I’ve been thinking about in connection with the prosecution of chemistry professor Patrick Harran and UCLA over the laboratory accident that killed Sheri Sangji. Potentially, Harran could face jail time, and there has been a good bit of discussion (as in these posts at Chemjobber) about whether that’s what he deserves.

I’ll be honest: I find myself uncomfortable weighing Harran’s actions (and inaction) as worthy of jail time or not, let alone assigning the appropriate number of months or years behind bars to punish him for Sheri Sangji’s death. And, other than satisfying our appetite for retribution, I am utterly unsure whether such a penalty in this case would help. I don’t know that it would do much to change the conditions and institutions that ought to be changed in the wake of this accident. (On the matter of changing institutions, read the excellent posts at ChemBark and Chemjobber.)

Sheri Sangji’s death should alert us that things need to change. Conditions in academic labs need to change. Attitudes and behaviors of PIs, students, and technicians need to change. University departments (which are both builders of knowledge and trainers of new scientists) need to change. What kind of resolution of the prosecution of Prof. Harran could bring about the needed changes?

The best way forward should keep lab accidents like the one that killed Sheri Sangji from happening again. Of course, if we’re talking about avoiding such lab accidents, we’re assuming this one was preventable through some combination of proper safety equipment and attire, training, supervision, and the like.

Jailing the PI would certainly get the attention of other PIs and would underline the message that they are responsible for safety in their labs, as well as for addressing deficiencies identified in safety inspections (and maybe even for identifying and addressing the deficiencies themselves). Maybe jailing the PI in this case would also make Sheri Sangji’s family feel that justice had been served.

But, jailing the PI here might also move him, and the larger problem of making research activities reliably non-lethal, out of the sight of the people who really need to be focused on learning the lesson here.

Maybe jail would make him look like more of a monster, letting other PIs tell themselves his lab must have been much worse than theirs. Or maybe his absence from the academic research milieu would simply mean the other PIs would return their focus to the pressing problems of securing funding, generating data, and cranking out manuscripts. Perhaps their institutions would be stricter about future safety inspections, but the PIs would do what they needed to do to return to business as usual. Given the extent to which universities rely on external grants secured by such scientific business-as-usual, it’s hard to imagine universities doing much to shake PIs out of this routine.

If we’re interested in justice that actually addresses the dangers of business as usual, I think there is another option we should explore.

I don’t think Prof. Harran should be allowed to continue with the lines of research he was pursuing when the accident in his lab claimed Sheri Sangji’s life. The way he conducted that research — the way he supervised activities and personnel — killed someone employed to advance the research. That’s a big enough strike to bench him and let other PIs play that knowledge-building zone.

Instead, Harran should devote the remainder of his career to creating a scientific culture — at UCLA and beyond — in which the safety of the people performing the experiments (and making the reagents, and fixing the equipment, and cleaning the glassware) is never sacrificed to the goal of getting more and faster results. His mission should be to communicate just how easy it was for a “good PI” to allow lapses in safe procedures, to assume students and staff will figure out how to be safe when using materials or techniques that are new to them, to find tasks more important than supervising lab work, to discourage questions about how to be safe.

This shouldn’t be a new service requirement on Harran in addition to his research and his teaching. This should be the core of his job.

He should not only grapple with the soul-searching a decent person does when he has allowed conditions that killed an underling, but also do that soul-searching in a space where the rest of the scientific community can participate and include themselves in the examination. Harran’s presence in this role — his active involvement with his department in this role — means that Sheri Sangji and the circumstances that killed her will not be forgotten.

Since research grants would be unlikely to pay for this new set of professorial professional responsibilities — and since UCLA likely bears some share of responsibility for creating the conditions that killed Sheri Sangji — UCLA should fully fund these new responsibilities of Harran’s position moving forward. As well, UCLA should provide what support is necessary to allow Harran’s colleagues (and students and other personnel in their labs) to adapt their own practices in ways that incorporate his lessons. And, it might have a meaningful impact if professional organizations like the American Chemical Society provided funds for Harran to travel and speak to others running academic labs about how to make them safer.

In short, my hunch is that the best way to achieve progress on safe conditions and practices (not to mention relationships in lab groups that help everyone promote safety) is not to separate Harran from his professional community but to return him to that community with a new mission. His new charge would be to help build a better business-as-usual.

It might not be the science career he envisioned, but I reckon it’s a job that needs doing. Harran now has ample first-hand knowledge of why it matters.

Health care provider and patient/client: situations in which fulfilling your ethical duties might not be a no-brainer.

Thanks in no small part to the invitation of the fantastic Doctor Zen, I was honored this past week to be a participant in the PACE 3rd Annual Biomedical Ethics Conference. The conference brought together an eclectic mix of people who care about bioethics: nurses, counselors, physicians, physicians’ assistants, lawyers, philosophers, scientists, students, professors, and people practicing their professions out “in the world”.*

As good conferences do, this one left me with a head full of issues with which I’m still grappling. So, as bloggers sometimes do, I’m going to put one of those issues out there and invite you to grapple with it, too.

A question that kept coming up was what exactly it means for a health care provider (broadly construed) to fulfill hir duties to hir patient/client.

Of course, the folks in the ballroom could rattle off the standard ethical principles that should guide their decision-making — respect for persons (which includes respect for the autonomy of the patient-client), beneficence, non-maleficence, justice — but sometimes these principles seem to pull in different directions, which means just what one should do when the rubber hits the road is not always obvious.

For example:

1. In some states, health care professionals are “mandatory reporters” of domestic violence — that is, if they encounter a patient who they have reason to believe is a victim of domestic violence, they are obligated by law to report it to the authorities. However, it is sometimes the case that getting the case into the legal system triggers retaliatory violence against the victim by the abuser. Moreover, in the aftermath of reporting, the victim may be less willing (or able) to seek further medical care. Is the best way to do one’s duty to one’s patient always to report? Or are there instances where one better fulfills those duties by not reporting (and if so, what are the foreseeable costs of such a course of action — to that patient, to the health care provider, to other patients, to the larger community)?

2. A patient with a terminal illness may feel that the best way for hir physician to respect hir autonomy would be to assist hir in ending hir life. However, physician-assisted suicide is usually interpreted as clearly counter to the requirements of non-maleficence (“do no harm”) and beneficence. In most of the U.S., it’s also illegal. Can a physician refuse to provide the patient in this situation with the sought-after assistance without being paternalistic?** Is it fair game for the physician’s discussion with the patient here to touch on personal values that it might not be fair for the patient to ask the physician to compromise? Are there foreseeable consequences of what, to the patient, looks like a personal choice that might impact the physician’s relationship with other patients, with hir professional community, or with the larger community?

3. In Texas, the law currently requires that patients seeking abortions submit to transvaginal ultrasounds first. In other words, the law requires health care providers to subject patients to a medically unnecessary invasive procedure. The alternative is for the patient to carry to term an unwanted pregnancy. Both choices, arguably, subject the patient to violence.

Does the health care provider who is trying to uphold hir obligations to hir patient have an obligation to break the law? If it’s a bad law — here, one whose requirements make it impossible for a health care provider to fulfill hir duties to patients — ought health care providers to put their own skin in the game to change it?

Here’s what I’ve written before about how ethically to challenge bad rules:

If you’re part of a professional community, you’re supposed to abide by the rules set by the commissions and institutions governing your professional community.

If you don’t think they’re good rules, of course, one of the things you should do as a member of that professional community is make a case for changing them. However, in the meantime making yourself an exception to the rules that govern the other members of your professional community is pretty much the textbook definition of an ethical violation.

The gist here is that sneakily violating a bad rule (perhaps even while paying lip service to following it) rather than standing up and explicitly arguing against the bad rule — not just when it’s applied to you but when it’s applied to anyone else in your professional community — is wrong. It does nothing to overturn the bad rule, it involves you in deception, and it prioritizes your interests over everyone else’s.

The particular situation here is tricky, though, given that as I understand it the Texas law is a rule imposed on medical professionals by lawmakers, not a rule that the community of medical professionals created and implemented themselves the better to help them fulfill their duties to their patients. Indeed, it seems pretty clear that the lawmakers were willing to sacrifice duties that are absolutely central in the physician-patient relationship when they imposed this law.

Moreover, I think the way forward is complicated by concerns about how to ensure that patients get care that is helpful, not harmful, to them. If Texas physicians who opposed the mandatory transvaginal ultrasound requirement were to fill the jails to protest the law, who does that leave to deliver ethical care to people on the outside seeking abortions? Is this a place where the professional community as a whole ought to be pushing back against the law rather than leaving it to individual members of that community to push back?

* * * * *

If these examples have common threads, one of them is that what the law requires (or what the law allows) seems not to line up neatly with what our ethics require. Perhaps this speaks to the difficulty of getting laws to capture the tricky balancing act that acting ethically towards one’s patients/clients requires of health care professionals. Or, maybe it speaks to lawmakers not always being focused on creating an environment in which health care providers can deliver on their ethical duties to their patients/clients (perhaps even disagreeing with professional communities about just what those ethical duties are).

What does this mismatch mean for what patients/clients can legitimately expect from their health care providers? Or for what health care providers can realistically deliver to their patients/clients?

And, if you were a health care provider in one of these situations, what would you do?
_____
*Arguably, however, universities and their denizens are also in the world. We share the same fabric of space-time as the rest of y’all.

**Note that paternalism is likely warranted in a number of circumstances. However, when we’re talking about a patient of sound mind, maybe paternalism shouldn’t be the physician’s go-to stance.

Getting kids interested in math careers may require a hero.

Back when I was a high school math geek, our math team would go to meets that occasionally had tables set up to encourage us to pursue various careers that would make use of our mad math skillz. The one such profession where the level of encouragement far outstripped our teenaged interest was the actuarial field. Indeed, more than the objective boringness of the field (to the extent that we had enough information to evaluate that), it may have been the vehement protests of how not-boring actuarial work and actuaries are (really!) that persuaded us that actuarial work was probably pretty boring.

Recently, I think I have hit upon something that might help actuaries turn this perception around. They need a superhero.

Seriously, if any comic book superhero of note had been an actuary as his cover job, actuarial work would have gotten an automatic boost in the estimation of teen geeks. Journalism? Cool, because that was Superman’s day job. Millionaire-industrialist-playboy-philanthropist? Definitely an acceptable career path, since that was Batman’s day job. Librarian? Cool not just because of the access to all those books and periodicals, but also because it was Batgirl’s day job. High school student? Not cool, exactly, but more tolerable on account of being Spider-Man’s day job.

Having a superhero who alternated nights of crime-fighting with days assessing risk would raise the esteem of actuarial science among high school mathletes.

There are details that would need to be worked out, of course.

The name for this superhero? Let’s pencil in The Numerator. (“He always comes out on top!”)

His origin story? Probably it would involve looking up from his calculations and crying, “Egad! Crime does pay!” After which, of course, he would dedicate himself to fighting that crime (else we’re looking at the origin story of a supervillain).*

My guess is that The Numerator is going to be one of those superheroes who rely on cool gadgets and knowledge rather than on actual superhuman strength or powers — more like Batman than Spider-Man. (Otherwise, we’re looking at him getting his fingers caught in a radioactive adding machine, thereby ending up with the power to shoot calculator tape from his fingers, which … I don’t think so.) His utility belt probably includes actuarial tables and a slide rule. But maybe he’s also a synesthete who can look at the numbers and smell evil.

His nemeses? Undoubtedly they will be legion — corporate crooks, purveyors of Ponzi schemes — but one of them might be Pay-Day Shark. This supervillain, tricked out in a sharkskin suit, will be happy to give you an advance on your paycheck as long as you’re ready to pay interest and fees that end up being about 400% of the amount you’re borrowing. When you can’t pay, he’ll threaten you with his tank of hungry and ill-tempered (but not laser-sight-equipped) sharks. He may even let his pretties eat one of your limbs. But Pay-Day Shark wants to help you — he’ll loan you a prosthetic limb, for a reasonable fee.
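Pay-Day Shark’s arithmetic, by the way, is exactly the kind The Numerator could check in his sleep. Here’s a minimal Python sketch; the loan amount, fee, and rollover count are terms I’ve invented for illustration, since Pay-Day Shark doesn’t publish a fee schedule:

```python
def fees_as_share_of_principal(principal, fee_per_rollover, rollovers):
    """Total fees paid, expressed as a fraction of the amount borrowed."""
    return fee_per_rollover * rollovers / principal

# Hypothetical terms: a $300 advance with a $45 fee every two weeks,
# rolled over for a full year (26 two-week periods).
share = fees_as_share_of_principal(300, 45, 26)
print(f"{share:.0%} of the principal, paid in fees")  # prints "390% ..."
```

Roll over a $300 advance at $45 a fortnight for a year and you’ve paid $1,170 in fees, which lands right in the neighborhood of that 400%.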

Who can save you from his clutches? The Numerator!

DC Comics? Marvel Comics? American Academy of Actuaries? I think we have something here. Let’s talk.

______
*It’s possible that linking actuarial science with supervillainy might also make young geeks hold it in higher esteem. Maybe someone should perform a risk-benefit analysis of this … but who?