What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Near the beginning of the month, I asked my readers — those who are scientists and those who are non-scientists alike — to share their impressions about whether scientists have any special duties or obligations to society that non-scientists don’t have. I also asked whether non-scientists have any special duties or obligations to scientists.

If you click through to those linked posts and read the comments (and check out the thoughtful responses at MetaCookBook and Antijenic Drift), you’ll see a wide range of opinions on both of these questions, each with persuasive reasons offered to back them up.

In this post and a few more that will follow (I’m estimating three more, but we’ll see how it goes), I want to take a closer look at some of these responses. I’m also going to develop some of the standard arguments that have been put forward by professional philosophers and others of that ilk that scientists do, in fact, have special duties. Working through these arguments will include getting into specifics about what precisely scientists owe the non-scientists with whom they’re sharing a world, and about the sources of these putative obligations. If we’re going to take these arguments seriously, though, I think we need to think carefully about the corresponding questions: what do individual non-scientists and society as a whole owe to scientists, and what are the sources of these obligations?

First, let’s lay some groundwork for the discussion.

Right off the bat, I must acknowledge the problem of drawing clear lines around who counts as a scientist and who counts as a non-scientist. For the purposes of getting answers to my questions, I used a fairly arbitrary definition:

Who counts as a scientist here? I’m including anyone who has been trained (past the B.A. or B.S. level) in a science, including people who may be currently involved in that training and anyone working in a scientific field (even in the absence of schooling past the B.A. or B.S. level).

There are plenty of people who would count as “scientist” under this definition who would not describe themselves as scientists — or at least as professional scientists. (I am one of those people.) On the other hand, there are some professional scientists who would say lots of the people who meet my criteria, even those who would describe themselves as professional scientists, don’t really count as members of the tribe of science.

There’s not one obvious way to draw the lines here. The world is frequently messy that way.

That said, at least some of the arguments that claim scientists have special duties make particular assumptions about scientific training. These assumptions point to a source of the putative special duties.

But maybe that just means we should be examining claims about people-whose-training-puts-them-into-a-particular-relationship-with-society having special duties, whether or not those people are all scientists, and whether or not all scientists have had training that falls into that category.

Another issue here is getting to the bottom of what it means to have an obligation.

Some obligations we have may be spelled out in writing, explicitly agreed to, with the force of law behind them, but many of our obligations are not. Many flow not from written contracts but from relationships — whether our relationships with individuals, or with professional communities, or with other sorts of communities of various sizes.

Because they flow from relationships, it’s not unreasonable to expect that when we have obligations, the persons, communities, or other entities to whom we have obligations will have some corresponding obligations to us. However, this doesn’t guarantee that the obligations on each side will be perfectly symmetrical in strength or in kind. When my kids were little, my obligations to them were significantly larger than their obligations to me. Further, as our relationships change, so will our obligations. I owe my kids different things now than I did when they were toddlers. I owe my parents different things now than I did when I was a minor living under their roof.

It’s also important to notice that obligations are not like physical laws: having an obligation is no guarantee that one will live up to it and accordingly display a certain kind of behavior. Among other things, this means that how people act is not a perfectly reliable guide to how they ought to act. It also means that someone else’s failure to live up to her obligations to me does not automatically switch off my obligations to her. In some cases it might, but there are other cases where the nature of the relationship means my obligations are still in force. (For example, if my teenage kid falls down on her obligation to treat me with minimal respect, I still have a duty to feed and shelter her.)

That obligations are not like physical laws means there’s likely to be more disagreement about what we’re actually obliged to do. Indeed, some are likely to reject putative obligations out of hand because those obligations are socially constructed. Here, I don’t think we need to appeal to moral realism to locate objective moral facts that could ground our obligations. I’m happy to bite the bullet: socially constructed obligations aren’t a problem, because they emerge from the social processes that are an inescapable part of sharing a world — including with people who are not exactly like ourselves. These obligations flow from our understandings of the relationships we bear to one another, and they are no less “real” for being socially constructed than bridges are.

One more bit of background to ponder: The questions I posed asked whether scientists and non-scientists have any special duties or obligations to each other. A number of respondents (mostly on the scientist side of the line, as I defined it) suggested that scientists’ duties are not special, but simply duties of the same sort everyone in society has (with perhaps some differences in the fine details).

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility. But whether the scientist has special powers may be the kind of thing that looks very different on opposite sides of the scientist-non-scientist divide; the scientists responding to my questions don’t seem to see themselves as very different from other members of society. Moreover, nearly every superhero canon provides ample evidence that power, and the responsibility that accompanies it, can feel like a burden. (One need look no further than seasons 6 and 7 of Buffy the Vampire Slayer to wonder if taking a break from her duty to slay vamps would have made Buffy a more pleasant person with whom to share a world.)

Arguably, scientists can do some things the rest of us can’t. How does that affect the relationship between scientists and non-scientists? What kind of duties could flow from that relationship? These powers, and the corresponding responsibilities, will be the focus of the next post.

______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

“Forcing” my kids to be vegetarian.

I’m a vegetarian, which is probably not a total surprise.

I study and teach ethics. I’m uneasy with the idea of animals being killed to fulfill a need of mine I know can be fulfilled other ways. In the interests of sharing a world with more than 7 billion other people, and doing so without being a jerk, I’d rather reduce my toll on our shared resources. And, I never liked the taste of meat.

My kids are also vegetarians, and have been since birth — so they didn’t choose it. I have imposed it on them in a stunning act of maternalism.

OK, it’s actually not that stunning.


Why am I imposing a vegetarian diet on my children? For the curious, here are my reasons for this particular parenting choice:

  1. The family dinner table isn’t a restaurant. The choices are to eat what I’m serving or not eat it. This was the deal, at least when I was growing up, in omnivores’ homes (including the one in which I grew up). I may encourage my offspring to try dishes of which they are skeptical, but I don’t view feeding them as an activity that ought to push my powers of persuasion to their limits, nor do I view it as an opportunity for them to build the capacity of their free will. I’m cooking, and what I’m serving has no meat. That’s what’s for dinner.
  2. I’m in no position to do good quality control on a meat meal. I haven’t cooked meat in about 27 years, so I’ve pretty much forgotten how. I’m not going to taste a meat dish to adjust the seasoning. My paranoia about food-borne pathogens is such that I’d probably cook the heck out of any piece of meat I had to cook … and my concerns about carcinogens are such that I wouldn’t even be doing it in a potentially appealing way like blackening it. Plus, aesthetically, I find meat icky enough to handle (and see, and smell) that actually preparing a meat dinner would cost me my appetite, and possibly my lunch.
  3. Meat is expensive.
  4. Meat production uses a lot of resources … as does raising a child in the U.S. Having opted for the latter, I prefer to opt out of the former. This is not to suggest that I look at other people and do a mental audit of their impact — I swear, I don’t — but I do look at myself that way. Bathing and hydrating my offspring and washing their clothes uses water, getting them places frequently uses gas, and the TV/DVD/computer axis of entertainment (and homework) uses electricity. Their homework uses paper (and we sometimes lean on them to use more paper to show their damn work). Call the vegetarian diet a do-it-yourself partial offset of our other impacts.
  5. Meat consumption is not a requirement for human health. I checked this very early in the game with our pediatrician. My kids’ diet is providing them more than adequate amounts of all the nutrients they need for their physical and cognitive development.
  6. A parent-imposed vegetarian diet enables a satisfying range of (non-lethal) options for teen rebellion. Think of how convenient it would be if, as a teenager, you could defy a parent’s values by simply buying a can of chicken soup, as opposed to having to wrap a car around a tree or to figure out how you can get someone to buy you beer. Yes, this is meant mostly in jest, but consider how many young people make a transgressive act out of challenging their parents’ values as embodied in their diet — whether embracing vegetarianism, choosing to stop keeping Kosher, or what have you.

Have I hemmed in my kids’ ability to exercise their autonomy by raising them vegetarian? Absolutely.

Even at the relatively advanced ages of 14 and 12, they still need us to hem in their autonomy to keep them alive and in reasonably good mental and emotional shape to exercise their autonomy more fully as adults. This is just part of parenting. My “forcing” a vegetarian diet on the kids is of a piece with my “forcing” them to eat meals that aren’t composed entirely of candy, “forcing” them to go to school, to do their homework, to bathe, to wear sunscreen, and to sleep at least a few hours a night. I don’t believe it is an outrageous imposition (as indeed, they seem to LIKE most of what I feed them).

We live in a community where there are many different dietary customs in play, whether for religious, cultural, or ethical reasons, so they have plenty of friends who also don’t eat particular things. (Of course, there are kids with allergies, too.) They have learned how to enquire politely about the available options, to decline graciously, and to graze effectively at potlucks.

My kids haven’t ever begged me for meat (although they occasionally express sadness that restaurants have so many fewer options for vegetarian diners than for meat eaters). They also know that when they are adults, they will be able to make their own decisions about their diets. (Same as with tattoos.) They understand that there are some rules they have in virtue of their being members of a household, but that those are subject to change when they establish their own household.

Occasionally someone brings up the possibility that, having been fed a vegetarian diet from birth, my children won’t have adequate enzymes for digesting meat should they try to become meat-eaters later. I have no idea if this concern has good empirical grounding. Anecdotally, I know enough long-term vegetarians who have fallen off the (meat) wagon without developing any inability to scarf down a burger and digest it like a champ that this possibility doesn’t keep me up at night.

I haven’t indoctrinated my kids to believe that meat-eaters are evil, or that they’ll go to hell if animal flesh ever crosses their lips, in large part because I don’t hold those views either. They are simply part of a household that doesn’t eat meat. Given that, what beef could anyone have with it?

_____
An ancestor version of this post was published on my other blog.

Questions for the non-scientists in the audience.

Today in my “Ethics in Science” class, we took up a question that reliably gets my students (a mix of science majors and non-science majors) going: Do scientists have special obligations to society that non-scientists don’t have?

Naturally, there are some follow-up questions if you lean towards an affirmative answer to that first question. For example:

  • What specifically are those special obligations?
  • Why do scientists have these particular obligations when non-scientists in their society don’t?
  • How strong are those obligations? (In other words, under what conditions would it be ethically permissible for scientists to fall short of doing what the obligations say they should do?)

I think these are important — and complex — questions, some of which go to the heart of what’s involved in scientists and non-scientists successfully sharing a world. But, it always helps me to hear the voices (and intuitions) of some of the folks besides me who are involved in this sharing-a-world project.

So, for the non-scientists in the audience, I have some questions I hope you will answer in the comments on this post.*

1. Are there special duties or obligations you think scientists have to the non-scientists with whom they’re sharing a world? If yes, what are they?

2. If you think scientists have special duties or obligations to the rest of society, why do they have them? Where did they come from? (If you don’t think scientists have special duties or obligations to the rest of society, why not?)

3. What special duties or obligations (if any) do you think non-scientists have to the scientists with whom they’re sharing a world?

Who counts as a non-scientist here? I’m including anyone who has not received scientific training past the B.A. or B.S. level and who is not currently working in a scientific field (even in the absence of schooling past the B.A. or B.S. level).

That means I count as a scientist here (even though I’m not currently employed as a scientist or otherwise involved in scientific knowledge-building).

If you want to say something about these questions but you’re a scientist according to this definition, never fear! You are cordially invited to answer a corresponding set of questions, posed to the scientists with whom non-scientists are sharing a world, on my other blog.
_____
* If you prefer to answer the questions on your own blog, or in some other online space, please drop a link in the comments here, or point me to it via Twitter (@docfreeride) or email (dr.freeride@gmail.com).

How far does the tether of your expertise extend?

Talking about science in the public sphere is tricky, even for someone with a lot of training in a science.

On the one hand, there’s a sense that it would be a very good thing if the general level of understanding of science were significantly higher than it is at present — if you could count on the people in your neighborhood to have a basic grasp of where scientific knowledge comes from, as well as of the big pieces of scientific knowledge directly relevant to the project of getting through their world safely and successfully.

But there seem to be a good many people in our neighborhood who don’t have this relationship with science. (Here, depending on your ‘druthers, you can fill in an explanation in terms of inadequately inspiring science teachers and/or curricula, or kids too distracted by TV or adolescence or whatever to engage with those teachers and/or curricula.) This means that, if these folks aren’t going to go it alone and try to evaluate putative scientific claims they encounter themselves, they need to get help from scientific experts.

But who’s an expert?

It’s well and good to say that a journalism major who never quite finished his degree is less of an authority on matters cosmological than a NASA scientist, but what should we say about engineers or medical doctors with “concerns” about evolutionary theory? Is a social scientist who spent time as an officer on a nuclear submarine an expert on nuclear power? Is an actor or talk show host with an autistic child an expert on the etiology of autism? How important is all that specialization research scientists do? To some extent, doesn’t all science follow the same rules, thus equipping any scientist to weigh in intelligently about it?

Rather than give you a general answer to that question, I thought it best to lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.

As someone trained in a science, I am qualified:

  1. to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
  2. to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
  3. to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
  4. perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
  5. to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
  6. to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 necessarily count as support for alternative theory T2).
  7. to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
  8. to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
  9. to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).

This list of my qualifications is an expression of my comfort level more than anything else. I would argue that it’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends — and owning up to that when people look to you as an expert — is part of being a responsible scientist.

_______
An ancestor version of this post was published on my other blog.

The ethics of opting out of vaccination.

At my last visit to urgent care with one of my kids, the doctor who saw us mentioned that there is currently an epidemic of pertussis (whooping cough) in California, one that presents serious danger for the very young children (among others) hanging out in the waiting area. We double-checked that both my kids are current on their pertussis vaccinations (they are). I checked that I was current on my own pertussis vaccination back in December when I got my flu shot.

Sharing a world with vulnerable little kids, it’s just the responsible thing to do.

You’re already on the internet reading about science and health, so it will probably come as no surprise to you that California’s pertussis epidemic is a result of the downturn in vaccination in recent years, nor that this downturn has been driven in large part by parents worried that childhood vaccinations might lead to their kids getting autism, or asthma, or some other chronic disease. Never mind that study after study has failed to uncover evidence of such a link; these parents are weighing the risks and benefits (at least as they understand them) of vaccinating or opting out and trying to make the best decision they can for their children.

The problem is that the other children with whom their children are sharing a world get ignored in the calculation.

Of course, parents are accountable to the kids they are raising. They have a duty to do what is best for them, as well as they can determine what that is. They probably also have a duty to put some effort into making a sensible determination of what’s best for their kids (which may involve seeking out expert advice, and evaluating who has the expertise to be offering trustworthy advice).


But parents and kids are also part of a community, and arguably they are accountable to other members of that community. I’d argue that members of a community may have an obligation to share relevant information with each other — and, to avoid spreading misinformation, not to represent themselves as experts when they are not. Moreover, when parents make choices with the potential to impact not only themselves and their kids but also other members of the community, they have a duty to do what is necessary to minimize bad impacts on others. Among other things, this might mean keeping your unvaccinated-by-choice kids isolated from kids who haven’t been vaccinated because of their age, because of compromised immune function, or because they are allergic to a vaccine ingredient. If you’re not willing to do your part for herd immunity, you need to take responsibility for staying out of the herd.

Otherwise, you are a free-rider on the sacrifices of the other members of the community, and you are breaking trust with them.

I know from experience that this claim upsets non-vaccinating parents a lot. They imagine that I am declaring them bad people, guilty of making a conscious choice to hurt others. I am not. However, I do think they are making a choice that has the potential to cause great harm to others. If I didn’t think that pointing out the potential consequences might be valuable to these non-vaccinating parents, at least in helping them understand more fully what they’re choosing, I wouldn’t bother.

So here, let’s take a careful look at my claim that vaccination refuseniks are free-riders.


First, what’s a free-rider?


In the simplest terms, a free-rider is someone who accepts a benefit without paying for it. The free-rider is able to partake of this benefit because others have assumed the costs necessary to bring it about. But if no one was willing to assume those costs (or indeed, in some cases, if there is not a critical mass of people assuming those costs), then that benefit would not be available, either.


Thus, when I claim that people who opt out of vaccination are free-riders on society, what I’m saying is that they are receiving benefits for which they haven’t paid their fair share — and that they receive these benefits only because other members of society have assumed the costs by being vaccinated.


Before we go any further, let’s acknowledge that people who choose to vaccinate and those who do not probably have very different understandings of the risks and benefits, and especially of their magnitudes and likelihoods. Ideally, we’d be starting this discussion about the ethics of opting out of vaccination with some agreement about what the likely outcomes are, what the unlikely outcomes are, what the unfortunate-but-tolerable outcomes are, and what the to-be-avoided-at-all-costs outcomes are.


That’s not likely to happen. People don’t even accept the same facts (regardless of scientific consensus), let alone the same weightings of them in decision making.


But ethical decision making is supposed to help us get along even in a world where people have different values and interests than our own. So, plausibly, we can talk about whether certain kinds of choices fit the pattern of free-riding even if we can’t come to agreement on probabilities and a hierarchy of really bad outcomes.


So, let’s say all the folks in my community are vaccinated against measles except me. Within this community (assuming I’m not wandering off to exotic and unvaccinated lands, and that people from exotic and unvaccinated lands don’t come wandering through), my chances of getting measles are extremely low. Indeed, they are as low as they are because everyone else in the community has been vaccinated against measles — none of my neighbors can serve as a host where the virus can hang out and then get transmitted to me. (By the way, the NIH has a nifty Disease Transmission Simulator that you can play around with to get a feel for how infectious diseases and populations whose members have differing levels of immunity interact.)
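If you’d rather tinker with the idea than click through, here’s a minimal sketch of the same kind of simulation. To be clear, this is not the NIH simulator; it’s a toy stochastic model of my own, and every parameter (population size, contact rate, transmission probability, infectious period) is a made-up number chosen for illustration, not anything measured about measles:

```python
# A toy outbreak model (NOT the NIH Disease Transmission Simulator).
# All parameter values are illustrative assumptions, not real measles data.
import random

def outbreak_size(population=1000, vaccinated_fraction=0.9,
                  contacts_per_day=8, p_transmit=0.2, infectious_days=5):
    """Run one stochastic outbreak; return the total number infected."""
    # True = susceptible (unvaccinated), False = immune (vaccinated/recovered)
    susceptible = [random.random() >= vaccinated_fraction
                   for _ in range(population)]
    candidates = [i for i, s in enumerate(susceptible) if s]
    if not candidates:        # everyone immune; the outbreak can't even start
        return 0
    patient_zero = random.choice(candidates)
    susceptible[patient_zero] = False
    infectious = {patient_zero: infectious_days}  # person -> days remaining
    total_infected = 1

    while infectious:
        newly_infected = []
        for person in list(infectious):
            # Each infectious person bumps into a few random others per day.
            for other in random.sample(range(population), contacts_per_day):
                if susceptible[other] and random.random() < p_transmit:
                    susceptible[other] = False  # infected now, immune after
                    newly_infected.append(other)
            infectious[person] -= 1
            if infectious[person] == 0:
                del infectious[person]          # recovered
        for person in newly_infected:
            infectious[person] = infectious_days
        total_infected += len(newly_infected)
    return total_infected

if __name__ == "__main__":
    random.seed(42)
    for coverage in (0.5, 0.8, 0.95):
        runs = [outbreak_size(vaccinated_fraction=coverage) for _ in range(20)]
        print(f"coverage {coverage:.0%}: "
              f"mean outbreak size {sum(runs) / len(runs):.1f}")
```

In my runs, even a model this crude shows the qualitative pattern: at low coverage an introduced infection burns through the susceptible pool, while at high coverage most introductions fizzle out after a handful of cases.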


I get a benefit (freedom from measles) that I didn’t pay for. The other folks in my community who got the vaccine paid for it.


In fact, it usually doesn’t require that everyone else in the community be vaccinated against measles for me to be reasonably safe from it. Owing to “herd immunity,” measles is unlikely to run through the community if the people without immunity are relatively few and well interspersed with the vaccinated people. This is a good thing, since babies in the U.S. don’t get their first vaccination against measles until 12 months, and some people are unable to get vaccinated even if they’re willing to bear the cost (e.g., because they have compromised immune systems or are allergic to an ingredient of the vaccine). And, in other cases, people may get vaccinated but the vaccines might not be fully effective — if exposed, they might still get the disease. Herd immunity tends to protect these folks from the disease — at least as long as enough of the herd is vaccinated.


If too few members of the herd are vaccinated, even some of those who have borne the costs of being vaccinated (because even very good vaccines can’t deliver 100% protection to 100% of the people who get them), or who would bear those costs were they able (owing to their age or health or access to medical care), may miss out on the benefit. Too many free-riders can spoil things even for those who are paying their fair share.
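For readers who want a rough number attached to “too few,” epidemiology offers a standard back-of-the-envelope estimate of the herd immunity threshold:

p_c = 1 − 1/R_0

where R_0 is the basic reproduction number: the average number of people a single infectious person would infect in a completely susceptible population. Measles, famously contagious, is commonly assigned an R_0 of roughly 12 to 18, which puts the threshold somewhere around 1 − 1/12 ≈ 92% to 1 − 1/18 ≈ 94%. The formula comes from a very simple model (it assumes everyone mingles with everyone else uniformly), so treat it as a ballpark rather than a bright line.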


A standard reply from non-vaccinating parents is that their unvaccinated kids are not free-riders on the vaccinated mass of society because they actually get diseases like chicken pox, pertussis, and measles (and are not counting on avoiding the other diseases against which people are routinely vaccinated). In other words, they argue, they didn’t pay the cost, but they didn’t get the benefit, either.


Does this argument work?


I’m not convinced that it does. First off, even though unvaccinated kids may get a number of diseases that their vaccinated neighbors do not, it is still unlikely that they will catch everything against which we routinely vaccinate. By opting out of vaccination but living in the midst of a herd that is mostly vaccinated, non-vaccinating parents significantly reduce the chances of their kids getting many diseases compared to what the chances would be if they lived in a completely unvaccinated herd. That statistical reduction in disease is a benefit, and the people who got vaccinated are the ones paying for it.


Now, one might reply that unvaccinated kids are actually incurring harm from their vaccinated neighbors, for example if they contract measles from a recently vaccinated kid shedding the live virus from the vaccine. However, the measles virus in the MMR vaccine is an attenuated virus — which is to say, it’s quite likely that unvaccinated kids contracting measles from vaccinated kids will have a milder bout of measles than they might have if they had been exposed to a full-strength measles virus out in the wild. A milder case of measles is a benefit, at least when the alternative is a severe case of measles. Again, it’s a benefit that is available because other people bore the cost of being vaccinated.


Indeed, even if they were to catch every single disease against which we vaccinate, unvaccinated kids would still reap further benefits by living in a society with a high vaccination rate. The fact that most members of society are vaccinated means that there is much less chance that epidemic diseases will shut down schools, industries, or government offices, much more chance that hospitals and medical offices will not be completely overwhelmed when outbreaks happen, much more chance that economic productivity will not be crippled and that people will be able to work and pay the taxes that support all manner of public services we take for granted.

The people who vaccinate are assuming the costs that bring us a largely epidemic-free way of life. Those who opt out of vaccinating are taking that benefit for free.


I understand that the decision not to vaccinate is often driven by concerns about what costs those who receive the vaccines might bear, and whether those costs might be worse than the benefits secured by vaccination. Set aside for the moment the issue of whether these concerns are well grounded in fact. Instead, let’s look at the parallel we might draw:
If I vaccinate my kids, no matter what your views about the etiology of autism and asthma, you are not going to claim that my kids getting their shots raises your kids’ odds of getting autism or asthma. But if you don’t vaccinate your kids, even if I vaccinate mine, your decision does raise my kids’ chance of catching preventable infectious diseases. My decision to vaccinate doesn’t hurt you (and probably helps you in the ways discussed above). Your decision not to vaccinate could well hurt me.


The asymmetry of these choices is pretty unavoidable.


Here, it’s possible that a non-vaccinating parent might reply by saying that it ought to be possible for her to prioritize protecting her kids from whatever harms vaccination might bring to them without being accused of violating a social contract.


The herd immunity thing works for us because of an implicit social contract of sorts: those who are medically able to be vaccinated get vaccinated. Obviously, this is a social contract that views the potential harms of the diseases as more significant than the potential harms of vaccination. I would argue that under such a social contract, we as a society have an obligation to take care of those who end up paying a higher cost to achieve the shared benefit.


But if a significant number of people disagree, and think the potential harms of vaccination outweigh the potential harms of the diseases, shouldn’t they be able to opt out of this social contract?


The only way to do this without being a free-rider is to opt out of the herd altogether — or to ensure that your actions do not bring additional costs to the folks who are abiding by the social contract. If you’re planning on getting those diseases naturally, this would mean taking responsibility for keeping the germs contained and away from the herd (which, after all, contains members who are vulnerable owing to age, medical reasons they could not be vaccinated, or the chance of less than complete immunity from the vaccines). No work, no school, no supermarkets, no playgrounds, no municipal swimming pools, no doctor’s office waiting rooms, nothing while you might be able to transmit the germs. The whole time you’re able to transmit the germs, you need to isolate yourself from the members of society whose default assumption is vaccination. Otherwise, you endanger members of the herd who bore the costs of achieving herd immunity while reaping benefits (of generally disease-free work, school, supermarkets, playgrounds, municipal swimming pools, doctor’s office waiting rooms, and so forth, for which you opted out of paying your fair share).


Since you’ll generally be able to transmit these diseases before the first symptoms appear — even before you know for sure that you’re infected — you will not be able to take regular contact with the vaccinators for granted.


And if you’re traveling to someplace where the diseases whose vaccines you’re opting out of are endemic, you have a duty not to bring the germs back with you to the herd of vaccinators. Does this mean quarantining yourself for some minimum number of days before your return? It probably does. Would this be a terrible inconvenience for you? Probably so, but the 10-month-old who catches the measles you bring back might also be terribly inconvenienced. Or worse.

Here, I don’t think I’m alone in judging the harm of a vaccine refusenik giving an infant pertussis as worse than the harm in making a vaccine refusenik feel bad about violating a social contract.


An alternative, one which would admittedly require some serious logistical work, might be to join a geographically isolated herd of other people opting out of vaccination, and to commit to staying isolated from the vaccinated herd. Indeed, if the unvaccinated herd showed a lower incidence of asthma and autism after a few generations, perhaps the choices of the members of the non-vaccinating herd would be vindicated.


In the meantime, however, opting out of vaccines but sharing a society with those who get vaccinated is taking advantage of benefits that others have paid for and even threatening those benefits. Like it or not, that makes you a free-rider.
* * * * *
An earlier version of this essay originally appeared on my other blog.

Leave the full-sized conditioner, take the ski poles: whose assessment of risks did the TSA consider in new rules for carry-ons?

At Error Statistics Philosophy, D. G. Mayo has an interesting discussion of changes that just went into effect to Transportation Security Administration rules about what air travelers can bring in their carry-on bags. Here’s how the TSA Blog describes the changes:

TSA established a committee to review the prohibited items list based on an overall risk-based security approach. After the review, TSA Administrator John S. Pistole made the decision to start allowing the following items in carry-on bags beginning April 25th:

  • Small Pocket Knives – Small knives with non-locking blades smaller than 2.36 inches and less than 1/2 inch in width will be permitted
  • Small Novelty Bats and Toy Bats
  • Ski Poles
  • Hockey Sticks
  • Lacrosse Sticks
  • Billiard Cues
  • Golf Clubs (Limit Two)

This is part of an overall Risk-Based Security approach, which allows Transportation Security Officers to better focus their efforts on finding higher threat items such as explosives. This decision aligns TSA more closely with International Civil Aviation Organization (ICAO) standards.

These similar items will still remain on the prohibited items list:

  • Razor blades and box cutters will remain prohibited in carry-on luggage.
  • Full-size baseball, softball and cricket bats are prohibited items in carry-on luggage.

As Mayo notes, this particular framing of what does or does not count as a “higher threat item” on a flight has not been warmly embraced by everyone.

Notably, the Flight Attendants Union Coalition, the Coalition of Airline Pilots Associations, some federal air marshals, and at least one CEO of an airline have gone on record against the rule change. Their objection is two-fold: removing these items from the list of items prohibited in carry-ons is unlikely to actually make screening lines at airports go any faster (since now you have to wait for the passenger arguing that there’s only 3 ounces of toothpaste left in the tube, so it should be allowed, and for the passenger arguing that her knife’s 2.4 inch blade is close enough to 2.36 inches), and allowing these items in carry-on bags on flights is likely to make those flights more dangerous for the people on them.

But that’s not the way the TSA is thinking about the risks here. Mayo writes:

By putting less focus on these items, Pistole says, airport screeners will be able to focus on looking for bomb components, which present a greater threat to aircraft. Such as:

bottled water, shampoo, cold cream, tooth paste, baby food, perfume, liquid make-up, etc. (over 3.4 oz).

They do have an argument; namely, that while liquids could be used to make explosives, sharp objects will not bring down a plane. At least not so long as we can rely on the locked, bullet-proof cockpit door. Not that they’d want to permit any bullets to be around to test… And not that the locked door rule can plausibly be followed 100% of the time on smaller planes, from my experience. …

When the former TSA chief, Kip Hawley, was asked to weigh in, he fully supported Pistole; he regretted that he hadn’t acted to permit the above sports items during his service at TSA:

“They ought to let everything on that is sharp and pointy. Battle axes, machetes … bring anything you want that is pointy and sharp because while you may be able to commit an act of violence, you will not be able to take over the plane. It is as simple as that,” he said. (Link is here.)

I burst out laughing when I read this, but he was not joking:

Asked if he was using hyperbole in suggesting that battle axes be allowed on planes, Hawley said he was not.

“I really believe it. What are you going to do when you get on board with a battle ax? And you pull out your battle ax and say I’m taking over the airplane. You may be able to cut one or two people, but pretty soon you would be down in the aisle and the battle ax would be used on you.”

There does seem to be an emphasis on relying on passengers to rise up against ax-wielders, that passengers are angry these days at anyone who starts trouble. But what about the fact that there’s a lot more “air rage” these days? … That creates a genuine risk as well.

Will the availability of battle axes make disputes over the armrest more civil or less? Is the TSA comfortable with whatever happens on a flight so long as it falls short of bringing down the plane? How precisely did the TSA arrive at this particular assessment of risks that makes an 8 ounce bottle of conditioner more of a danger than a hockey stick?

And, perhaps most troubling, if the TSA is putting so much reliance on the vigilance and willingness to mount a response of passengers and flight crews, why does it look like they failed to seek out input from those passengers and flight crews about what kind of in-flight risks they are willing to undertake?

The ethics of naming and shaming.

Lately I’ve been pondering the practice of responding to bad behavior by calling public attention to it.

The most recent impetus for my thinking about it was this tech blogger’s response to behavior that felt unwelcoming at a conference (behavior that seems, in fact, to have run afoul of that conference’s official written policies)*, but there are plenty of other examples one might find of “naming and shaming”: the discussion (on blogs and in other media outlets) of University of Chicago neuroscientist Dario Maestripieri’s comments about female attendees of the Society for Neuroscience meeting, the Office of Research Integrity’s posting of findings of scientific misconduct investigations, the occasional instructor who promises to publicly shame students who cheat in his class, and actually follows through on the promise.

There are many forms “naming-and-shaming” might take, and many types of behavior one might identify as problematic enough that they ought to be pointed out and attended to. But there seems to be a general worry that naming-and-shaming is an unethical tactic. Here, I want to explore that worry.

Presumably, the point of responding to bad behavior is that it’s bad — causing harm to individuals or a community (or both), undermining progress on a project or goal, and so forth. Responding to bad behavior can be useful if it stops bad behavior in progress and/or keeps similarly bad behavior from happening in the future. A response can also be useful in calling attention to the harm the behavior does (i.e., in making clear what’s bad about the behavior). And, depending on the response, it can affirm the commitment of individuals or communities that the behavior in question actually is bad, and that the individuals or communities see themselves as having a real stake in reducing it.

Rules, professional codes, conference harassment policies — these are some ways to specify at the outset what behaviors are not acceptable in the context of the meeting, game, work environment, or disciplinary pursuit. There are plenty of contexts, too, where there is no written-and-posted official enumeration of every type of unacceptable behavior. Sometimes communities make judgments on the fly about particular kinds of behavior. Sometimes, members of communities are not in agreement about these judgments, which might result in a thoughtful conversation within the community to try to come to some agreement, or the emergence of a rift that leads people to realize that the community was not as united as they once thought, or a ruling on the “actual” badness or acceptability of the behavior by those within the community who can marshal the power to make such a ruling.

Sharing a world with people who are not you is complicated, after all.

Still, I hope we can agree that there are some behaviors that count as bad behaviors. Assuming we had an unambiguous example of someone engaging in such a behavior, should we respond? How should we respond? Do we have a duty to respond?

I frequently hear people declare that one should respond to bad behavior, but that one should do so privately. The idea here seems to be that letting the bad actor know that the behavior in question was bad, and should be stopped, is enough to ensure that it will be stopped — and that the bad behavior must be a reflection of a gap in the bad actor’s understanding.

If knowing that a behavior is bad (or against the rules) were enough to ensure that those with the relevant knowledge never engage in the behavior, though, it becomes difficult to explain the highly educated researchers who get caught fabricating or falsifying data or images, the legions of undergraduates who commit plagiarism despite detailed instructions on proper citation methods, the politicians who lie. If knowledge that a certain kind of behavior is unacceptable is not sufficient to prevent that behavior, responding effectively to bad behavior must involve more than telling the perpetrator of that behavior, “What you’re doing is bad. Stop it.”

This is where penalties may be helpful in responding to bad behavior — get benched for the rest of the game, or fail the class, or get ejected from the conference, or become ineligible for funding for this many years. A penalty can convey that bad behavior is harmful enough to the endeavor or the community that its perpetrator needs a “time-out”.

Sometimes the application of penalties needs to be private (e.g., when a law like the Family Education Rights and Privacy Act makes applying the penalty publicly illegal). But there are dangers in only dealing with bad behavior privately.

When fabrication, falsification, and plagiarism are “dealt with” privately, it can make it hard for a scientific community to identify papers in the scientific literature that they shouldn’t trust or researchers who might be prone to slipping back into fabricating, falsifying, or plagiarizing if they think no one is watching. (It is worth noting that large ethical lapses are frequently part of an escalating pattern that started with smaller ethical infractions.)

Worse, if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out. Keeping penalties (if any) under wraps can convey the message that the bad behavior is actually tolerated, that official policies against it are empty words.

And sometimes, there are instances where the people within an organization or community with the power to impose penalties on bad actors seem disinclined to actually address bad behavior, using the cover of privacy as a way to opt out of penalizing the bad actors or of addressing the bad behavior in any serious way.

What’s a member of the community to do in such circumstances? Given that the bad behavior is bad because it has harmful effects on the community and its members, should those aware of the bad behavior call the community’s attention to it, in the hopes that the community can respond to it (or that the community’s scrutiny will encourage the bad actor to cease the bad behavior)?

Arguably, a community that is harmed by bad behavior has an interest in knowing when that behavior is happening, and who the bad actors are. As well, the community has an interest in stopping the bad behavior, in mitigating the harms it has already caused, and in discouraging further such behavior. Naming-and-shaming bad actors may be an effective way to secure these interests.

I don’t think this means naming-and-shaming is the only possible way to secure these interests, nor that it is always the best way to do so. Sometimes, however, it’s the tool that’s available that seems likely to do the most good.

There’s not a simple algorithm or litmus test that will tell you when shaming bad actors is the best course of action, but there are questions that are worth asking when assessing the options:

  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, goes unchallenged?
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged privately? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged publicly? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)

Challenging bad behavior is not without costs. Depending on your status within the community, challenging a bad actor may harm you more than the bad actor. However, not challenging bad behavior has costs, too. If the community and its members aren’t prepared to deal with bad behavior when it happens, the community has to bear those costs.
_____
* Let me be clear that this post is focused on the broader question of publicly calling out bad behavior rather than on the specific details of Adria Richards’ response to the people behind her at the tech conference, whether she ought to have found their jokes unwelcoming, whether she ought to have responded to them the way she did, or what have you. Since this post is not about whether Adria Richards did everything right (or everything wrong) in that particular instance, I’m going to be quite ruthless in pruning comments that are focused on her particular circumstances or decisions. Indeed, commenters who make any attempt to use the comments here to issue threats of violence against Richards (of the sort she is receiving via social media as I compose this post), or against anyone else, will have their information (including IP address) forwarded to law enforcement.

If you’re looking for my take on the details of the Adria Richards case, I’ll have a post up on my other blog within the next 24 hours.

Some musings on Jonah Lehrer’s $20,000 “meh culpa”.

Remember some months ago when we were talking about how Jonah Lehrer was making stuff up in his “non-fiction” pop science books? This was a big enough deal that his publisher, Houghton Mifflin Harcourt, recalled print copies of Lehrer’s book Imagine, and that the media outlets for which Lehrer wrote went back through his writing for them looking for “irregularities” (like plagiarism — which one hopes is not regular, but once your trust has been abused, hopes are no longer all that durable).

Lehrer’s behavior was clearly out of bounds for anyone hoping for a shred of credibility as a journalist or non-fiction author. However, at the time, I opined in a comment:

At 31, I think Jonah Lehrer has time to redeem himself and earn back trust and stuff like that.

Well, the events of this week stand as evidence that having time to redeem oneself is not a guarantee that one will not instead dig the hole deeper.

You see, Jonah Lehrer was invited to give a talk this week at a “media learning seminar” in Miami, a talk which marked his first real public comments to a large group of journalistic peers since his fabrications and plagiarism were exposed — and a talk for which the sponsor of the conference, the Knight Foundation, paid Lehrer an honorarium of $20,000.

At the New York Times “Arts Beat” blog, Jennifer Schuessler describes Lehrer’s talk:

Mr. Lehrer … dived right in with a full-throated mea culpa. “I am the author of a book on creativity that contains several fabricated Bob Dylan quotes,” he told the crowd, which apparently could not be counted on to have followed the intense schadenfreude-laced commentary that accompanied his downfall. “I committed plagiarism on my blog, taking without credit or citation an entire paragraph from the blog of Christian Jarrett. I plagiarized from myself. I lied to a journalist named Michael Moynihan to cover up the Dylan fabrications.”

“My mistakes have caused deep pain to those I care about,” he continued. “I’m constantly remembering all the people I’ve hurt and let down.”

If the introduction had the ring of an Alcoholics Anonymous declaration, before too long Mr. Lehrer was surrendering to the higher power of scientific research, cutting back and forth between his own story and the kind of scientific terms — “confirmation bias,” “anchoring” — he helped popularize. Within minutes he had pivoted from his own “arrogance” and other character flaws to the article on flawed forensic science within the F.B.I. that he was working on when his career began unraveling, at one point likening his own corner-cutting to the overconfidence of F.B.I. scientists who fingered the wrong suspect in the 2004 Madrid bombings.

“If we try to hide our mistakes, as I did, any error can become a catastrophe,” he said, adding: “The only way to prevent big failures is a willingness to consider every little one.”

Not everyone shares the view that Lehrer’s apology constituted a full-throated mea culpa, though. At Slate, Daniel Engber shared this assessment:

Lehrer has been humbled, and yet nearly every bullet in his speech managed to fire in both directions. It was a wild display of self-negation, of humble arrogance and arrogant humility. What are these “standard operating procedures” according to which Lehrer will now do his work? He says he’ll be more scrupulous in his methods—even recording and transcribing interviews(!)—but in the same breath promises that other people will be more scrupulous of him. “I need my critics to tell me what I’ve gotten wrong,” he said, as if to blame his adoring crowds at TED for past offenses. Then he promised that all his future pieces would be fact-checked, which is certainly true but hardly indicative of his “getting better” (as he puts it, in the clammy, familiar rhetoric of self-help).

What remorse Lehrer had to share was couched in elaborate and perplexing disavowals. He tried to explain his behavior as, first of all, a hazard of working in an expert field. Like forensic scientists who misjudge fingerprints and DNA analyses, and whose failings Lehrer elaborated on in his speech, he was blind to his own shortcomings. These two categories of mistake hardly seem analogous—lab errors are sloppiness, making up quotes is willful distortion—yet somehow the story made Lehrer out to be a hapless civil servant, a well-intentioned victim of his wonky and imperfect brain.

(Bold emphasis added.)

At Forbes, Jeff Bercovici noted:

Ever the original thinker, even when he’s plagiarizing from press releases, Lehrer apologized abjectly for his actions but pointedly avoided promising to become a better person. “These flaws are a basic part of me,” he said. “They’re as fundamental to me as the other parts of me I’m not ashamed of.”

Still, Lehrer said he is aiming to return to the world of journalism, and has been spending several hours a day writing. “It’s my hope that someday my transgressions might be forgiven,” he said.

How, then, does he propose to bridge the rather large credibility gap he faces? By the methods of the technocrat, not the ethicist: “What I clearly need is a new set of rules, a stricter set of standard operating procedures,” he said. “If I’m lucky enough to write again, then whatever I write will be fully fact-checked and footnoted. Every conversation will be fully taped and transcribed.”

(Bold emphasis added.)

How do I see Jonah Lehrer’s statement? The title of this post should give you a clue. Like most bloggers, I took five years of Latin.* “Mea culpa” would describe a statement wherein the speaker (in this case, Jonah Lehrer) actually acknowledged that the blame was his for the bad thing of which he was a part. From what I can gather, Lehrer hasn’t quite done that.

Let the record reflect that the “new set of rules” and “stricter set of standard operating procedures” Lehrer described in his talk are not new, nor were they non-standard when Lehrer was falsifying and plagiarizing to build his stories. It’s not that Jonah Lehrer’s unfortunate trajectory shed light on the need for these standards, and now the journalistic community (and we consumers of journalism) can benefit from their creation. Serious journalists were already using these standards.

Jonah Lehrer, however, decided he didn’t need to use them.

This does have a taste of Leona Helmsleyesque “rules are for the little people” to it. And, I think it’s important to note that Lehrer gave the outward appearance of following the rules. He did not stand up and say, “I think these rules are unnecessary to good journalistic practice, and here’s why…” Rather, he quietly excused himself from following them.

But now, Lehrer tells us, he recognizes the importance of the rules.

That’s well and good. However, the rules he’s pointing to — taping and transcribing interviews, fact-checking claims and footnoting sources — seem designed to prevent unwitting mistakes. They could head off misremembering what interviewees said, miscommunicating whose words or insights animate part of a story, getting the facts wrong accidentally. It’s less clear that these rules can head off willful lies and efforts to mislead — which is to say, the kind of misdeeds that got Lehrer into trouble.

Moreover, that he now accepts these rules after being caught lying does not indicate that Jonah Lehrer is now especially sage about journalism. It’s remedial work.

Let’s move on from his endorsement (finally) of standards of journalistic practice to the constellation of cognitive biases and weaknesses of will that Jonah Lehrer seems to be trying to saddle with the responsibility for his lies.

Recognizing cognitive biases is a good thing. It is useful to the extent that it helps us avoid getting fooled by them. You’ll recall that knowledge-builders, whether scientists or journalists, are supposed to do their best to avoid being fooled.

But, what Lehrer did is hard to cast in terms of ignoring strong cognitive biases. He made stuff up. He fabricated quotes. He presented other authors’ writing as his own. When confronted about his falsifications, he lied. Did his cognitive biases do all this?

What Jonah Lehrer seems to be sidestepping in his “meh culpa” is the fact that, when he had to make choices about whether to work with the actual facts or instead to make stuff up, about whether to write his own pieces (or at least to properly cite the material from others that he used) or to plagiarize, about whether to be honest about what he’d done when confronted or to lie some more, he decided to be dishonest.

If we’re to believe this was a choice his cognitive biases made for him, then his seem much more powerful (and dangerous) than the garden-variety cognitive biases most grown-up humans have.

It seems to me more plausible that Lehrer’s problem was a weakness of will. It’s not that he didn’t know what he was doing was wrong — he wasn’t fooled by his brain into believing it was OK, or else he wouldn’t have tried to conceal it. Instead, despite recognizing the wrongness of his deeds, he couldn’t muster the effort not to do them.

If Jonah Lehrer cannot recognize this — that it frequently requires conscious effort to do the right thing — it’s hard to believe he’ll be committed to putting that effort into doing the right (journalistic) thing going forward. Verily, given the trust he’s burned with his journalistic colleagues, he can expect that proving himself to be reformed will require extra effort.

But maybe what Lehrer is claiming is something different. Maybe he’s denying that he understood the right thing to do and then opted not to do it because it seemed like too much work. Maybe he’s claiming instead that he just couldn’t resist the temptation (whether of rule-breaking for its own sake or of rule-breaking as the most efficient route to secure the prestige he craved). In other words, maybe he’s saying he was literally powerless, that he could not help committing those misdeeds.

If that’s Lehrer’s claim — and if, in addition, he’s claiming that the piece of his cognitive apparatus that was so vulnerable to temptation that it seized control to make him do wrong is as integral to who Jonah Lehrer is as his cognitive biases are — the whole rehabilitation thing may be a non-starter. If this is how Lehrer understands why he did wrong, he seems to be identifying himself as a wrongdoer with a high probability of reoffending.

If he can parlay that into more five-figure speaker fees, maybe that will be a decent living for Jonah Lehrer, but it will be a big problem for the community of journalists and for the public that trusts journalists as generally reliable sources of information.

Weakness is part of Lehrer, as it is part of all of us, but he is not acknowledging it as a part he could control or counteract by concerted effort, or by asking others for help.

It’s part of him, but not in a way that inclines him to actually take responsibility, or to acknowledge that he could have done otherwise under the circumstances.

If he couldn’t have done otherwise — and if he might not be able to when faced with similar temptation in the future — then Jonah Lehrer has no business in journalism. Until he can recognize his own agency, and the responsibility that attaches to it, the most he has to offer is one more cautionary tale.
_____
*Fact check: I have absolutely no idea how many other bloggers took five years of Latin. My evidence-free guess is that it’s not just me.

Gender bias: ethical implications of an empirical finding.

By now, you may have seen the recently published study by Moss-Racusin et al. in the Proceedings of the National Academy of Sciences entitled “Science faculty’s subtle gender biases favor male students”, or the nice discussion by Ilana Yurkiewicz of why these findings matter.

Briefly, the study involved having science faculty from research-focused universities rate materials from potential student candidates for a lab manager position. The researchers attached names to the application materials — some of them male names, some of them female names — at random, and examined how the ratings of the materials correlated with the names that were attached to them. What they found was that the same application materials got a higher rating (i.e., a judgment that the applicant would be more qualified for the job) when the attached name was male than when it was female. Moreover, male and female faculty alike ranked the application more highly when it bore a male name.
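For readers who want to see why this design licenses that inference, here is a minimal sketch in Python of a randomized audit of the kind described above, analyzed with a simple permutation test on the rating gap. Everything in it is invented for illustration (the rating scale, the size of the bias, the number of raters), so treat it as a toy model of the study’s logic, not as the authors’ data or analysis.

```python
import random
import statistics

# Toy simulation of a randomized audit design like the one described above.
# All numbers are invented for illustration; they are NOT the study's data.
random.seed(42)

N_RATERS = 120  # hypothetical number of faculty raters

def rate(name_is_male: bool) -> float:
    """One rater's competence rating of the SAME application materials.

    Baseline opinion is noisy; a hypothetical bias nudges ratings of
    male-named applications up by half a point on a 7-point scale.
    """
    baseline = random.gauss(4.0, 0.8)
    bias = 0.5 if name_is_male else 0.0
    return min(7.0, max(1.0, baseline + bias))

# Random assignment: each rater sees identical materials; only the name varies.
assignments = [random.random() < 0.5 for _ in range(N_RATERS)]
ratings = [rate(is_male) for is_male in assignments]

male = [r for r, m in zip(ratings, assignments) if m]
female = [r for r, m in zip(ratings, assignments) if not m]
observed_gap = statistics.mean(male) - statistics.mean(female)

def shuffled_gap() -> float:
    """Rating gap after randomly re-shuffling the name labels.

    If the name made no difference, shuffled labels should produce
    gaps as large as the observed one reasonably often.
    """
    labels = assignments[:]
    random.shuffle(labels)
    m = [r for r, s in zip(ratings, labels) if s]
    f = [r for r, s in zip(ratings, labels) if not s]
    return statistics.mean(m) - statistics.mean(f)

n_perm = 10_000
p_value = sum(abs(shuffled_gap()) >= abs(observed_gap) for _ in range(n_perm)) / n_perm

print(f"observed gap (male - female): {observed_gap:+.2f}")
print(f"two-sided permutation p-value: {p_value:.4f}")
```

The point of the random assignment is that the only systematic difference between the two piles of applications is the name, so a gap that survives a test like this has nowhere else to come from. That is what makes the study’s conclusion so hard to wave off as noise.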

It strikes me that there are some ethical implications that flow from this study to which scientists (among others) should attend:

  1. Confidence that your judgments are objective is not a guarantee that your judgments are objective, and your intent to be unbiased may not be enough. The results of this study show a pattern of difference in ratings for which the only plausible explanation is the presence of a male name or a female name for the applicant. The faculty members treated the task they were doing as an objective evaluation of candidates based on prior research experience, faculty recommendations, the applicant’s statement, GRE scores, and so forth — that they were sorting out the well-qualified from the less-well-qualified — but they didn’t do that sorting solely on the basis of the actual experience and qualifications described in the application materials. If they had, the rankings wouldn’t have displayed the gendered split they did. The faculty in the study undoubtedly did not mean to bring gender bias to the evaluative task, but the results show that they did, whether they intended to or not.
  2. If you want to build reliable knowledge about the world, it’s helpful to identify your biases so they don’t end up getting mistaken for objective findings. As I’ve mentioned before, objectivity is hard. One of the hardest things about being objective is the fact that so many of our biases are unconscious — we don’t realize that we have them. If you don’t realize that you have a bias, it’s much harder to keep that bias from creeping into your knowledge-building, from the way you frame the question you’re exploring to how you interpret data and draw conclusions from them. The biases you know about are easier to keep on a short leash.
  3. If a methodologically sound study finds that science faculty have a particular kind of bias, and if you are science faculty, you probably should assume that you might also have that bias. If you happen to have good independent evidence that you do not display the particular bias in question, that’s great — one less unconscious bias that might be messing with your objectivity. However, in the absence of such good independent evidence, the safest assumption to make is that you’re vulnerable to the bias too — even if you don’t feel like you are.
  4. If you doubt the methodological soundness of a study finding that science faculty have a particular kind of bias, it is your responsibility to identify the methodological flaws. Ideally, you’d also want to communicate with the authors of the study, and with other researchers in the field, about the flaws you’ve identified in the study methodology. This is how scientific communities work together to build a reliable body of knowledge we all can use. And a responsible scientist doesn’t reject the conclusions of a study just because they don’t match her hunches about how things are. The evidence is how scientists know anything.
  5. If there’s reason to believe you have a particular kind of bias, there’s reason to examine what kinds of judgments of yours it might influence beyond the narrow scope of the experimental study. Could gender bias influence whose data in your lab you trust the most? Which researchers in your field you take most seriously? Which theories or discoveries are taken to be important, and which others are taken to be not-so-important? If so, you have to be honest with yourself and recognize the potential for this bias to interfere with your interaction with the phenomena, and with your interaction with other scientists to tackle scientific questions and build knowledge. If you’re committed to building reliable knowledge, you need to find ways to expose the operation of this bias, or to counteract its effects. (Also, to the extent that this bias might play a role in the distribution of rewards like jobs or grants in scientific careers, being honest with yourself probably means acknowledging that the scientific community does not operate as a perfect meritocracy.)

Each of these acknowledgments looks small on its own, but I will not pretend that that makes them easy. I trust that this won’t be a deal-breaker. Scientists do lots of hard things, and people committed to building reliable knowledge about the world should be ready to take on pieces of self-knowledge relevant to that knowledge-building. Even when they hurt.

Dueling narratives: what’s the job market like for scientists and is a Ph.D. worth it?

At the very end of August, Slate posted an essay by Daniel Lametti taking up, yet again, the question of what a science Ph.D. is worth in a world where the pool of careers for science Ph.D.s in academia and industry is (maybe) shrinking. Lametti, who is finishing up a Ph.D. in neuroscience, expresses optimism that the outlook is not so bleak, reading the tea leaves of some of the available survey data to conclude that unemployment is not much of a problem for science Ph.D.s. Moreover, he points to the rewards of the learning that happens in a Ph.D. program as something that might be valued in its own right rather than merely as an instrument for making a living later. (This latter argument will no doubt sound familiar.)

Of course, Chemjobber had to rain on the parade of this youthful optimism. (In the blogging biz, we call that “due diligence”.) Chemjobber critiques Lametti’s reading of the survey data (and points out some important limitations of those data), questions his assertion that a science Ph.D. is a sterling credential to get you into all manner of non-laboratory jobs, reiterates that the opportunity costs of spending years in a Ph.D. program are non-negligible, and reminds us that unemployed Ph.D. scientists do exist.

Beryl Benderly mounts similar challenges to Lametti’s take on the job market at the Science Careers blog.

You’ve seen this disagreement before. And, I reckon, you’re likely to see it again.

But this time, I feel like I’m starting to notice what may be driving these dueling narratives about how things are for science Ph.D.s. It’s not just an inability to pin down the facts about the job markets, or the employment trajectories of those science Ph.D.s. In the end, it’s not even a deep disagreement about what may be valuable in economic or non-economic ways about the training one receives in a science Ph.D. program.

Where one narrative focuses on the overall trends within STEM fields, the other focuses on individual experiences. And it strikes me that part of what drives the dueling narratives is a felt tension between voicing the individual outlook it may be helpful to adopt for one’s own well-being and acknowledging the systemic forces that tend to produce unhelpful outcomes.

Of course, part of the problem in these discussions may be that we humans generally have a hard time reconciling overall trends with individual experiences. Even if it were true that the employment outlook was very, very good for people in your field with Ph.D.s, if you have one of those Ph.D.s and you can’t find a job with it, the employment situation is not good for you. Similarly, if you’re a person who can find happiness (or at least satisfaction) in pretty much whatever situation you’re thrown into, a generally grim job market in your field may not bug you very much.

But I think the narratives keep missing each other because of something other than not being able to reconcile the pooled labor data with our own anecdata. I think, at their core, the two narratives are trying to do different things.

* * *

I’ve written before about some of what I found valuable in my chemistry Ph.D. program, including the opportunity to learn how scientific knowledge is made by actually making some. That’s not to say that the experience is without its challenges, and it’s hard for me to imagine taking on those challenges without a burning curiosity, a drive to go deeper than sitting in a classroom and learning the science that others have built.

It can feel a bit like a calling — like what I imagine people learning how to be artists or musicians must feel. And, if you come to this calling in a time where you know the job prospects at the other end are anything but certain, you pretty much have to do the gut-check that I imagine artists and musicians do, too:

Am I brave enough to try this, even though I know there’s a non-negligible chance that I won’t be able to make a career out of it? Is it worth it to devote these years of toil and study, with long hours and low salary, to immersing myself in this world, even knowing I might not get to stay in it?

A couple quick caveats here: I suspect it’s much easier to play music or make art “on the side” after you get home from the job that pays for your food but doesn’t feed your soul than it is to do science on the side. (Maybe this points to the need for community science workspaces?) And, it’s by no means clear that those embarking on Ph.D. training in a scientific field are generally presented with realistic expectations about the job market for Ph.D.s in their field.

Despite the fact that my undergraduate professors talked up a supposed shortage of Ph.D. chemists (one that was not reflected in the labor statistics less than a year later), I somehow came to my own Ph.D. training with the attitude that it was an open question whether I’d be able to get a job as a chemist in academia or industry or a national lab. I knew I was going to leave my graduate program with a Ph.D., and I knew I was going to work.

The rent needed to be paid, and I was well acclimated to a diet that alternated between lentils and ramen noodles, so I didn’t see myself holding out for a dream job with a really high salary and luxe benefits. A career was something I wanted, but the more pressing need was a paycheck.

Verily, by the time I completed my chemistry Ph.D., this was a very pressing need. It’s true that students in a chemistry Ph.D. program are “paid to go to school,” but we weren’t paid much. I kept my head, and my credit card balance, mostly above water by being a cyclist rather than a driver, saving the money for registration, insurance, parking permits, and gas that my car-owning classmates had to spend. But it took two veterinary emergencies, one knee surgery, and ultimately the binding and microfilming fee I had to pay when I submitted the final version of my dissertation to completely wipe out my savings.

I was ready to teach remedial arithmetic at a local business college for $12 an hour (and significantly less than 40 hours a week) if it came to that. Ph.D. chemist or not, I needed to pay the bills.

Ultimately, I did line up a postdoctoral position, though I didn’t end up taking it because I had my epiphany about needing to become a philosopher. When I was hunting for postdocs, though, I knew that there was still no guarantee of a tenure track job, or a gig at a national lab, or a job in industry at the end of the postdoc. I knew plenty of postdocs who were still struggling to find a permanent job. Even before my philosophy epiphany, I was thinking through other jobs I was probably qualified to do that I wouldn’t hate — because I kind of assumed it would be hard, and that the economy wouldn’t feel like it owed me anything, and that I might be lucky, but I also might not be. Seeing lots of really good people have really bad luck on the job market can do that to a person.

My individual take on the situation had everything to do with keeping me from losing it. It’s healthy to be able to recognize that bad luck is not the same as the universe (or even your chosen professional community) rendering the judgment that you suck. It’s healthy to be able to weather the bad luck rather than be crushed by it.

But, it’s probably also healthy to recognize when there may be systemic forces making it a lot harder than it needs to be to join a professional community for the long haul.

* * *

Indeed, the discussion of the community-level issues in scientific fields is frequently much less optimistic than the individual-level pep-talks people give themselves or each other.

What can you say about a profession that asks people who want to join it to sink as much as a decade into graduate school, and maybe another decade into postdoctoral positions (jobs defined as not permanent) just to meet the training prerequisite for desirable permanent jobs that may not exist in sufficient numbers to accommodate all the people who sacrificed maybe two decades at relatively low salaries for their level of education, who likely had to uproot and change their geographical location at least once, and who succeeded at the research tasks they were asked to take on during that training? And what can you say about that profession when the people asked to embark on this gamble aren’t given anything like a realistic estimate of their likelihood of success?

Much of what people do say frames this as a problem of supply and demand. There are just too many qualified candidates for the available positions, at least from the point of view of the candidates. From the point of view of a hiring department or corporation, the excess of available workers may seem like less of a problem, driving wages downward and making it easier to sell job candidates on positions in “geographically unattractive” locations.

Things might get better for the job seeker with a Ph.D. if the supply of science Ph.D.s were adjusted downward, but this would disrupt another labor pool: graduate students working to generate data for PIs in their graduate labs. Given the “productivity” expectations on those PIs, imposed by institutions and granting agencies, reducing student throughput in Ph.D. programs is likely to make things harder for those lucky enough to have secured tenure track positions in the first place.

The narrative about the community-level issues takes on a different tone depending on who’s telling it, and with which end of the power gradient they identify. Do Ph.D. programs depend on presenting a misleading picture of job prospects and quality of life for Ph.D. holders to create the big pools of student labor on which they depend? Do PIs and administrators running training programs encourage the (mistaken) belief that the academic job market is a perfect meritocracy, and that each new Ph.D.’s failure will be seen as hers alone? Are graduate students themselves to blame for not considering the employment data before embarking on their Ph.D. programs? Are they being spoiled brats when they should recognize that their unemployment numbers are much, much lower than for the population as a whole, that most employed people have nothing like tenure to protect their jobs, and indeed that most people don’t have jobs that have anything to do with their passions?

So the wrangling continues over whether things are generally good or generally bad for Ph.D. scientists, over whether the right basis for evaluating this is the life Ph.D. programs promise when they recruit students (which maybe they are only promising to the very best — or the very lucky) or the life most people (including large numbers of people who never finished college, or high school) can expect, over whether this is a problem that ought to be addressed or simply how things are.

* * *

The narratives here feel like they’re in conflict because they’re meant to do different things.

The individual-level narrative is intended to buoy the spirits of the student facing adversity, to find some glimmers of victory that can’t be taken away even by a grim employment market. It treats the background conditions as fixed, or at least as something the individual cannot change; what she can control is her reaction to them.

It’s pretty much Stoicism, but with lab coats.

The community-level narrative instead strives for a more accurate accounting of what all the individual trajectories add up to, focusing not on who has experienced personal growth but on who is employed. Here too, there is a striking assumption that The Way Things Are is a stable feature of the system, not something individual action could change — or that individual members of the community should feel any responsibility for changing.

And this is where I think there’s a need for another narrative, one with the potential to move us beyond the disagreement and disgruntlement we see each time the other two collide.

Professional communities, after all, are made up of individuals. People, not the economy, make hiring decisions. Members of professional communities make decisions about how they’re going to treat each other, and in particular about how they will treat the most vulnerable members of their community.

Graduate students are not receiving a mere service or commodity from their Ph.D. programs (“Would you like to supersize that scientific education?”). They are entering a relationship resembling an apprenticeship with the members of the professional community they’re trying to join. Arguably, this relationship means that the professional community has some responsibility for the ongoing well-being of those new Ph.D.s.

To be clear, I don’t think this is a responsibility to infantilize new Ph.D.s, to cover them with bubble-wrap, or to create for them a sparkly artificial economy full of rainbows and unicorns. But the professional community probably does have a duty to provide help when it can.

Maybe this help would come in the form of showing compassion, rather than claiming that the people who deserve to be scientists will survive the rigors of the job market and that those who don’t weren’t meant to be in science. Maybe it would come by examining one’s own involvement in a system that defines success too narrowly, or that treats Ph.D. students as a consumable resource, or that fails to help those students cultivate a broad enough set of skills to ensure that they can find some gainful employment. Maybe it would come from professional communities finding ways to include as real members people they have trained but who have not been able to find employment in that profession.

Individuals make the communities. The aggregate of the decisions those communities make creates the economic conditions and the quality-of-life issues. Treating current conditions — including current ways of recruiting students or describing the careers and lives they ought to expect at the other end of their training — as fixed for all time is a way of ignoring how individuals and institutions are responsible for those conditions. And it doesn’t do anything to help change them.

It’s useful to have discussions of how to navigate the waters of The Way Things Are. It’s also useful to try to get accurate data about the topography of those waters. But these discussions shouldn’t distract us from serious discussions of The Way Things Could Be — and of how scientific communities can get there from here.