The ethics of opting out of vaccination.

At my last visit to urgent care with one of my kids, the doctor who saw us mentioned that there is currently an epidemic of pertussis (whooping cough) in California, one that presents serious danger for the very young children (among others) hanging out in the waiting area. We double-checked that both my kids are current on their pertussis vaccinations (they are). I checked that I was current on my own pertussis vaccination back in December when I got my flu shot.

Sharing a world with vulnerable little kids, it’s just the responsible thing to do.

You’re already on the internet reading about science and health, so it will probably come as no surprise to you that California’s pertussis epidemic is a result of the downturn in vaccination in recent years, nor that this downturn has been driven in large part by parents worried that childhood vaccinations might lead to their kids getting autism, or asthma, or some other chronic disease. Never mind that study after study has failed to uncover evidence of such a link; these parents are weighing the risks and benefits (at least as they understand them) of vaccinating or opting out and trying to make the best decision they can for their children.

The problem is that the other children with whom their children share a world get ignored in the calculation.

Of course, parents are accountable to the kids they are raising. They have a duty to do what is best for them, as well as they can determine what that is. They probably also have a duty to put some effort into making a sensible determination of what’s best for their kids (which may involve seeking out expert advice, and evaluating who has the expertise to be offering trustworthy advice).


But parents and kids are also part of a community, and arguably they are accountable to other members of that community. I’d argue that members of a community may have an obligation to share relevant information with each other — and, to avoid spreading misinformation, not to represent themselves as experts when they are not. Moreover, when parents make choices with the potential to impact not only themselves and their kids but also other members of the community, they have a duty to do what is necessary to minimize bad impacts on others. Among other things, this might mean keeping your unvaccinated-by-choice kids isolated from kids who haven’t been vaccinated because of their age, because of compromised immune function, or because they are allergic to a vaccine ingredient. If you’re not willing to do your part for herd immunity, you need to take responsibility for staying out of the herd.

Otherwise, you are a free-rider on the sacrifices of the other members of the community, and you are breaking trust with them.

I know from experience that this claim upsets non-vaccinating parents a lot. They imagine that I am declaring them bad people, guilty of making a conscious choice to hurt others. I am not. However, I do think they are making a choice that has the potential to cause great harm to others. If I didn’t think that pointing out the potential consequences might be valuable to these non-vaccinating parents, at least in helping them understand more fully what they’re choosing, I wouldn’t bother.

So here, let’s take a careful look at my claim that vaccination refuseniks are free-riders.


First, what’s a free-rider?


In the simplest terms, a free-rider is someone who accepts a benefit without paying for it. The free-rider is able to partake of this benefit because others have assumed the costs necessary to bring it about. But if no one was willing to assume those costs (or indeed, in some cases, if there is not a critical mass of people assuming those costs), then that benefit would not be available, either.


Thus, when I claim that people who opt out of vaccination are free-riders on society, what I’m saying is that they are receiving benefits for which they haven’t paid their fair share — and that they receive these benefits only because other members of society have assumed the costs by being vaccinated.


Before we go any further, let’s acknowledge that people who choose to vaccinate and those who do not probably have very different understandings of the risks and benefits, and especially of their magnitudes and likelihoods. Ideally, we’d be starting this discussion about the ethics of opting out of vaccination with some agreement about what the likely outcomes are, what the unlikely outcomes are, what the unfortunate-but-tolerable outcomes are, and what the to-be-avoided-at-all-costs outcomes are.


That’s not likely to happen. People don’t even accept the same facts (regardless of scientific consensus), let alone the same weightings of them in decision making.


But ethical decision making is supposed to help us get along even in a world where people have different values and interests than our own. So, plausibly, we can talk about whether certain kinds of choices fit the pattern of free-riding even if we can’t come to agreement on probabilities and a hierarchy of really bad outcomes.


So, let’s say all the folks in my community are vaccinated against measles except me. Within this community (assuming I’m not wandering off to exotic and unvaccinated lands, and that people from exotic and unvaccinated lands don’t come wandering through), my chances of getting measles are extremely low. Indeed, they are as low as they are because everyone else in the community has been vaccinated against measles — none of my neighbors can serve as a host where the virus can hang out and then get transmitted to me. (By the way, the NIH has a nifty Disease Transmission Simulator that you can play around with to get a feel for how infectious diseases and populations whose members have differing levels of immunity interact.)
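(That simulator isn’t reproduced here, but the basic idea is easy to sketch. The toy model below is a rough illustration in plain Python with made-up parameter values; it is not the NIH simulator and not an epidemiological forecast. It seeds a single measles-like infection into a well-mixed community and counts how many people end up infected at different levels of vaccination coverage.)

```python
import random

def simulate_outbreak(pop_size=10_000, coverage=0.90, vaccine_efficacy=0.97,
                      r0=15, seed=1):
    """Toy, well-mixed outbreak: each case makes r0 contacts drawn uniformly
    at random; a contact becomes a new case only if still susceptible.
    Returns the total number of people infected."""
    rng = random.Random(seed)
    # True = susceptible; False = immune (vaccinated and the vaccine "took").
    susceptible = [not (rng.random() < coverage * vaccine_efficacy)
                   for _ in range(pop_size)]
    # Start with one infectious traveler from outside the community.
    index_case = rng.randrange(pop_size)
    susceptible[index_case] = False
    new_cases, total_cases = [index_case], 1
    while new_cases:
        next_wave = []
        for _ in new_cases:
            for _ in range(r0):
                contact = rng.randrange(pop_size)
                if susceptible[contact]:
                    susceptible[contact] = False
                    next_wave.append(contact)
        new_cases = next_wave
        total_cases += len(next_wave)
    return total_cases

for cov in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"coverage {cov:.0%}: {simulate_outbreak(coverage=cov):5d} cases")
```

Even this crude sketch shows the qualitative pattern: at low coverage a single introduction blows up into an epidemic, while at high coverage it fizzles out after a handful of cases.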


I get a benefit (freedom from measles) that I didn’t pay for. The other folks in my community who got the vaccine paid for it.


In fact, it usually doesn’t require that everyone else in the community be vaccinated against measles for me to be reasonably safe from it. Owing to “herd immunity,” measles is unlikely to run through the community if the people without immunity are relatively few and well interspersed with the vaccinated people. This is a good thing, since babies in the U.S. don’t get their first vaccination against measles until 12 months, and some people are unable to get vaccinated even if they’re willing to bear the cost (e.g., because they have compromised immune systems or are allergic to an ingredient of the vaccine). And, in other cases, people may get vaccinated but the vaccines might not be fully effective — if exposed, they might still get the disease. Herd immunity tends to protect these folks from the disease — at least as long as enough of the herd is vaccinated.
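How much of the herd is “enough”? A standard back-of-the-envelope estimate (a simplification that assumes a well-mixed population and a vaccine that fully protects everyone who gets it) says an outbreak can no longer sustain itself once more than 1 - 1/R0 of the population is immune, where R0 is the average number of people a single case infects in a fully susceptible population. Plugging in rough, illustrative R0 values gives a sense of why measles and pertussis demand such high coverage:

```python
# Rough herd immunity thresholds: in the simplest model an outbreak dies out
# once R0 * (fraction susceptible) < 1, i.e. once more than 1 - 1/R0 of the
# population is immune. The R0 values are approximate textbook figures, used
# here only for illustration.
for disease, r0 in [("measles", 15), ("pertussis", 14), ("polio", 6)]:
    print(f"{disease:9s} (R0 ~ {r0}): about {1 - 1/r0:.0%} of the herd must be immune")
```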


If too few members of the herd are vaccinated, even some of those who have borne the costs of being vaccinated (because even very good vaccines can’t deliver 100% protection to 100% of the people who get them), or who would bear those costs were they able (owing to their age or health or access to medical care), may miss out on the benefit. Too many free-riders can spoil things even for those who are paying their fair share.


A standard reply from non-vaccinating parents is that their unvaccinated kids are not free-riders on the vaccinated mass of society because they actually get diseases like chicken pox, pertussis, and measles (and are not counting on avoiding the other diseases against which people are routinely vaccinated). In other words, they argue, they didn’t pay the cost, but they didn’t get the benefit, either.


Does this argument work?


I’m not convinced that it does. First off, even though unvaccinated kids may get a number of diseases that their vaccinated neighbors do not, it is still unlikely that they will catch everything against which we routinely vaccinate. By opting out of vaccination but living in the midst of a herd that is mostly vaccinated, non-vaccinating parents significantly reduce the chances of their kids getting many diseases compared to what the chances would be if they lived in a completely unvaccinated herd. That statistical reduction in disease is a benefit, and the people who got vaccinated are the ones paying for it.


Now, one might reply that unvaccinated kids are actually incurring harm from their vaccinated neighbors, for example if they contract measles from a recently vaccinated kid shedding the live virus from the vaccine. However, the measles virus in the MMR vaccine is an attenuated virus — which is to say, it’s quite likely that unvaccinated kids contracting measles from vaccinated kids will have a milder bout of measles than they might have if they had been exposed to a full-strength measles virus out in the wild. A milder case of measles is a benefit, at least when the alternative is a severe case of measles. Again, it’s a benefit that is available because other people bore the cost of being vaccinated.


Indeed, even if they were to catch every single disease against which we vaccinate, unvaccinated kids would still reap further benefits by living in a society with a high vaccination rate. The fact that most members of society are vaccinated means that there is much less chance that epidemic diseases will shut down schools, industries, or government offices, much more chance that hospitals and medical offices will not be completely overwhelmed when outbreaks happen, much more chance that economic productivity will not be crippled and that people will be able to work and pay the taxes that support all manner of public services we take for granted.

The people who vaccinate are assuming the costs that bring us a largely epidemic-free way of life. Those who opt out of vaccinating are taking that benefit for free.


I understand that the decision not to vaccinate is often driven by concerns about what costs those who receive the vaccines might bear, and whether those costs might be worse than the benefits secured by vaccination. Set aside for the moment the issue of whether these concerns are well grounded in fact. Instead, let’s look at the parallel we might draw:
If I vaccinate my kids, no matter what your views about the etiology of autism and asthma, you are not going to claim that my kids getting their shots raises your kids’ odds of getting autism or asthma. But if you don’t vaccinate your kids, even if I vaccinate mine, your decision does raise my kids’ chance of catching preventable infectious diseases. My decision to vaccinate doesn’t hurt you (and probably helps you in the ways discussed above). Your decision not to vaccinate could well hurt me.


The asymmetry of these choices is pretty unavoidable.


Here, it’s possible that a non-vaccinating parent might reply by saying that it ought to be possible for her to prioritize protecting her kids from whatever harms vaccination might bring to them without being accused of violating a social contract.


The herd immunity thing works for us because of an implicit social contract of sorts: those who are medically able to be vaccinated get vaccinated. Obviously, this is a social contract that views the potential harms of the diseases as more significant than the potential harms of vaccination. I would argue that under such a social contract, we as a society have an obligation to take care of those who end up paying a higher cost to achieve the shared benefit.


But if a significant number of people disagree, and think the potential harms of vaccination outweigh the potential harms of the diseases, shouldn’t they be able to opt out of this social contract?


The only way to do this without being a free-rider is to opt out of the herd altogether — or to ensure that your actions do not bring additional costs to the folks who are abiding by the social contract. If you’re planning on getting those diseases naturally, this would mean taking responsibility for keeping the germs contained and away from the herd (which, after all, contains members who are vulnerable owing to age, medical reasons they could not be vaccinated, or the chance of less than complete immunity from the vaccines). No work, no school, no supermarkets, no playgrounds, no municipal swimming pools, no doctor’s office waiting rooms, nothing while you might be able to transmit the germs. The whole time you’re able to transmit the germs, you need to isolate yourself from the members of society whose default assumption is vaccination. Otherwise, you endanger members of the herd who bore the costs of achieving herd immunity while reaping benefits (of generally disease-free work, school, supermarkets, playgrounds, municipal swimming pools, doctor’s office waiting rooms, and so forth, for which you opted out of paying your fair share).


Since you’ll generally be able to transmit these diseases before the first symptoms appear — even before you know for sure that you’re infected — you will not be able to take regular contact with the vaccinators for granted.


And if you’re traveling to someplace where the diseases whose vaccines you’re opting out of are endemic, you have a duty not to bring the germs back with you to the herd of vaccinators. Does this mean quarantining yourself for some minimum number of days before your return? It probably does. Would this be a terrible inconvenience for you? Probably so, but the 10-month-old who catches the measles you bring back might also be terribly inconvenienced. Or worse.

Here, I don’t think I’m alone in judging the harm of a vaccine refusenik giving an infant pertussis as worse than the harm in making a vaccine refusenik feel bad about violating a social contract.


An alternative, one which would admittedly require some serious logistical work, might be to join a geographically isolated herd of other people opting out of vaccination, and to commit to staying isolated from the vaccinated herd. Indeed, if the unvaccinated herd showed a lower incidence of asthma and autism after a few generations, perhaps the choices of the members of the non-vaccinating herd would be vindicated.


In the meantime, however, opting out of vaccines but sharing a society with those who get vaccinated is taking advantage of benefits that others have paid for and even threatening those benefits. Like it or not, that makes you a free-rider.
* * * * *
An earlier version of this essay originally appeared on my other blog.

Strategies to address questionable statistical practices.

If you have not yet read all you want to read about the wrongdoing of social psychologist Diederik Stapel, you may be interested in reading the 2012 Tilburg Report (PDF) on the matter. The full title of the English translation is “Flawed science: the fraudulent research practices of social psychologist Diederik Stapel” (in Dutch, “Falende wetenschap: De frauduleuze onderzoekspraktijken van sociaal-psycholoog Diederik Stapel”), and it’s 104 pages long, which might make it beach reading for the right kind of person.

If you’re not quite up to the whole report, Error Statistics Philosophy has a nice discussion of some of the highlights. In that post, D. G. Mayo writes:

The authors of the Report say they never anticipated giving a laundry list of “undesirable conduct” by which researchers can flout pretty obvious requirements for the responsible practice of science. It was an accidental byproduct of the investigation of one case (Diederik Stapel, social psychology) that they walked into a culture of “verification bias”. Maybe that’s why I find it so telling. It’s as if they could scarcely believe their ears when people they interviewed “defended the serious and less serious violations of proper scientific method with the words: that is what I have learned in practice; everyone in my research environment does the same, and so does everyone we talk to at international conferences” (Report 48). …

I would place techniques for ‘verification bias’ under the general umbrella of techniques for squelching stringent criticism and repressing severe tests. These gambits make it so easy to find apparent support for one’s pet theory or hypotheses, as to count as no evidence at all (see some from their list ). Any field that regularly proceeds this way I would call a pseudoscience, or non-science, following Popper. “Observations or experiments can be accepted as supporting a theory (or a hypothesis, or a scientific assertion) only if these observations or experiments are severe tests of the theory.”

You’d imagine this would raise the stakes pretty significantly for the researcher who could be teetering on the edge of verification bias: fall off that cliff and what you’re doing is no longer worthy of the name scientific knowledge-building.

Psychology, after all, is one of those fields given a hard time by people in “hard sciences,” which are popularly reckoned to be more objective, more revealing of actual structures and mechanisms in the world — more science-y. Fair or not, this might mean that psychologists have something to prove about their hardheadedness as researchers, about the stringency of their methods. Some peer pressure within the field to live up to such standards would obviously be a good thing — and certainly, it would be a better thing for the scientific respectability of psychology than an “everyone is doing it” excuse for less stringent methods.

Plus, isn’t psychology a field whose practitioners should have a grip on the various cognitive biases to which we humans fall prey? Shouldn’t psychologists understand better than most the wisdom of putting structures in place (whether embodied in methodology or in social interactions) to counteract those cognitive biases?

Remember that part of Stapel’s M.O. was keeping current with the social psychology literature so he could formulate hypotheses that fit very comfortably with researchers’ expectations of how the phenomena they studied behaved. Then, fabricating the expected results for his “investigations” of these hypotheses, Stapel caught peer reviewers being credulous rather than appropriately skeptical.

Short of trying to reproduce Stapel’s experiments themselves, how could peer reviewers avoid being fooled? Mayo has a suggestion:

Rather than report on believability, researchers need to report the properties of the methods they used: What was their capacity to have identified, avoided, admitted verification bias? The role of probability here would not be to quantify the degree of confidence or believability in a hypothesis, given the background theory or most intuitively plausible paradigms, but rather to check how severely probed or well-tested a hypothesis is– whether the assessment is formal, quasi-formal or informal. Was a good job done in scrutinizing flaws…or a terrible one?  Or was there just a bit of data massaging and cherry picking to support the desired conclusion? As a matter of routine, researchers should tell us.

I’m no social psychologist, but this strikes me as a good concrete step that could help peer reviewers make better evaluations — and that should help scientists who don’t want to fool themselves (let alone their scientific peers) to be clearer about what they really know and how well they really know it.
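As a small illustration of why reporting the properties of the method matters, imagine a hypothetical researcher who measures five independent outcomes and writes up only whichever one crosses p < .05. The sketch below (a made-up toy example, not anything drawn from Mayo or the Tilburg report) shows how often pure noise yields a reportable result under that practice:

```python
import random
from scipy.stats import ttest_1samp

rng = random.Random(7)
runs, n_per_outcome, n_outcomes, alpha = 2000, 30, 5, 0.05

hits = 0
for _ in range(runs):
    # Five independent outcomes, each with a true effect of exactly zero.
    p_values = [
        ttest_1samp([rng.gauss(0, 1) for _ in range(n_per_outcome)], 0).pvalue
        for _ in range(n_outcomes)
    ]
    if min(p_values) < alpha:  # report only whichever outcome "worked"
        hits += 1

print(f"Studies with a reportable p < .05 somewhere among "
      f"{n_outcomes} null outcomes: {hits / runs:.1%}")
# Expect roughly 1 - 0.95**5, i.e. about 23%, despite there being no real
# effects at all.
```

A write-up that disclosed “five outcomes were measured; the most favorable is shown” would let reviewers see at once how little the headline p-value means, which is the kind of disclosure about a method’s capacity to avoid verification bias that Mayo is recommending.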

The continuum between outright fraud and “sloppy science”: inside the frauds of Diederik Stapel (part 5).

It’s time for one last look at the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. We’ve already examined the strategy Stapel pursued to fabricate persuasive “results”, the particular harms Stapel’s misconduct did to the graduate students he was training, and the apprehension that students and colleagues who suspected fraud was afoot felt about the prospect of blowing the whistle on Stapel. To close, let’s look at some of the uncomfortable lessons the Stapel case has for his scientific community — and perhaps for other scientific communities as well.

Bhattacharjee writes:

At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.

(Bold emphasis added.)

If the writers of this report are correct, the field of psychology failed in multiple ways here. First, its members were insufficiently skeptical — both of Stapel’s purported findings and of their own preconceptions — to nip Stapel’s fabrications in the bud. Second, they were themselves routinely engaging in practices that were bound to mislead.

Maybe these practices don’t rise to the level of outright fabrication. However, neither do they rise to the level of rigorous and intellectually honest scientific methodology.
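To make concrete why a practice like the one named in the quoted passage above, stopping data collection once the results confirm a hypothesis, misleads even when nothing is fabricated, here is a small simulation (a hypothetical illustration, not anything taken from the Tilburg report). Every simulated study has a true effect of exactly zero, yet a researcher who peeks at the p-value after every few subjects and stops as soon as p < .05 will “find” significant effects well over the nominal 5% of the time.

```python
import random
from scipy.stats import ttest_1samp

def one_null_study(rng, max_n=100, peek_every=5, alpha=0.05):
    """One simulated study of a true null effect, run with optional stopping:
    collect a few observations, peek at the p-value, stop as soon as p < alpha."""
    data = []
    while len(data) < max_n:
        data.extend(rng.gauss(0, 1) for _ in range(peek_every))
        p = ttest_1samp(data, 0).pvalue   # one-sample t-test against the true mean
        if p < alpha:
            return True                   # a "significant" result on pure noise
    return False                          # reached max_n with nothing to report

rng = random.Random(1)
runs = 2000
false_positives = sum(one_null_study(rng) for _ in range(runs))
print(f"False-positive rate with optional stopping: {false_positives / runs:.1%}")
# A single test at a fixed sample size would come in near 5%; with repeated
# peeking the rate is typically several times higher.
```

Nothing in that procedure involves making up data, but the procedure itself is tilted toward producing the desired answer, which is the sense in which “sloppy” understates the problem.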

There could be a number of explanations for these questionable methodological choices.

Possibly some of the psychologists engaging in this “sloppy science” lack a good understanding of statistics or of what counts as a properly rigorous test of one’s hypothesis. Essentially, this is an explanation of faulty methodology on the basis of ignorance. However, it’s likely that this is culpable ignorance — that psychology researchers have a positive duty to learn what they ought to know about statistics and hypothesis testing, and to avail themselves of available resources to ensure that they aren’t ignorant in this particular way.

I don’t know if efforts to improve statistics education are a part of the “sustained effort to remedy the problems that have been brought to light by Stapel’s fraud,” but I think they should be.

Another explanation for the lax methodology decried by the report is alluded to in the quoted passage: perhaps psychology researchers let the strength of their own intuitions about what they were going to see in their research results drive their methodology. Perhaps they unconsciously drifted away from methodological rigor and toward cherry-picking and misuse of statistics and the like because they knew in their hearts what the “right” answer would be. Given this kind of conviction, of course they would reject methods that didn’t yield the “right” answer in favor of those that did.

Here, too, the explanation does not provide an excuse. The scientist’s brief is not to take strong intuitions as true, but to look for evidence — especially evidence that could demonstrate that the intuitions are wrong. A good scientist should be on the alert for instances where she is being fooled by her intuitions. Rigorous methodology is one of the tools at her disposal to avoid being fooled. Organized skepticism from her fellow scientists is another.

From here, the explanations drift into waters where the researchers are even more culpable for their sloppiness. If you understand how to test hypotheses properly, and if you’re alert enough to the seductive power of your intuitions, it seems like the other reason you might engage in “sloppy science” is to make your results look less ambiguous, more certain, more persuasive than they really are, either to your fellow scientists or to others (administrators evaluating your tenure or promotion case? the public?). Knowingly providing a misleading picture of how good your results are is lying. It may be a lie of a smaller magnitude than Diederik Stapel’s full-scale fabrications, but it’s still dishonest.

And of course, there are plenty of reasons scientists (like other human beings) might try to rationalize a little lie as being not that bad. Maybe you really needed more persuasive preliminary data than you got to land the grant without which you won’t be able to support graduate students. Maybe you needed to make your conclusions look stronger to satisfy the notoriously difficult peer reviewers at the journal to which you submitted your manuscript. Maybe you are on the verge of getting credit for a paradigm-shaking insight in your field (if only you can put up the empirical results to support it), or of beating a competing research group to the finish line for an important discovery (if only you can persuade your peers that the results you have establish that discovery).

But maybe all these excuses prioritize scientific scorekeeping to the detriment of scientific knowledge-building.

Science is supposed to be an activity aimed at building a reliable body of knowledge about the world. You can’t reconcile this with lying, whether to yourself or to your fellow scientists. This means that scientists who are committed to the task must refrain from the little lies, and that they must take serious conscious steps to ensure that they don’t lie to themselves. Anything else runs the risk of derailing the whole project.

C.K. Gunsalus on responsible — and prudent — whistleblowing.

In my last post, I considered why, despite good reasons to believe that social psychologist Diederik Stapel’s purported results were too good to be true, the scientific colleagues and students who were suspicious of his work were reluctant to pursue these suspicions. Questioning the integrity of a member of your professional community is hard, and blowing the whistle on misconduct and misbehavior can be downright dangerous.

In her excellent article “How to Blow the Whistle and Still Have a Career Afterwards”, C. K. Gunsalus describes some of the challenges that come from less than warm community attitudes towards members who point out wrongdoing:

[Whistleblowers pay a high price] due to our visceral cultural dislike of tattletales. While in theory we believe the wrong-doing should be reported, our feelings about practice are more ambivalent. …

Perhaps some of this ambivalence is rooted in fear of becoming oneself the target of maliciously motivated false charges filed by a disgruntled student or former colleague. While this concern is probably overblown, it seems not far from the surface in many discussions of scientific integrity. (p. 52)

I suspect that much of this is a matter of empathy — or, more precisely, of which members of our professional community we empathize with. Maybe we have an easier time empathizing with the folks who seem to be trying to get along, rather than those who seem to be looking for trouble. Or maybe we have more empathy for our colleagues, with whom we share experiences and responsibilities and the expectation of long-term durable bonds, than we have for our students.

But perhaps distaste for a tattletale is more closely connected to our distaste for the labor involved in properly investigating allegations of wrongdoing and then, if wrongdoing is established, addressing it. It would certainly be easier to assume the charges are baseless, and sometimes disinclination to investigate takes the form of finding reasons not to believe the person raising the concerns.

Still, if the psychology of scientists cannot permit them to take allegations of misbehavior seriously, there is no plausible way for science to be self-correcting. Gunsalus writes:

[E]very story has at least two sides, and a problem often looks quite different when both are in hand than when only one perspective is in view. The knowledge that many charges are misplaced or result from misunderstandings reinforces ingrained hesitancies against encouraging charges without careful consideration.

On the other hand, serious problems do occur where the right and best thing for all is a thorough examination of the problem. In most instances, this examination cannot occur without someone calling the problem to attention. Early, thorough review of potential problems is in the interest of every research organization, and conduct that leads to it should be encouraged. (p. 53)

(Bold emphasis added.)

Gunsalus’s article (which you should read in full) acknowledges the negative attitudes towards whistleblowers that persist despite the importance of rooting out misconduct, and lays out a sensible strategy for bringing wrongdoing to light without losing your membership in your professional community. She offers “rules for responsible whistleblowing”:

  1. Consider alternative explanations (especially that you may be wrong).
  2. In light of #1, ask questions, do not make charges.
  3. Figure out what documentation supports your concerns and where it is.
  4. Separate your personal and professional concerns.
  5. Assess your goals.
  6. Seek advice and listen to it.

and her “step-by-step procedures for responsible whistleblowing”:

  1. Review your concern with someone you trust.
  2. Listen to what that person tells you.
  3. Get a second opinion and take that seriously, too.
  4. If you decide to initiate formal proceedings, seek strength in numbers.
  5. Find the right place to file charges; study the procedures.
  6. Report your concerns.
  7. Ask questions; keep notes.
  8. Cultivate patience!

The focus is very much on moving beyond hunches to establish clear evidence — and on avoiding self-deception. The potential whistleblower must hope that those to whom he or she is bringing concerns are themselves as committed to looking at the available evidence and avoiding self-deception.

Sometimes this is the situation, as it seems to have been in the Stapel case. In other cases, though, whistleblowers have done everything Gunsalus recommends and still found themselves without the support of their community. This is not just a bad thing for the whistleblowers. It is also a bad thing for the scientific community and the reliability of the shared body of knowledge it tries to build.
_____
C. K. Gunsalus, “How to Blow the Whistle and Still Have a Career Afterwards,” Science and Engineering Ethics, 4(1) 1998, 51-64.