Evaluating scientific claims (or, do we have to take the scientist’s word for it?)

Recently, we’ve noted that a public composed mostly of non-scientists may find itself asked to trust scientists, in large part because members of that public are not usually in a position to make all their own scientific knowledge. This is not a problem unique to non-scientists, though — once scientists reach the end of the tether of their expertise, they end up having to approach the knowledge claims of scientists in other fields with some mixture of trust and skepticism. (It’s reasonable to ask what the right mixture of trust and skepticism would be in particular circumstances, but there’s not a handy formula with which to calculate this.)

Are we in a position where, outside our own narrow area of expertise, we either have to commit to agnosticism or take someone else’s word for things? If we’re not able to directly evaluate the data, does that mean we have no good way to evaluate the credibility of the scientist pointing to the data to make a claim?

This raises an interesting question for science journalism, not so much about what role it should play as what role it could play.

If only a trained scientist could evaluate the credibility of scientific claims (and then perhaps only in the particular scientific field in which one was trained), this might reduce science journalism to a mere matter of publishing press releases, or of reporting on scientists’ social events, sense of style, and the like. Alternatively, if the public looked to science journalists not just to communicate the knowledge claims various scientists are putting forward but also to do some evaluative work on our behalf — sorting out credible claims and credible scientists from the crowd — we might imagine that good science journalism demands extensive scientific training (and that we probably need a separate science reporter for each specialized area of science to be covered).

In an era where media outlets are more likely to cut the science desk than expand it, pinning our hopes on legions of science-Ph.D.-earning reporters on the science beat might be a bad idea.

I don’t think our prospects for evaluating scientific credibility are quite that bad.

Scientific knowledge is built on empirical data, and the details of the data (what sort of data is relevant to the question at hand, what kind of data can we actually collect, what techniques are better or worse for collecting the data, how we distinguish data from noise, etc.) can vary quite a lot in different scientific disciplines, and in different areas of research within those disciplines. However, there are commonalities in the basic patterns of reasoning that scientists in all fields use to compare their theories with their data. Some of these patterns of reasoning may be rather sophisticated, perhaps even non-intuitive. (I’m guessing certain kinds of probabilistic or statistical reasoning might fit this category.) But others will be the patterns of reasoning that get highlighted when “the scientific method” is taught.

In other words, even if I can’t evaluate someone else’s raw data to tell you directly what it means, I can evaluate the way that data is used to support or refute claims. I can recognize logical fallacies and distinguish them from instances of valid reasoning. Moreover, this is the kind of thing that a non-scientist who is good at critical thinking (whether a journalist or a member of the public consuming a news story) could evaluate as well.

One way to judge scientific credibility (or lack thereof) is to scope out the logical structure of the arguments a scientist is putting up for consideration. It is possible to judge whether arguments have the right kind of relationship to the empirical data without wallowing in that data oneself. Credible scientists can lay out:

  • Here’s my hypothesis.
  • Here’s what you’d expect to observe if the hypothesis is true. Here, on the other hand, is what you’d expect to observe if the hypothesis is false.
  • Here’s what we actually observed (and here are the steps we took to control the other variables).
  • Here’s what we can say (and with what degree of certainty) about the hypothesis in the light of these results.
  • Here’s the next study we’d like to do to be even more sure.
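
To make the pattern concrete (and to gesture at the kind of statistical reasoning mentioned earlier), here is a minimal sketch in Python; the coin-flip "hypothesis" and all the numbers are invented purely for illustration:

    # An invented toy claim: "this coin is fair." All numbers are made up.
    from scipy.stats import binomtest

    # If the hypothesis is true, we expect roughly 50 heads in 100 flips;
    # if it is false, we expect a lopsided count.
    heads, flips = 62, 100

    # What we can say about the hypothesis, and with what certainty:
    result = binomtest(heads, flips, p=0.5)
    print(f"p-value = {result.pvalue:.3f}")
    # A small p-value means data this lopsided would be surprising if the
    # coin were fair: grounds to doubt the hypothesis, not proof against it.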

And, not only will the logical connections between the data and what is inferred from them look plausible to the science writer who is hip to the scientific method, but they ought to look plausible to other scientists — even to scientists who might prefer different hypotheses, or different experimental approaches. If what makes something good science is its epistemology — the process by which data are used to generate and/or support knowledge claims — then even scientists who may disagree with those knowledge claims should still be able to recognize the patterns of reasoning involved as properly scientific. This suggests a couple more things we might ask credible scientists to display:

  • Here are the results of which we’re aware (published and unpublished) that might undermine our findings.
  • Here’s how we have taken the criticisms they raise (explicit or implied) seriously in evaluating our own results.

If the patterns of reasoning are properly scientific, why wouldn’t all the scientists agree about the knowledge claims themselves? Perhaps they’re taking different sets of data into account, or they disagree about certain of the assumptions made in framing the question. The important thing to notice here is that scientists can disagree with each other about experimental results and scientific conclusions without thinking that the other guy is a bad scientist. The hope is that, in the fullness of time, more data and dialogue will resolve the disagreements. But good, smart, honest scientists can disagree.

This is not to say that there aren’t folks in lab coats whose thinking is sloppy. Indeed, catching sloppy thinking is the kind of thing you’d hope a good general understanding of science would help someone (like a scientific colleague, or a science journalist) to do. At that point, of course, it’s good to have backup — other scientists who can give you their read on the pattern of reasoning, for example. And, to the extent that a scientist — especially one talking “on the record” about the science (whether to a reporter or to other scientists or to scientifically literate members of the public) — displays sloppy thinking, that would tend to undermine his or her credibility.

There are other kinds of evaluation you can probably make of a scientist’s credibility without being an expert in his or her field. One is to examine a scientific paper and check whether the sources it cites actually make the claims it attributes to them. Determining whether a scientist might be biased by an employer or a funding source may be harder. But there, I suspect many scientists are themselves aware of these concerns and will go the extra mile to establish their credibility: taking seriously the possibility that they are seeing what they want to see, and testing their hypotheses stringently enough to answer possible objections.

It’s harder still to get a good read on the credibility of scientists who present evidence and interpretations with the right sort of logical structure but who have, in fact, fabricated or falsified that evidence. Being wary of results that seem too good to be true is probably a good strategy here. Also, once a scientist is caught in such misconduct, it’s entirely appropriate not to trust another word that comes from his or her mouth.

One of the things fans of science have tended to like is that it’s a route to knowledge that is, at least potentially, open to any of us. It draws on empirical data we can get at through our senses and on our powers of rational thinking. As it happens, the empirical data have gotten pretty complicated, and there’s usually a good bit of technology between the thing in the world we’re trying to observe and the sense organs we’re using to observe it. However, those powers of rational thinking are still at the center of how the scientific knowledge gets built. Those powers need careful cultivation, but to at least a first approximation they may be enough to help us tell the people doing good science from the cranks.

What a scientist knows about science (or, the limits of expertise).

In a world where scientific knowledge might be useful in guiding decisions we make individually and collectively, one reason non-scientists might want to listen to scientists is that scientists are presumed to have the expertise to sort reliable knowledge claims from snake oil. If you’re not in the position to make your own scientific knowledge, your best bet might be to have a scientific knowledge builder tell you what counts as good science.

But, can members of the public depend on any scientist off the street (or out of the lab) to vet all the putative scientific claims for credibility?

Here, we have to grapple with the relationship between Science and particular scientific disciplines — and especially with the question of whether there is enough of a common core between different areas of science that scientists trained in one area can be trusted to recognize the strengths and weaknesses of work in another scientific area. How important is all that specialization research scientists do? Can we trust that, to some extent, all science follows the same rules, thus equipping any scientist to weigh in intelligently about any given piece of it?

It’s hard to give you a general answer to that question. Instead, as a starting point for discussion, let me lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.

As someone trained in a science, I am qualified:

  1. to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
  2. to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
  3. to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
  4. perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
  5. to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
  6. to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 count as support for alternative theory T2).
  7. to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
  8. to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
  9. to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).

This list of my qualifications is an expression of my comfort level more than anything else. It’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends is part of being a responsible scientist.

So, what kind of help can a scientist give the public in evaluating what is presented as scientific knowledge? What kind of trouble can a scientist encounter in trying to sort out the good from the bad science for the public? Does the help scientists offer here always help?

What the chlorite-iodide reaction taught me.

Since 2011 is the International Year of Chemistry, the good folks at CENtral Science are organizing a blog carnival on the theme, “Your favorite chemical reaction”.

My favorite chemical reaction is the chlorite-iodide reaction, and it’s my favorite because of the life lessons it has taught me.

The reaction has overall stoichiometry:
ClO₂⁻ + 4 I⁻ + 4 H⁺ = 2 I₂ + Cl⁻ + 2 H₂O
Written out that way, as a simple set of reactants and products, it doesn’t look that exciting, but when the reaction is run in a continuous flow stirred tank reactor (CSTR), where reactants are flowed in and products are removed, it can exhibit oscillatory behavior. The oscillations in the concentrations of iodine (I₂) and iodide (I⁻) can be tracked experimentally, the former by measuring UV absorbance at 460 nm, the latter by measuring the potential of an ion-specific electrode.

An early study of the kinetics of this reaction determined that it “is catalyzed by the iodine product, and the autocatalysis is inhibited by iodide ion.” (Kern and Kim 1965, 5309) In 1985, Epstein and Kustin proposed the first mechanism for this reaction to account for the oscillatory behavior, one that includes 13 elementary steps and 12 chemical species. Two years later, Citri and Epstein proposed an improved model mechanism with 8 elementary mechanistic steps and 10 chemical species. The Citri-Epstein model proposes a different set of elementary steps to describe the oxidation of iodide by chlorite. In addition, it eliminates the intermediate IClO2, “whose existence has been called into question elsewhere.” (Citri and Epstein 1987, 6035) The resulting model mechanism seemed to produce better agreement between predicted and measured concentrations of iodide and iodine than that given by the earlier model.
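
For readers who want to see how a small set of rate equations can yield oscillations at all, here is a minimal sketch in Python. It integrates the classic two-variable Brusselator model, a generic abstract oscillator rather than the Citri-Epstein mechanism itself, but it displays the same qualitative behavior:

    # A toy chemical oscillator (the Brusselator), integrated with scipy.
    # Oscillations require B > 1 + A**2; A and B stand in for reactant
    # concentrations held steady, as the inflow of a CSTR would hold them.
    import numpy as np
    from scipy.integrate import solve_ivp

    A, B = 1.0, 3.0

    def rates(t, z):
        x, y = z
        dxdt = A - (B + 1.0) * x + x**2 * y
        dydt = B * x - x**2 * y
        return [dxdt, dydt]

    sol = solve_ivp(rates, (0.0, 50.0), [1.0, 1.0], dense_output=True)
    t = np.linspace(0.0, 50.0, 500)
    x, y = sol.sol(t)
    # x rises and falls periodically instead of settling to a steady state,
    # the analogue of the iodide/iodine oscillations tracked in the lab.
    print(f"x ranges from {x.min():.2f} to {x.max():.2f}")

The published chlorite-iodide mechanisms work the same way, just with more species and more elementary steps feeding into the rate equations.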

The chlorite-iodide reaction also happens to have been the reaction at the center of most of my research for my Ph.D. in chemistry.

Here are some of the lessons I learned working with the chlorite-iodide reaction:

  1. Experimental tractability matters, at least when you’re doing experiments. The general thrust of my research was to work out clever ways to perform empirical tests of proposed mechanisms for oscillating chemical reactions, but the chlorite-iodide reaction was not the first reaction I worked with. I started out trying to make some clever measurements on another reaction, the minimal bromate oscillator (MBO). However, after maybe six months of fighting to set up the conditions where the MBO would give me oscillations, I had to make my peace with the idea that its “small” region in phase-space with oscillatory behavior was really, really small. Luckily, in my reading of the relevant literature on the experimental and theoretical approaches we were taking, I had come across a similar inorganic chemical oscillator with an “ample” oscillatory region, one which promised to make my time in the lab exponentially less frustrating. That’s right, the chlorite-iodide reaction was my rebound system, but we stayed together and made it work.
  2. When your original research project gets stuck, it’s good to have a detailed plan for how to move forward when you talk to the boss. My advisor was really keen for that minimal bromate oscillator that was making my life in the lab a nightmare. So, when I met with him to tell him I wanted to break up with the MBO and take up with the chlorite-iodide reaction, I had to make the case for the new system. I came armed with the articles that described its substantial oscillatory region, and the articles that described the MBO’s tiny one. I prepared some calculations describing how much more precise our pump-rates would need to be to find MBO oscillations, and catalogues that listed the prices of the new equipment we would need. I brought the articles proposing mechanisms for the chlorite-iodide reaction so I could display the virtues of their elementary mechanistic steps from the point of view of the kind of experimental probing we had in mind. Because I did my homework and was able to make a persuasive case, the boss was happy to let me start working with the chlorite-iodide system right away, and to kiss the minimal bromate oscillator goodbye forever.
  3. Experimental tractability is relative, not absolute (and Materials and Methods often leave stuff out). The chlorite-iodide reaction was certainly easier to work with — within a week, I found oscillations where the literature said I would — but it was not completely smooth sailing. There were pumps that didn’t perform as they should, which meant I was taking them apart and swapping out components. There were days when I couldn’t get any reliable measurements because the pH meter I used with my iodide-specific electrode had been left on for too many hours in a row. And, there were little details I discovered in setting up experimental runs day in and day out that were not fully discussed in the “materials and methods” section of the published papers describing the chlorite-iodide reaction. Reproducibility is hard.
  4. Reactions happen in three-dimensional space, not just in reaction space. One of the experimental challenges of the chlorite-iodide reaction is that, to find the dynamical behavior you’re looking for, you have to stir the reactants in the tank reactor at the right speed. Stirring much faster or much slower will change the dynamics of the reaction, as will using a reactor with significantly different internal geometry. (“Dimples” protruding into the cylindrical space inside the reactor are supposed to help you mix the reactants more effectively, rather than giving them the opportunity to hang out unmixed by the walls.) Appropriate stirring speed was not one of the parameters spelled out by the papers whose descriptions of the reaction I was using to get started, nor was reactor geometry. I had to do experiments to work out the stirring speed that (with the geometry of the reaction vessel we had on hand) produced the same behavior as these other papers were reporting. Once I found that stir-speed, I kept that constant for my experimental runs. Also, I made detailed measurements of the reactor we were using, which turned out to be a really good thing when that reactor broke. I was able to take those measurements to the glass-blower’s shop and get replacements (plural) made.
  5. Time well spent in setting things up is frequently rewarded with good data. It was absolutely worth it to spend a couple hours at the beginning of each run calibrating pump flow-rates and checking out the iodide-selective electrode performance with standard solutions, since this let me apply the experimental conditions I wanted to and make accurate measurements. Did I mention that reproducibility is hard?
  6. Qualitative measurements require patience, too. Among other things, I was interested in mapping the edges of regions in phase-space where the chlorite-iodide reaction displayed different kinds of behavior. On one edge, there was a bifurcation where you would find steady state behavior (i.e., stable concentrations of reaction species) that, coming up on the bifurcation point, became tiny-amplitude oscillations that grew. On the other edge, the oscillations had attained their maximum amplitude, but their period (that is, the lag between oscillatory peaks) grew longer and longer until there weren’t any more peaks and the reaction settled into another steady state. The thing was, it was hard to know when you were set up with conditions where the period of oscillation was just really, really long (sometimes around 20 minutes between peaks, if memory serves) or when you had found the steady state. You had to be patient. While I was exploring that edge of the reaction in phase-space, I started thinking maybe that was a good metaphor for certain aspects of graduate school.
  7. You probably can’t measure everything you’d want to measure, but sometimes measuring one more thing can help a lot. As I mentioned above, the Citri-Epstein mechanism for the chlorite-iodide reaction posited ten chemical species in the various steps of the reaction. In a perfect world, you’d want to be able to measure each of those species simultaneously over time as the reaction proceeded. But, as one learns pretty quickly in grad school, this is not a perfect world. When I started with this reaction, published papers were reporting simultaneous dynamical measurements of only two of those species (iodide and iodine). Chloride is one of the hypothesized intermediates, and there are chloride-specific electrodes on the market. However, the membrane in a chloride-specific electrode also reacts with … iodide. Other intermediate species might be measured by various chemical assays if the progress of the reaction could be halted in the samples being assayed. By the end of my graduate research, I had figured out a way to use a flow-through cuvette and a seat-of-the-pants spectral deconvolution technique to measure the time-series of one additional species in the reaction, the chlorite ion (ClO₂⁻). This was enough to do some evaluation of the proposed mechanism that was not possible without it.
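
To give a flavor of what that last lesson involved, here is a minimal sketch in Python of least-squares spectral deconvolution. Everything in it (the Gaussian absorption bands, the concentrations, the noise) is invented for illustration; a real analysis would rest on calibrated spectra of the actual species:

    # Invented two-component "spectrum" and a least-squares unmixing.
    import numpy as np

    wavelengths = np.linspace(350, 550, 50)  # nm, hypothetical grid

    def band(center, width):
        # A Gaussian stand-in for a species' absorptivity spectrum.
        return np.exp(-((wavelengths - center) / width) ** 2)

    # Columns of E: absorptivity spectrum of each absorbing species.
    E = np.column_stack([band(460, 40), band(390, 30)])

    c_true = np.array([0.8, 0.3])  # invented concentrations
    noise = 0.01 * np.random.default_rng(0).normal(size=wavelengths.size)
    A_measured = E @ c_true + noise

    # Beer's law in matrix form: A = E c. The bands overlap, so no single
    # wavelength isolates one species; fit the whole spectrum at once.
    c_fit, *_ = np.linalg.lstsq(E, A_measured, rcond=None)
    print("recovered concentrations:", np.round(c_fit, 3))

Because the bands overlap, no single wavelength gives a clean reading of either species; fitting the whole spectrum at once is what lets an extra species be pulled out of the data.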

Later on, when I became a philosopher of science, this work gave me some insights into the circumstances in which chemists are happy to be instrumentalists (e.g., recognizing that the fact that a proposed reaction mechanism was consistent with the observed kinetics of the reaction was no guarantee that this was the actual mechanism by which the reaction proceeded) and the circumstances in which they lean towards being realists (by finding ways to distinguish better proposed mechanisms from worse ones). But back when I was actually getting glassware dirty running the chlorite-iodide reaction, this reaction helped me learn how to be a scientist.

_____

Works cited:
Citri, Ofra, and Irving R. Epstein (1987) “Dynamical Behavior in the Chlorite-Iodide Reaction: A Simplified Mechanism”, Journal of Physical Chemistry 91: 6034-6040.

Epstein, Irving R., and Kenneth Kustin (1985) “A Mechanism for Dynamical Behavior in the Oscillatory Chlorite-Iodide Reaction”, Journal of Physical Chemistry 89: 2275-2282.

Kern, David M., and Chang-Hwan Kim (1965) “Iodine Catalysis in the Chlorite-Iodide Reaction”, Journal of the American Chemical Society 87(23): 5309-5313.

Scientific credibility: is it who you are, or how you do it?

Part of the appeal of science is that it’s a methodical quest for a reliable picture of how our world works. Creativity and insight are crucial at various junctures in this quest, but careful work and clear reasoning do much of the heavy lifting. Among other things, this means that the grade-schooler’s ambition to be a scientist someday is significantly more attainable than the ambition to be a Grammy-winning recording artist, a pro athlete, an astronaut, or the President of the United States.

Scientific methodology, rather than being a closely guarded trade secret, is a freely available resource.

Because of this, there is a sense that it doesn’t matter too much who is using that scientific methodology. Rather, what matters is what scientists discover by way of the methodology.

What about Dalibor Sames? The Bengü Sezen fraud and the responsibilities of the PI in the training of new scientists.

Unless you are a chemist or a habitual follower of scientific misconduct stories, it’s possible that you missed the saga of Bengü Sezen.

From 2000 to 2005, Sezen was a graduate student in chemistry at Columbia University, working in the laboratory of then-Assistant Professor Dalibor Sames. She appeared to be a talented scientist in training, and during her graduate studies was lead author on three papers published in the Journal of the American Chemical Society. Columbia University conferred upon her a Ph.D. in chemistry (with distinction).

But, as it turns out, her published results were not reproducible, an issue raised by chemists at Columbia and elsewhere as early as 2002. Further, the results were irreproducible for very good reason: as reported by Chemical & Engineering News, investigations by Columbia University and by the U.S. Department of Health & Human Services (which is home to the Office of Research Integrity) revealed

a massive and sustained effort by Sezen over the course of more than a decade to dope experiments, manipulate and falsify NMR and elemental analysis research data, and create fictitious people and organizations to vouch for the reproducibility of her results.

In the wake of the investigations, Sames has retracted the papers coauthored with Sezen (Sezen refused to retract them on the grounds that she stood by the work), and Columbia has revoked the Ph.D. it granted Sezen.

The evidence from the investigations supports the hypothesis that Bengü Sezen was a liar masquerading as a chemist, that she claimed to have done experiments that she hadn’t, to have obtained NMR spectra that she created (in part) with correction fluid, to have built molecules that she didn’t build. She committed fraud that introduced not just mistakes but lies into the scientific literature.

But she didn’t — she couldn’t — do this alone. She didn’t commit her fraud as a principal investigator (PI). Rather she did it as a scientific trainee, a graduate student working under the supervision of Dalibor Sames (who is currently an Associate Professor at Columbia). It’s worth examining what responsibility Sames bears for what happened here.

Every diet has a body-count: in the garden with the vegetarian killing snails.

When the demands of my job and my family life allow, I try to take advantage of the fact that I live in California by maintaining a vegetable garden. One of the less pleasant aspects of vegetable gardening is that, every winter and spring, it requires me to embark on a program of snail and slug eradication — which is to say, I hunt for snails and slugs in my garden and I kill them.

As it happens, I’m a vegetarian and an ethicist. I’m not sure I’d describe myself as an “ethical vegetarian” — that suggests that one’s primary reason for eating a vegetarian diet is a concern with animal suffering, and while I do care about animal suffering, my diet has as much to do with broader environmental concerns (and not wanting to use more resources than needed to be fed, especially when others are going hungry) and aesthetics (I never liked the taste of meat). Still, given my diet and my profession, one might well ask, how ethical is it for me to be killing the slugs and snails in my garden?


Environmental impacts of what we eat: the difficulty of apples-to-apples comparisons.

When we think about food, how often do we think about what it’s going to do for us (in terms of nutrition, taste, satiety), and how often do we focus on what was required to get it to our tables?

Back when I was a wee chemistry student learning how to solve problems in thermodynamics, my teachers described the importance for any given problem of identifying the system and the surroundings. The system was the piece of the world that was the focus of the problem to be solved — on the page or the chalkboard (I’m old), it was everything inside the dotted line you drew to enclose it. The surroundings were outside that dotted line — everything else.

Those dotted lines we drew were very effective in separating the components that would get our attention from everything else — exactly what we needed to do in order to get our homework problems done on a deadline. But it strikes me that sometimes we can forget that what we’ve relegated to surroundings still exists out there in the world, and indeed might be really important for other questions that matter, too.

In recent years, there seems to be growing public awareness of food as something that doesn’t magically pop into existence at the supermarket or the restaurant kitchen. People now seem to recall that there are agricultural processes that produce food — and to have some understanding that these processes have impacts on other pieces of the world. The environmental impacts, especially, are on our minds. However, figuring out just what the impacts are is challenging, and this makes it hard for us to evaluate our choices with comparisons that are really apples-to-apples.

Doing fun chemistry.

You may have noticed by now that the Scientific American Blog Network is having something of a Chemistry Day.

Reading about chemistry is fun, but I reckon it’s even more fun to do some chemistry. So, if you find yourself with a few moments and the need to fill them with chemical fun, here are a few ideas:

Make your own acid-base indicator:

With red cabbage and hot water, you can make a solution that will let you tell acids, bases, and neutral-pH substances apart.

Spend the afternoon classifying the substances in your refrigerator or pantry! Audition alternatives to vinegar and baking soda for your papier mache volcano!

Dye some eggs:

Gather up some plant matter and see what colors you can develop on eggshells.

One interesting thing you might observe is that empty eggshells and eggshells with eggs in them interact differently with the plant pigments. Ponder the chemistry behind this difference … perhaps with the aid of some cabbage-water indicator.

Play around with paper chromatography:

Grab some markers (black and brown markers work especially well), lay down some filter paper (or a paper towel or a piece of a coffee filter), and just add water to observe the pretty effects created when some components of ink preferentially interact with water while others preferentially interact with the paper.

If you like, play around with other solvents (like alcohol, or oil) and see what happens.

Make some mayonnaise:

Even just making canonical mayonnaise is a matter of getting oil and water to play well together, making use of an emulsifier.

But things get interesting when you change up the components, substituting non-traditional sources of oil or of emulsifier. What happens, for example, when an avocado gets in on the action?

Try your hand at spherifying a potable:

Molecular gastronomy isn’t just for TV chefs anymore. If you have a decent kitchen scale and food-grade chemicals (which you can find from a number of online sources), you can turn potables into edibles by way of reactions that create a “shell” of a membrane.

Sometimes you can control the mixture well enough to create little spherical coffee caviar or berry-juice beads. Sometimes you end up with V-8 vermicelli. Either way, it’s chemistry that you can eat.

Building knowledge (and stuff) ethically: the principles of “Green Chemistry”.

Like other scientific disciplines, chemistry is in the business of building knowledge. In addition to knowledge, chemistry sometimes also builds stuff — molecules which didn’t exist until people figured out ways to make them.

Scientists (among others) tend to assume that knowledge is a good thing. There are instances where you might question this assumption — maybe when the knowledge is being used for some evil purpose, or when the knowledge has been built on your dime without giving you much practical benefit, or when the knowledge could give you practical benefit except that it’s priced out of your reach.

Even setting these worries aside, we should recognize that there are real costs involved in building knowledge. These costs mean that it’s not a sure thing that more knowledge is always better. Rather, we may want to evaluate whether building a particular piece of knowledge (or a particular new compound) is worth the cost.

In chemistry, these costs aren’t just a matter of the chemist’s time, or of the costs of the lab facilities and equipment. Some of these costs are directly connected to the chemical reagents being brought together in reactions that transform the starting materials into something new. These chemical reagents (in solid, liquid, or gas phase, pure or in mixtures or in solutions) all come from somewhere. The “somewhere” could be a source in nature, or a reaction conducted in the laboratory, or a reaction process conducted on a large scale in a factory.

Getting a reasonably pure chemical substance in the jar means sorting out the other stuff hanging around with that substance — impurities, leftover reactants from the reaction that makes the desired substance, “side-products” of the reaction that makes the desired substance. (A side-product is a lot like a side-effect, in that it’s produced by the reaction but it’s not the thing you’re actually trying to produce.) When you’re isolating the substance you’re after, that other stuff has to go somewhere. If there’s not a particular way to collect the other stuff and put it to some other use, that other stuff becomes chemical waste.

There’s a sense in which all waste is chemical waste, since everything in our world is made up of chemicals. The thing to watch with waste products from chemical reactions is whether these waste products will engage in further chemical reactions wherever you end up storing them. Or, if you’re not careful about how you store them, they might get into our air or water, or into plants and animals, where they might have undesired or unforeseen effects.

In recent years, chemists have been working harder to recognize that the chemicals they work with come from someplace and that the ones they generate in the course of their experiments need to end up someplace, and to think about more sustainable ways to build chemical compounds and chemical knowledge. A good place to see this thinking is in The Twelve Principles of Green Chemistry (here as set out by Anastas, P. T.; Warner, J. C. Green Chemistry: Theory and Practice, Oxford University Press: New York, 1998, p.30.):

  1. Prevention: It is better to prevent waste than to treat or clean up waste after it has been created.
  2. Atom Economy: Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product.
  3. Less Hazardous Chemical Syntheses: Wherever practicable, synthetic methods should be designed to use and generate substances that possess little or no toxicity to human health and the environment.
  4. Designing Safer Chemicals: Chemical products should be designed to effect their desired function while minimizing their toxicity.
  5. Safer Solvents and Auxiliaries: The use of auxiliary substances (e.g., solvents, separation agents, etc.) should be made unnecessary wherever possible and innocuous when used.
  6. Design for Energy Efficiency: Energy requirements of chemical processes should be recognized for their environmental and economic impacts and should be minimized. If possible, synthetic methods should be conducted at ambient temperature and pressure.
  7. Use of Renewable Feedstocks: A raw material or feedstock should be renewable rather than depleting whenever technically and economically practicable.
  8. Reduce Derivatives: Unnecessary derivatization (use of blocking groups, protection/deprotection, temporary modification of physical/chemical processes) should be minimized or avoided if possible, because such steps require additional reagents and can generate waste.
  9. Catalysis: Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.
  10. Design for Degradation: Chemical products should be designed so that at the end of their function they break down into innocuous degradation products and do not persist in the environment.
  11. Real-Time Analysis for Pollution Prevention: Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.
  12. Inherently Safer Chemistry for Accident Prevention: Substances and the form of a substance used in a chemical process should be chosen to minimize the potential for chemical accidents, including releases, explosions, and fires.
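
Principle 2, atom economy, has a simple arithmetic core: what fraction of the total mass of the reactants ends up in the desired product? Here is a minimal sketch in Python, using the textbook comparison of two routes to ethylene oxide (the molecular weights are approximate, and the example is mine, not Anastas and Warner’s):

    # Percent atom economy: the share of total reactant mass that ends up
    # in the desired product (higher means less mass fated to be waste).
    def atom_economy(product_mw, reactant_mws):
        return 100.0 * product_mw / sum(reactant_mws)

    # Older chlorohydrin route to ethylene oxide (overall):
    #   C2H4 + Cl2 + Ca(OH)2 -> C2H4O + CaCl2 + H2O
    print(round(atom_economy(44.05, [28.05, 70.90, 74.09]), 1))  # ~25.5

    # Direct catalytic oxidation: C2H4 + (1/2) O2 -> C2H4O
    print(round(atom_economy(44.05, [28.05, 16.00]), 1))  # 100.0

By this measure the direct oxidation is strikingly greener: essentially every atom fed into the reaction ends up in the product.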

At first blush, these might look like principles developed by a group of chemists who just returned from an Earth Day celebration, what with their focus on avoiding hazardous waste and toxicity, favoring renewable resources over non-renewable ones, and striving for energy efficiency. Certainly, thoroughgoing embrace of “Green Chemistry” principles might result in less environmental impact due to extraction of starting materials, storage (or escape) of wastes, and so forth.

But these principles can also do a lot to serve the interests of chemists themselves.

For example, a reaction that can be conducted at ambient temperature and pressure requires less fancy equipment (i.e., equipment to maintain temperature and/or pressure at non-ambient conditions). It’s not just more energy efficient, it’s less of a hassle for the experimenter. Safer solvents are better for the environment and the public at large, but it’s usually the chemists working with the solvents who are at immediate risk when solvents are extremely flammable or corrosive or carcinogenic. And generating less hazardous waste means paying for the disposal of less hazardous waste — which means that there’s also an economic benefit to being more environmentally friendly.

What I find really striking about these principles of “Green Chemistry” is the optimism they convey that chemists are smart enough to figure out new and better ways to produce the compounds they want to produce. The challenge is to rethink the old strategies for making the compound of interest, strategies that might have relied on large amounts of non-renewable starting materials and generated lots of waste products at each intermediate step. Chemistry is a science that focuses on transformations, but part of its beauty is that there are multiple paths that might get us from starting materials to a particular product. “Green Chemistry” challenges its practitioners to use the existing knowledge base to find out what is possible, and to build new knowledge about these possibilities as chemists build new molecules.

And, to the extent that chemistry is in the business of finding new knowledge (rather than relying on established chemical knowledge as a master cook book), these twelve principles seem almost obvious. Given the choice, would you ever want to make a new compound for an application that had the desired function but maximized toxicity? Would you choose a synthetic strategy that generated more waste rather than less (and whose waste was less likely to break down into innocuous compounds rather than more)? Would you opt to perform the separation with a solvent that was more likely to explode if a less explosive solvent would do the trick? Probably not. Of course, you’d be on the lookout for a better way to solve the chemical problem — where “better” takes into account things like cost, experimental tractability, risks to the experimenter, and risks to the broader public (including our shared environment).

This is not to say that adhering to the principles of “Green Chemistry” would be sufficient to be an ethical chemist. Conceivably, one could follow all these principles and still fabricate, falsify, or plagiarize, for example. But in explicitly recognizing some of the costs associated with building chemical knowledge, and pushing chemists to minimize those costs, the principles of “Green Chemistry” do seem to honor chemists’ obligations to the welfare of the people with whom they are sharing a world.