Ada Lovelace and the Luddites.

Today is Ada Lovelace Day.

If you are not a regular reader of my other blog, you may not know that I am a tremendous Luddite. I prefer hand-drawn histograms and flowcharts to anything I can make with a graphics program. I prefer LPs to CDs. (What’s an LP? Ask your grandparents.) I find it soothing to use log tables (and I know how to interpolate). I’d rather use a spiral-bound book of street maps than Google to find my way around.

Obviously, my status as a Luddite should not be taken to mean I am against all technological advances across the board (as here I am, typing on a computer, preparing a post that will be published using blogging software on the internet). Rather, I am suspicious of technological advances that seem to arise without much thought about how they influence the experience of the humans interacting with them, and of “improvements” that would require me to sink a bunch of time into learning new commands or operating instructions while producing at best a marginal improvement over the outcome I get from the technology I already know.

That is to say, my own inclination is to view technologies not as ends in themselves but as tools which, depending on how they are deployed, can enhance our lives or can make them harder.

The original Luddites were part of a workers’ movement in England in the early 19th century. The technologies these Luddites were against included the mechanical knitting machines and looms that shifted textile production from the hands of skilled knitters and weavers to a relatively unskilled labor force tending to the machines. In the current economic climate, it’s not too hard to see what the Luddites were worried about: even if the Industrial Revolution technologies didn’t result in an overall decrease in jobs (since you’d need workers to tend the machines), there would be no reason to assume that the owners of textile factories would be interested in retraining the skilled knitters and weavers already in existence to be the machine-tenders. And net stability (even increase) in the number of jobs can be cold comfort when your job goes away.


Drawing the line between science and pseudo-science.

Recently, we’ve been discussing strategies for distinguishing sound science from attractively packaged snake-oil. It’s worth noting that a fair number of scientists (and of non-scientists who are reasonably science-literate) are of the view that this is not a hard call to make — that astrology, alternative therapies, ESP, and the other usual suspects fall on the wrong side of some bright line that divides what is scientific from what is not — the clear line of demarcation that (scientists seem to assume) Karl Popper pointed out years ago, and that keeps the borders of science secure.


While I think a fair amount of non-science is so far from the presumptive border that we are well within our rights to just point at it and laugh, as a philosopher of science I need to go on the record as saying that right at the boundary, things are not so sharp. But before we get into how real science (and real non-science) might depart from Sir Karl’s image of things, I think it’s important to look more closely at the distinction he’s trying to draw.


A central part of Karl Popper’s project is figuring out how to draw the line between science and pseudo-science. He could have pitched this as figuring out how to draw the line between science and non-science (which seems like less a term of abuse than “pseudo-science”). Why set the project up this way? Partly, I think, he wanted to compare science to non-science-that-looks-a-lot-like-science (in other words, pseudo-science) so that he could work out precisely what is missing from the latter. He doesn’t think we should dismiss pseudo-science as utterly useless, uninteresting, or false. It’s just not science.

Of course, Popper wouldn’t be going to the trouble of trying to spell out what separates science from non-science if he didn’t think there was something special on the science side of the line. He seems committed to the idea that scientific methodology is well-suited — perhaps uniquely so — for building reliable knowledge and for avoiding false beliefs. Indeed, under the assumption that science has this kind of power, one of the problems with pseudo-science is that it gets an unfair credibility boost by so cleverly mimicking the surface appearance of science.


The big difference Popper identifies between science and pseudo-science is a difference in attitude. While a pseudo-science is set up to look for evidence that supports its claims, Popper says, a science is set up to challenge its claims and look for evidence that might prove them false. In other words, pseudo-science seeks confirmations and science seeks falsifications.


There is a corresponding difference that Popper sees in the form of the claims made by sciences and pseudo-sciences: Scientific claims are falsifiable — that is, they are claims where you could set out what observable outcomes would be impossible if the claim were true — while pseudo-scientific claims fit with any imaginable set of observable outcomes. What this means is that you could do a test that shows a scientific claim to be false, but no conceivable test could show a pseudo-scientific claim to be false. Sciences are testable, pseudo-sciences are not.


So, Popper has this picture of the scientific attitude that involves taking risks: making bold claims, then gathering all the evidence you can think of that might knock them down. If they stand up to your attempts to falsify them, the claims are still in play. But, you keep that hard-headed attitude and keep your eyes open for further evidence that could falsify the claims. If you decide not to watch for such evidence — deciding, in effect, that because the claim hasn’t been falsified in however many attempts you’ve made to falsify it, it must be true — you’ve crossed the line to pseudo-science.


This sets up the central asymmetry in Popper’s picture of what we can know. We can find evidence to establish with certainty that a claim is false. However, we can never (owing to the problem of induction) find evidence to establish with certainty that a claim is true. So the scientist realizes that her best hypotheses and theories are always tentative — some piece of future evidence could conceivably show them false — while the pseudo-scientist is as sure as can be that her theories have been proven true. (Of course, they haven’t been — problem of induction again.)


So, why does this difference between science and pseudo-science matter? As Popper notes, the difference is not a matter of scientific theories always being true and pseudo-scientific theories always being false. The important difference seems to be in which approach gives better logical justification for knowledge claims. A pseudo-science may make you feel like you’ve got a good picture of how the world works, but you could well be wrong about it. If a scientific picture of the world is wrong, that hard-headed scientific attitude means the chances are good that we’ll find out we’re wrong — one of those tests of our hypotheses will turn up the data that falsifies them — and switch to a different picture.

A few details are important to watch here. The first is the distinction between a claim that is falsifiable and a claim that has been falsified. Popper says that scientific claims are falsifiable and pseudo-scientific claims are not. A claim that has been falsified (demonstrated to be false) is obviously a falsifiable claim (because, by golly, it’s been falsified). Once a claim has been falsified, Popper says the right thing to do is let it go and move on to a different falsifiable claim. However, it’s not that the claim shouldn’t have been a part of science in the first place.
So, the claim that the planets travel in circular orbits wasn’t an inherently unscientific claim. Indeed, because it could be falsified by observations, it is just the kind of claim scientists should work with. But, once the observations show that this claim is false, scientists retire it and replace it with a different falsifiable claim.


This detail is important! Popper isn’t saying that science never makes false claims! What he’s saying is that the scientific attitude is aimed at locating and removing the false claims — something that doesn’t happen in pseudo-sciences.


Another note on “falsifiability” — the fact that many attempts to falsify a claim have failed does not mean that the claim is unfalsifiable. Nor, for that matter, would the fact that the claim is true make it unfalsifiable. A claim is falsifiable if there are certain observations we could make that would tell us the claim is false — certain observable ways the world could not be if the claim were true. So, the claim that Mars moves in an elliptical orbit around the sun could be falsified by observations of Mars moving in an orbit that deviated at all from an elliptical shape.
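
To make “falsifiable” a bit more concrete, here is a toy sketch (my own illustration, not anything from Popper) of how that orbital claim could be confronted with data: for an ellipse with the sun at one focus and periapsis at zero angle, 1/r is a straight-line function of cos θ, so systematic deviations from that line in the observations would count as a falsification. The orbital parameters, noise level, and synthetic “observations” below are all made-up placeholders.

    import numpy as np

    # Toy falsifiability check (illustration only): is this orbit an ellipse
    # with the sun at one focus? For such an ellipse with periapsis at theta = 0,
    # 1/r = A + B*cos(theta), so the claim predicts a straight-line relationship.

    rng = np.random.default_rng(0)
    theta = np.linspace(0.0, 2.0 * np.pi, 60)            # orbital angles (radians)
    p, e = 1.5, 0.09                                      # placeholder orbit parameters
    r_true = p / (1.0 + e * np.cos(theta))                # ideal elliptical orbit
    r_obs = r_true + rng.normal(0.0, 0.001, theta.size)   # pretend noisy observations

    # Least-squares fit of 1/r against cos(theta)
    X = np.column_stack([np.ones_like(theta), np.cos(theta)])
    coef, *_ = np.linalg.lstsq(X, 1.0 / r_obs, rcond=None)
    residuals = 1.0 / r_obs - X @ coef

    # If the orbit were not an ellipse, these residuals would show systematic
    # structure much larger than the measurement noise; that observation would
    # falsify the claim. Surviving the test does not prove the claim true.
    print("fitted A, B:", coef)
    print("RMS residual:", float(np.sqrt(np.mean(residuals ** 2))))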


Another important detail is just what scientists mean by “theory”. A theory is simply a scientific account (or description, or story) about a system or a piece of the world. Typically, a theory will contain a number of hypotheses about what kind of entities are part of the system and how those entities behave. (The hypothesized behaviors are sometimes described as the “laws” governing the system.) The important thing to note is that theories can be rather speculative or extremely well tested — either way, they’re still theories.


Some people talk as though there’s a certain threshold a theory crosses to become a fact, or truth, or something more-certain-than-a-theory. This is a misleading way of talking. Unless Popper is completely wrong that the scientist’s acceptance of a theory is always tentative (and this is one piece of Popper’s account that most scientists whole-heartedly endorse), even the theory with the best evidential support is still a theory. Indeed, even if a theory happened to be completely true, it would still be a theory! (Why? You could never be absolutely certain that some future observation might not falsify the theory. In other words, on the basis of the evidence, you can’t be 100% sure that the theory is true.)


So, for example, dismissing Darwin’s theory as “just a theory” as if that were a strike against it is misunderstanding what science is up to. Of course there is some uncertainty; there is with all scientific theories. Of course there are certain claims the theory makes that might turn out to be false; but the fact that there is evidence we could conceivably get to demonstrate these claims are false is a scientific virtue, not a sign that the theory is unscientific.


By contrast, “Creation Science” and “Intelligent Design Theory” don’t make falsifiable claims (at least, this is what many people think; Larry Laudan* disputes this but points out different reasons these theories don’t count as scientific). There’s no conceivable evidence we could locate that could demonstrate the claims of these theories are false. Thus, these theories just aren’t scientific. Certainly, their proponents point to all sorts of evidence that fits well with these theories, but they never make any serious efforts to look for evidence that could prove the theories false. Their acceptance of these theories isn’t a matter of having proof that the theories are true, or even a matter of these theories having successfully withstood many serious attempts to falsify them. Rather, it’s a matter of faith.


None of this means Darwin’s theory is necessarily true and “Creation Science” is necessarily false. But it does mean (in the Popperian view that most scientists endorse) that Darwin’s theory is scientific and “Creation Science” is not.


______

*See Laudan, “Science at the Bar — Causes for Concern”, in Robert T. Pennock and Michael Ruse, But Is It Science?

* * * * *

If you enjoyed this post, consider contributing a few bucks to a project in my Giving Page in the Science Bloggers for Students 2011 challenge. Supporting science education in public school classrooms will help young people get a better handle on what kind of attitude and methodology makes science science — and on all the cool things science can show us about our world.

Introducing DonorsChoose Science Bloggers for Students 2011 (with a wag of the finger for Stephen Colbert).

I’m putting Stephen Colbert on notice.

Now that that’s out of the way …

In the science-y sectors of the blogosphere, folks frequently bemoan the sorry state of the public’s scientific literacy and engagement. People fret about whether our children are learning what they should about science, math, and critical reasoning. Netizens speculate on the destination of the handbasket in which we seem to be riding.

In light of the big problems that seem insurmountable, we should welcome the opportunity to do something small that can have an immediate impact.

This year, from October 2 through October 22, a number of science bloggers, whether networked, loosely affiliated, or proudly independent, will be teaming up with DonorsChoose in Science Bloggers for Students, a philanthropic throwdown for public schools.

DonorsChoose is a site where public school teachers from around the U.S. submit requests for specific needs in their classrooms — from books to science kits, overhead projectors to notebook paper, computer software to field trips — that they can’t meet with the funds they get from their schools (or from donations from their students’ families). Donors then choose which projects they’d like to fund and kick in the money, whether it’s a little or a lot, to help a proposal become a reality.

Over the last several years, bloggers have rallied their readers to contribute what they can to help fund classroom proposals through DonorsChoose, especially proposals for projects around math and science, raising hundreds of thousands of dollars, funding hundreds of classroom projects, and impacting thousands of students.

Which is great. But there are a whole lot of classrooms out there that still need help.

As economic experts scan the horizon for hopeful signs and note the harbingers of economic recovery, we should not forget that school budgets are still hurting (and are worse, in many cases, than they were last school year, since one-time lumps of stimulus money are gone now). Indeed, public school teachers have been scraping for resources since long before Wall Street’s financial crisis started. Theirs is a less dramatic crisis than a bank failure, but it’s here and it’s real and we can’t afford to wait around for lawmakers on the federal or state level to fix it.

The kids in these classrooms haven’t been making foolish investments. They’ve just been coming to school, expecting to be taught what they need to learn, hoping that learning will be fun. They’re our future scientists, doctors, teachers, decision-makers, care-providers, and neighbors. To create the scientifically literate world we want to live in, let’s help give these kids the education they deserve.

One classroom project at a time, we can make things better for these kids. When we join forces with each other, even small contributions can make a big difference.

The challenge this year runs October 2 through October 22. We’re overlapping with Earth Science Week (October 9-15, 2011) and National Chemistry Week (October 16-22, 2011), a nice chance for earth science and chemistry fans to add a little philanthropy to their celebrations. There are a bunch of Scientific American bloggers mounting challenges this year (check out some of their challenge pages on our leaderboard), as well as bloggers from other networks (which you can see represented on the challenge’s motherboard). And, since today is the official kick-off, there is plenty of time for other bloggers and their readers to enter the fray!




How It Works:
Follow the links above to your chosen blogger’s challenge on the DonorsChoose website.

Pick a project from the slate the blogger has selected. Or more than one project, if you just can’t choose. (Or, if you really can’t choose, just go with the “Give to the most urgent project” option at the top of the page.)

Donate.

(If you’re a loyal reader of multiple participating blogs and you don’t want to play favorites, you can, of course, donate to multiple challenges! But you’re also allowed to play favorites.)

Sit back and watch the challenges inch towards their goals, and check the leaderboards to see how many students will be impacted by your generosity.

Even if you can’t make a donation, you can still help!

Spread the word about these challenges using web 2.0 social media modalities. Link your favorite blogger’s challenge page on your MySpace page, or put up a link on Facebook, or FriendFeed, or LiveJournal (or Friendster, or Xanga, or …). Tweet about it on Twitter (with the #scibloggers4students hashtag). Share it on Google +. Sharing your enthusiasm for this cause may inspire some of your contacts who do have a little money to get involved and give.

Here’s the permalink to my giving page.

Thanks in advance for your generosity.

Evaluating scientific claims (or, do we have to take the scientist’s word for it?)

Recently, we’ve noted that a public composed mostly of non-scientists may find itself asked to trust scientists, in large part because members of that public are not usually in a position to make all their own scientific knowledge. This is not a problem unique to non-scientists, though — once scientists reach the end of the tether of their expertise, they end up having to approach the knowledge claims of scientists in other fields with some mixture of trust and skepticism. (It’s reasonable to ask what the right mixture of trust and skepticism would be in particular circumstances, but there’s not a handy formula with which to calculate this.)

Are we in a position where, outside our own narrow area of expertise, we either have to commit to agnosticism or take someone else’s word for things? If we’re not able to directly evaluate the data, does that mean we have no good way to evaluate the credibility of the scientist pointing to the data to make a claim?

This raises an interesting question for science journalism, not so much about what role it should play as what role it could play.

If only a trained scientist could evaluate the credibility of scientific claims (and then perhaps only in the particular scientific field in which one was trained), this might reduce science journalism to a mere matter of publishing press releases, or of reporting on scientists’ social events, sense of style, and the like. Alternatively, if the public looked to science journalists not just to communicate the knowledge claims various scientists are putting forward but also to do some evaluative work on our behalf — sorting out credible claims and credible scientists from the crowd — we might imagine that good science journalism demands extensive scientific training (and that we probably need a separate science reporter for each specialized area of science to be covered).

In an era where media outlets are more likely to cut the science desk than expand it, pinning our hopes on legions of science-Ph.D.-earning reporters on the science beat might be a bad idea.

I don’t think our prospects for evaluating scientific credibility are quite that bad.

Scientific knowledge is built on empirical data, and the details of the data (what sort of data is relevant to the question at hand, what kind of data can we actually collect, what techniques are better or worse for collecting the data, how we distinguish data from noise, etc.) can vary quite a lot in different scientific disciplines, and in different areas of research within those disciplines. However, there are commonalities in the basic patterns of reasoning that scientists in all fields use to compare their theories with their data. Some of these patterns of reasoning may be rather sophisticated, perhaps even non-intuitive. (I’m guessing certain kinds of probabilistic or statistical reasoning might fit this category.) But others will be the patterns of reasoning that get highlighted when “the scientific method” is taught.

In other words, even if I can’t evaluate someone else’s raw data to tell you directly what it means, I can evaluate the way that data is used to support or refute claims. I can recognize logical fallacies and distinguish them from instances of valid reasoning. Moreover, this is the kind of thing that a non-scientist who is good at critical thinking (whether a journalist or a member of the public consuming a news story) could evaluate as well.

One way to judge scientific credibility (or lack thereof) is to scope out the logical structure of the arguments a scientist is putting up for consideration. It is possible to judge whether arguments have the right kind of relationship to the empirical data without wallowing in that data oneself. Credible scientists can lay out:

  • Here’s my hypothesis.
  • Here’s what you’d expect to observe if the hypothesis is true. Here, on the other hand, is what you’d expect to observe if the hypothesis is false.
  • Here’s what we actually observed (and here are the steps we took to control the other variables).
  • Here’s what we can say (and with what degree of certainty) about the hypothesis in the light of these results.
  • Here’s the next study we’d like to do to be even more sure.

And, not only will the logical connections between the data and what is inferred from them look plausible to the science writer who is hip to the scientific method, but they ought to look plausible to other scientists — even to scientists who might prefer different hypotheses, or different experimental approaches. If what makes something good science is its epistemology — the process by which data are used to generate and/or support knowledge claims — then even scientists who may disagree with those knowledge claims should still be able to recognize the patterns of reasoning involved as properly scientific. This suggests a couple more things we might ask credible scientists to display:

  • Here are the results of which we’re aware (published and unpublished) that might undermine our findings.
  • Here’s how we have taken their criticisms (or implied criticisms) seriously in evaluating our own results.
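
To make that logical skeleton concrete, here is a minimal toy sketch (my own example, not drawn from any particular study): a hypothesis, what it predicts, some pretend observations, and the tentative conclusion they support. The coin-tossing scenario and all of the numbers are placeholders.

    import numpy as np

    # Toy example of the structure above. Hypothesis: this coin is fair.
    # If true: heads counts in 200 tosses cluster around 100.
    # If false: counts sit systematically far from 100.

    rng = np.random.default_rng(1)
    n_tosses, observed_heads = 200, 123        # placeholder "observed" data

    # How often would a fair coin produce a result at least this extreme?
    sims = rng.binomial(n_tosses, 0.5, size=100_000)
    p_value = np.mean(np.abs(sims - n_tosses / 2) >= abs(observed_heads - n_tosses / 2))

    print(f"two-sided p-value under the fair-coin hypothesis: {p_value:.4f}")
    # A tiny p-value is evidence against the hypothesis (a candidate falsification);
    # a large one means the hypothesis survives this test, but is never "proven."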

If the patterns of reasoning are properly scientific, why wouldn’t all the scientists agree about the knowledge claims themselves? Perhaps they’re taking different sets of data into account, or they disagree about certain of the assumptions made in framing the question. The important thing to notice here is that scientists can disagree with each other about experimental results and scientific conclusions without thinking that the other guy is a bad scientist. The hope is that, in the fullness of time, more data and dialogue will resolve the disagreements. But good, smart, honest scientists can disagree.

This is not to say that there aren’t folks in lab coats whose thinking is sloppy. Indeed, catching sloppy thinking is the kind of thing you’d hope a good general understanding of science would help someone (like a scientific colleague, or a science journalist) to do. At that point, of course, it’s good to have backup — other scientists who can give you their read on the pattern of reasoning, for example. And, to the extent that a scientist — especially one talking “on the record” about the science (whether to a reporter or to other scientists or to scientifically literate members of the public) — displays sloppy thinking, that would tend to undermine his or her credibility.

There are other kinds of evaluation you can probably make of a scientist’s credibility without being an expert in his or her field. Examining a scientific paper to see if the sources cited make the claims that they are purported to make by the paper citing them is one way to assess credibility. Determining whether a scientist might be biased by an employer or a funding source may be harder. But there, I suspect many of the scientists themselves are aware of these concerns and will go the extra mile to establish their credibility by taking the possibility that they are seeing what they want to see very seriously and testing their hypotheses fairly stringently so they can answer possible objections.

It’s harder still to get a good read on the credibility of scientists who present evidence and interpretations with the right sort of logical structure but who have, in fact, fabricated or falsified that evidence. Being wary of results that seem too good to be true is probably a good strategy here. Also, once a scientist is caught in such misconduct, it’s entirely appropriate not to trust another word that comes from his or her mouth.

One of the things fans of science have tended to like is that it’s a route to knowledge that is, at least potentially, open to any of us. It draws on empirical data we can get at through our senses and on our powers of rational thinking. As it happens, the empirical data have gotten pretty complicated, and there’s usually a good bit of technology between the thing in the world we’re trying to observe and the sense organs we’re using to observe it. However, those powers of rational thinking are still at the center of how the scientific knowledge gets built. Those powers need careful cultivation, but to at least a first approximation they may be enough to help us tell the people doing good science from the cranks.

What a scientist knows about science (or, the limits of expertise).

In a world where scientific knowledge might be useful in guiding decisions we make individually and collectively, one reason non-scientists might want to listen to scientists is that scientists are presumed to have the expertise to sort reliable knowledge claims from snake oil. If you’re not in the position to make your own scientific knowledge, your best bet might be to have a scientific knowledge builder tell you what counts as good science.

But, can members of the public depend on any scientist off the street (or out of the lab) to vet all the putative scientific claims for credibility?

Here, we have to grapple with the relationship between Science and particular scientific disciplines — and especially with the question of whether there is enough of a common core between different areas of science that scientists trained in one area can be trusted to recognize the strengths and weaknesses of work in another scientific area. How important is all that specialization research scientists do? Can we trust that, to some extent, all science follows the same rules, thus equipping any scientist to weigh in intelligently about any given piece of it?

It’s hard to give you a general answer to that question. Instead, as a starting point for discussion, let me lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.

As someone trained in a science, I am qualified:

  1. to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
  2. to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
  3. to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
  4. perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
  5. to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
  6. to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 count as support for alternative theory T2).
  7. to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
  8. to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
  9. to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).

This list of my qualifications is an expression of my comfort level more than anything else. It’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends is part of being a responsible scientist.

So, what kind of help can a scientist give the public in evaluating what is presented as scientific knowledge? What kind of trouble can a scientist encounter in trying to sort out the good from the bad science for the public? Does the help scientists offer here always help?

What the chlorite-iodide reaction taught me.

Since 2011 is the International Year of Chemistry, the good folks at CENtral Science are organizing a blog carnival on the theme, “Your favorite chemical reaction”.

My favorite chemical reaction is the chlorite-iodide reaction, and it’s my favorite because of the life lessons it has taught me.

The reaction has overall stoichiometry:
ClO₂⁻ + 4 I⁻ + 4 H⁺ = 2 I₂ + Cl⁻ + 2 H₂O
Written out that way, as a simple set of reactants and products, it doesn’t look that exciting, but when the reaction is run in a continuous flow stirred tank reactor (CSTR), where reactants are flowed in and products are removed, it can exhibit oscillatory behavior. The oscillations in the concentrations of iodine (I₂) and iodide (I⁻) can be tracked experimentally, the former by measuring UV absorbance at 460 nm, the latter by measuring the potential of an ion-specific electrode.
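
For readers who like to see the arithmetic, here is a minimal sketch of how those two raw signals are typically converted into concentrations: the Beer-Lambert law for the absorbance measurement and a Nernstian calibration for the electrode. The molar absorptivity, calibration constants, and example readings are assumed placeholder values, not the numbers from the actual experiments.

    import numpy as np

    # Sketch (not from the original papers) of turning raw signals into
    # concentrations; all constants here are placeholders.

    PATH_LENGTH_CM = 1.0        # cuvette path length
    EPSILON_I2_460 = 750.0      # assumed molar absorptivity of I2 at 460 nm (M^-1 cm^-1)

    def iodine_conc_from_absorbance(A460):
        """Beer-Lambert law: A = epsilon * path * [I2]."""
        return A460 / (EPSILON_I2_460 * PATH_LENGTH_CM)

    def iodide_conc_from_potential(E_mV, E0_mV=-50.0, slope_mV=-59.2):
        """Nernstian response of an iodide-selective electrode:
        E = E0 + slope * log10([I-]); E0 and slope would come from
        calibration against standard solutions (placeholder values here)."""
        return 10.0 ** ((E_mV - E0_mV) / slope_mV)

    print(iodine_conc_from_absorbance(0.35))     # [I2] in mol/L
    print(iodide_conc_from_potential(150.0))     # [I-] in mol/L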

An early study of the kinetics of this reaction determined that it “is catalyzed by the iodine product, and the autocatalysis is inhibited by iodide ion.” (Kern and Kim 1965, 5309) In 1985, Epstein and Kustin proposed the first mechanism for this reaction to account for the oscillatory behavior, one that includes 13 elementary steps and 12 chemical species. Two years later, Citri and Epstein proposed an improved model mechanism with 8 elementary mechanistic steps and 10 chemical species. The Citri-Epstein model proposes a different set of elementary steps to describe the oxidation of iodide by chlorite. In addition, it eliminates the intermediate IClO₂, “whose existence has been called into question elsewhere.” (Citri and Epstein 1987, 6035) The resulting model mechanism seemed to produce better agreement between predicted and measured concentrations of iodide and iodine than that given by the earlier model.
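
As a rough illustration of what a “model mechanism” means in practice, here is a sketch of how elementary steps become rate equations in a CSTR: each step contributes mass-action terms, and the flow adds a k0*(c_feed - c) term to every species. The two steps and all rate constants below are placeholders for illustration only; this is not the Citri-Epstein mechanism.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Generic sketch of how a mechanism becomes a CSTR model: every elementary
    # step contributes mass-action terms to the rate equations, and the flow
    # adds k0*(c_feed - c) to each species. The steps and constants below are
    # placeholders, NOT the chlorite-iodide mechanism.

    k0 = 1.0e-3                                  # reciprocal residence time (1/s)
    k1, k2 = 5.0, 2.0                            # placeholder rate constants
    c_feed = np.array([1.0e-3, 2.0e-3, 0.0])     # feed concentrations of A, B, X (M)

    def rhs(t, c):
        a, b, x = c
        r1 = k1 * a * b      # placeholder elementary step: A + B -> X
        r2 = k2 * x * b      # placeholder elementary step: X + B -> products
        chem = np.array([-r1, -r1 - r2, r1 - r2])
        return chem + k0 * (c_feed - c)          # chemistry plus inflow/outflow

    sol = solve_ivp(rhs, (0.0, 5000.0), c_feed, method="LSODA", max_step=10.0)
    print(sol.y[:, -1])                          # concentrations at the end of the run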

The chlorite-iodide reaction also happens to have been the reaction at the center of most of my research for my Ph.D. in chemistry.

Here are some of the lessons I learned working with the chlorite-iodide reaction:

  1. Experimental tractability matters, at least when you’re doing experiments. The general thrust of my research was to work out clever ways to perform empirical tests of proposed mechanisms for oscillating chemical reactions, but the chlorite-iodide reaction was not the first reaction I worked with. I started out trying to make some clever measurements on another reaction, the minimal bromate oscillator (MBO). However, after maybe six months of fighting to set up the conditions where the MBO would give me oscillations, I had to make my peace with the idea that its “small” region in phase-space with oscillatory behavior was really, really small. Luckily, in my reading of the relevant literature on the experimental and theoretical approaches we were taking, I had come across a similar inorganic chemical oscillator with an “ample” oscillatory region, one which promised to make my time in the lab exponentially less frustrating. That’s right, the chlorite-iodide reaction was my rebound system, but we stayed together and made it work.
  2. When your original research project gets stuck, it’s good to have a detailed plan for how to move forward when you talk to the boss. My advisor was really keen for that minimal bromate oscillator that was making my life in the lab a nightmare. So, when I met with him to tell him I wanted to break up with the MBO and take up with the chlorite-iodide reaction, I had to make the case for the new system. I came armed with the articles that described its substantial oscillatory region, and the articles that described the MBO’s tiny one. I prepared some calculations describing how much more precise our pump-rates would need to be to find MBO oscillations, and catalogues that listed the prices of the new equipment we would need. I brought the articles proposing mechanisms for the chlorite-iodide reaction so I could display the virtues of their elementary mechanistic steps from the point of view of the kind of experimental probing we had in mind. Because I did my homework and was able to make a persuasive case, the boss was happy to let me start working with the chlorite-iodide system right away, and to kiss the minimal bromate oscillator goodbye forever.
  3. Experimental tractability is relative, not absolute (and Materials and Methods often leave stuff out). The chlorite-iodide reaction was certainly easier to work with — within a week, I found oscillations where the literature said I would — but it was not completely smooth sailing. There were pumps that didn’t perform as they should, which meant I was taking them apart and swapping out components. There were days when I couldn’t get any reliable measurements because the pH meter I used with my iodide-specific electrode had been left on for too many hours in a row. And, there were little details I discovered in setting up experimental runs day in and day out that were not fully discussed in the “materials and methods” section of the published papers describing the chlorite-iodide reaction. Reproducibility is hard.
  4. Reactions happen in three-dimensional space, not just in reaction space. One of the experimental challenges of the chlorite-iodide reaction is that, to find the dynamical behavior you’re looking for, you have to stir the reactants in the tank reactor at the right speed. Stirring much faster or much slower will change the dynamics of the reaction, as will using a reactor with significantly different internal geometry. (“Dimples” protruding into the cylindrical space inside the reactor are supposed to help you mix the reactants more effectively, rather than giving them the opportunity to hang out unmixed by the walls.) Appropriate stirring speed was not one of the parameters spelled out by the papers whose descriptions of the reaction I was using to get started, nor was reactor geometry. I had to do experiments to work out the stirring speed that (with the geometry of the reaction vessel we had on hand) produced the same behavior as these other papers were reporting. Once I found that stir-speed, I kept that constant for my experimental runs. Also, I made detailed measurements of the reactor we were using, which turned out to be a really good thing when that reactor broke. I was able to take those measurements to the glass-blower’s shop and get replacements (plural) made.
  5. Time well spent in setting things up is frequently rewarded with good data. It was absolutely worth it to spend a couple hours at the beginning of each run calibrating pump flow-rates and checking out the iodide-selective electrode performance with standard solutions, since this let me apply the experimental conditions I wanted to and make accurate measurements. Did I mention that reproducibility is hard?
  6. Qualitative measurements require patience, too. Among other things, I was interested in mapping the edges of regions in phase-space where the chlorite-iodide reaction displayed different kinds of behavior. On one edge, there was a bifurcation where you would find steady state behavior (i.e., stable concentrations of reaction species) that, coming up on the bifurcation point, became tiny-amplitude oscillations that grew. On the other edge, the oscillations had attained their maximum amplitude, but their period (that is, the lag between oscillatory peaks) grew longer and longer until there weren’t any more peaks and the reaction settled into another steady state. The thing was, it was hard to know when you were set up with conditions where the period of oscillation was just really, really long (sometimes around 20 minutes between peaks, if memory serves) or when you had found the steady state. You had to be patient. While I was exploring that edge of the reaction in phase-space, I started thinking maybe that was a good metaphor for certain aspects of graduate school.
  7. You probably can’t measure everything you’d want to measure, but sometimes measuring one more thing can help a lot. As I mentioned above, the Citri-Epstein mechanism for the chlorite-iodide reaction posited ten chemical species in the various steps of the reaction. In a perfect world, you’d want to be able to measure each of those species simultaneously over time as the reaction proceeded. But, as one learns pretty quickly in grad school, this is not a perfect world. When I started with this reaction, published papers were reporting simultaneous dynamical measurements of only two of those species (iodide and iodine). Chloride is one of the hypothesized intermediates, and there are chloride-specific electrodes on the market. However, the membrane in a chloride-specific electrode also reacts with … iodide. Other intermediate species might be measured by various chemical assays if the progress of the reaction could be halted in the samples being assayed. By the end of my graduate research, I had figured out a way to use a flow-through cuvette and a seat-of-the-pants spectral deconvolution technique to measure the time-series of one additional species in the reaction, the chlorite ion (ClO₂⁻); a toy sketch of that kind of deconvolution follows this list. This was enough to do some evaluation of the proposed mechanism that was not possible without it.
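
Here is the toy sketch promised in lesson 7: if the components’ reference spectra are known, the Beer-Lambert law makes the measured spectrum a linear combination of them, and a least-squares solve at each time point recovers the concentrations. The spectra, peak positions, and concentrations below are all invented for illustration and are not the values from my experiments.

    import numpy as np

    # Toy spectral deconvolution: by Beer-Lambert, the measured spectrum is a
    # linear combination of the components' reference spectra, so concentrations
    # fall out of a least-squares solve at each time point. Every spectrum and
    # number here is invented for illustration.

    wavelengths = np.arange(250, 501, 5)                      # nm
    def gaussian_band(center, width, height):
        return height * np.exp(-((wavelengths - center) / width) ** 2)

    # Pretend reference spectra (per molar, per cm) for three absorbing species
    ref = np.column_stack([
        gaussian_band(460, 40, 750.0),      # "I2"
        gaussian_band(353, 25, 26000.0),    # "I3-"
        gaussian_band(260, 20, 150.0),      # "ClO2-"
    ])

    # A "measured" spectrum assembled from known concentrations plus noise
    true_conc = np.array([2.0e-4, 1.0e-5, 5.0e-4])
    rng = np.random.default_rng(2)
    measured = ref @ true_conc + rng.normal(0.0, 1.0e-3, wavelengths.size)

    # Least-squares deconvolution: recover the concentrations from the mixture
    conc, *_ = np.linalg.lstsq(ref, measured, rcond=None)
    print(conc)   # should land close to true_conc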

Later on, when I became a philosopher of science, this work gave me some insights into the circumstances in which chemists are happy to be instrumentalists (e.g., recognizing that the fact that a proposed reaction mechanism was consistent with the observed kinetics of the reaction was no guarantee that this was the actual mechanism by which the reaction proceeded) and the circumstances in which they lean towards being realists (by finding ways to distinguish better proposed mechanisms from worse ones). But back when I was actually getting glassware dirty running the chlorite-iodide reaction, this reaction helped me learn how to be a scientist.

_____

Works cited:
Citri, Ofra, and Irving R. Epstein (1987) “Dynamical Behavior in the Chlorite-Iodide Reaction: A Simplified Mechanism”, Journal of Physical Chemistry 91: 6034-6040.

Epstein, Irving R., and Kenneth Kustin (1985) “A Mechanism for Dynamical Behavior in the Oscillatory Chlorite-Iodide Reaction”, Journal of Physical Chemistry 89: 2275-2282.

Kern, David M., and Chang-Hwan Kim (1965) “Iodine Catalysis in the Chlorite-Iodide Reaction”, Journal of the American Chemical Society 87(23): 5309-5313.

Trust me, I’m a scientist.

In an earlier post, I described an ideal of the tribe of science that the focus of scientific discourse should be squarely on the content — the hypotheses scientists are working with, the empirical data they have amassed, the experimental strategies they have developed for getting more information about our world — rather than on the particular details of the people involved in this discourse. This ideal is what sociologist of science Robert K. Merton* described as the “norm of universalism”.

Ideals, being ideals, can be hard to live up to. Anonymous peer review of scientific journal articles notwithstanding, there are conversations in the tribe of science where it seems to matter a lot who is talking, not just what she’s saying about the science. Some scientists were trained by pioneers in their fields, or hired to work in prestigious and well-funded university departments. Some have published surprising results that have set in motion major changes in the scientific understanding of a particular phenomenon, or have won Nobel Prizes.

The rest can feel like anonymous members in a sea of scientists, doing the day-to-day labor of advancing our knowledge without benefit of any star power within the community. Indeed, probably lots of scientists prefer the task of making the knowledge, having no special need to have their names widely known within their fields or to be piled with accolades.

But there’s a peculiar consequence of the idea that scientists are all in the knowledge-building trenches together, focused on the common task rather than on self-aggrandizement. When scientists are happily ensconced in the tribe of science, very few of them take themselves to be stars. But when the larger society, made up mostly of non-scientists, encounters a scientist — any scientist — that larger society might take him to be a star.

Merton touched on this issue when he described another norm of the tribe of science, disinterestedness. One way to think about the norm of disinterestedness is that scientists aren’t doing science primarily to get the big bucks, or fame, or attractive dates. Merton’s description of this community value is a bit more subtle. He notes that disinterestedness is different from altruism, and that scientists needn’t be saints.

The best way to understand disinterestedness might be to think of how a scientist working within her tribe is different from an expert out in the world dealing with laypeople. The expert, knowing more than the layperson, could exploit the layperson’s ignorance or his tendency to trust the judgment of the expert. The expert, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold.

The scientist working within the tribe of science can expect no such advantage. Thus, trying to put one over on other scientists is a strategy that shouldn’t get you far. By necessity, the knowledge claims you advance are going to be useful primarily in terms of what they add to the shared body of scientific knowledge, if only because your being accountable to the other scientists in the tribe means that there is no value added to the claims from using them to play your scientific peers for chumps.

Merton described situations in which the bona fides of the tribe of science were used in the service of non-scientific ends:

Science realizes its claims. However, its authority can be and is appropriated for interested purposes, precisely because the laity is often in no position to distinguish spurious from genuine claims to such authority. The presumably scientific pronouncements of totalitarian spokesmen on race or economy or history are for the uninstructed laity of the same order as newspaper reports of an expanding universe or wave mechanics. In both instances, they cannot be checked by the man-in-the-street and in both instances, they may run counter to common sense. If anything, the myths will seem more plausible and are certainly more comprehensible to the general public than accredited scientific theories, since they are closer to common-sense experience and to cultural bias. Partly as a result of scientific achievements, therefore, the population at large becomes susceptible to new mysticisms expressed in apparently scientific terms. The borrowed prestige of science bestows prestige on the unscientific doctrine. (p. 277)

(Bold emphasis added)

The success of science — the concentrated expertise of the tribe — means that those outside of it may take “scientific” claims at face value. Unable to make an independent evaluation of their credibility, lay people can easily fall prey to a wolf in scientist’s clothing, to a huckster assumed to be committed first and foremost to the facts (as scientists try to be) who is actually distorting them to look after his own ends.

This presents a serious challenge for non-scientists — and for scientists, too.

If the non-scientist can’t determine whether a purportedly scientific claim is a good one — whether, for example, it is supported by the empirical evidence — the non-scientist has to choose between accepting that claim on the authority of someone who claims to be a scientist (which in itself raises another evaluative problem for the non-scientist — what kind of credentials do you need to see from the guy wearing the lab coat to believe that he’s a proper scientist?), or setting aside all putative scientific claims and remaining agnostic about them. You trust that the “Science” label on a claim tells you something about its quality, or you recognize that it conveys even less useful information to you than a label that says, “Now with Jojoba!”

If late-night infomercials and commercial websites are any indication, there are not strong labeling laws covering what can be labeled as “Science”, at least in a sales pitch aimed at the public at large.** This leaves open the possibility that the claims the guy in the white lab coat says are backed by Science would not be recognized by other scientists as backed by science.

The problem this presents for scientists is two-fold.

On the one hand, scientists are trying to get along in a larger society where some of what they discover in their day jobs (building knowledge) could end up being relevant to how that larger society makes decisions. If we want our governments to set sensible policy as far as tackling disease outbreaks, or building infrastructure that won’t crumble in floods, or ensuring that natural resources are utilized sustainably, it would be good for that policy to be informed by the best relevant knowledge we have on the subject. Policy makers, in other words, want to be able to rely on science — something that scientists want, too (since usually they are working as hard as they are to build the knowledge so that the knowledge can be put to good use). But that can be hard to do if some members of the tribe of science go rogue, trading on their scientific credibility to sell something as science that is not.

Even if policy makers have some reasonable way to spot the people slapping the Science label on claims that aren’t scientific, there will be problems in a democratic society where the public at large can’t reliably tell scientists from purveyors of snake-oil.

In such situations, the public at large may worry that anyone with scientific credentials could be playing them for suckers. Scientists who they don’t already know by reputation may be presumed to be looking out for their own interests rather than to be advancing scientific knowledge.

A public distrustful of scientists’ good intentions or trustworthiness in interactions with non-scientists will convey that distrust to the people making policy for them.

This means that scientists have a strong interest in identifying the members of the tribe of science who go rogue and try to abuse the public’s trust. People presenting themselves as scientists while selling unscientific claims are diluting the brand of Science. They undermine the reputation science has for building reliable knowledge. They undercut the claim other scientists make that, in their capacity as scientists, they hold themselves accountable to the way the world really is — to the facts, no matter how inconvenient they may be.

Indeed, if the tribe of science can’t make the case that it is serious about the task of building reliable knowledge about the world and using that knowledge to achieve good things for the public, the larger public may decide that putting up public monies to support scientific research is a bad idea. This, in turn, could lead to a world where most of the scientific knowledge is built with private money, by private industry — in which case, we might have to get most of our scientific knowledge from companies that actually are trying to sell us something.

_____
*Robert K. Merton, “The Normative Structure of Science,” in The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press (1979), 267-278.

**There are, however, rules that require the sellers of certain kinds of products to state clearly when they are making claims that have not been evaluated by the Food and Drug Administration.

Scientific credibility: is it who you are, or how you do it?

Part of the appeal of science is that it’s a methodical quest for a reliable picture of how our world works. Creativity and insight are crucial at various junctures in this quest, but careful work and clear reasoning do much of the heavy lifting. Among other things, this means that the grade-schooler’s ambition to be a scientist someday is significantly more attainable than the ambition to be a Grammy-winning recording artist, a pro-athlete, an astronaut, or the President of the United States.

Scientific methodology, rather than being a closely guarded trade secret, is a freely available resource.

Because of this, there is a sense that it doesn’t matter too much who is using that scientific methodology. Rather, what matters is what scientists discover by way of the methodology.

What about Dalibor Sames? The Bengü Sezen fraud and the responsibilities of the PI in the training of new scientists.

Unless you are a chemist or a habitual follower of scientific misconduct stories, it’s possible that you missed the saga of Bengü Sezen.

From 2000 to 2005, Sezen was a graduate student in chemistry at Columbia University, working in the laboratory of then-Assistant Professor Dalibor Sames. She appeared to be a talented scientist in training, and during her graduate studies was lead author on three papers published in the Journal of the American Chemical Society. Columbia University conferred upon her a Ph.D. in chemistry (with distinction).

But, as it turns out, her published results were not reproducible, an issue raised by chemists at Columbia and elsewhere as early as 2002. Further, the results were irreproducible for very good reason: as reported by Chemical & Engineering News, investigations by Columbia University and by the U.S. Department of Health & Human Services (which is home to the Office of Research Integrity) revealed

a massive and sustained effort by Sezen over the course of more than a decade to dope experiments, manipulate and falsify NMR and elemental analysis research data, and create fictitious people and organizations to vouch for the reproducibility of her results.

In the wake of the investigations, Sames has retracted the papers coauthored with Sezen (Sezen refused to retract them on the grounds that she stood by the work), and Columbia has revoked the Ph.D. it granted Sezen.

The evidence from the investigations supports the hypothesis that Bengü Sezen was a liar masquerading as a chemist, that she claimed to have done experiments that she hadn’t, to have obtained NMR spectra that she created (in part) with correction fluid, to have built molecules that she didn’t build. She committed fraud that introduced not just mistakes but lies into the scientific literature.

But she didn’t — she couldn’t — do this alone. She didn’t commit her fraud as a principal investigator (PI). Rather she did it as a scientific trainee, a graduate student working under the supervision of Dalibor Sames (who is currently an Associate Professor at Columbia). It’s worth examining what responsibility Sames bears for what happened here.