Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subjects research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done to Henrietta Lacks and to the family members who survived her by scientists working with HeLa cells, the National Cancer Institute- and Gates Foundation-funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (rather than being obvious monsters or rogues set apart from it). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subjects research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, and never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.

Careers (not just jobs) for Ph.D.s outside the academy.

A week ago I was in Boston for the 2013 annual meeting of the History of Science Society. Immediately after the session in which I was a speaker, I attended a session (Sa31 in this program) called “Happiness beyond the Professoriate — Advising and Embracing Careers Outside the Academy.” The discussion there was specifically pitched at people working in the history of science (whether earning their Ph.D.s or advising those who are), but much of it struck me as broadly applicable to people in other fields — not just fields like philosophy, but also science, technology, engineering, and mathematics (STEM) fields.

The discourse in the session was framed in terms of recognizing, and communicating, that getting a job just like your advisor’s (i.e., as a faculty member at a research university with a Ph.D. program in your field — or, loosening it slightly, as permanent faculty at a college or university, even one not primarily focused on research or on training new members of the profession at the Ph.D. level) shouldn’t be a necessary condition for maintaining your professional identity and place in the professional community. Make no mistake, people in one’s discipline (including those training new members of the profession at the Ph.D. level) frequently do discount people as no longer really members of the profession for failing to succeed in the One True Career Path, but the panel asserted that they shouldn’t.

And, they provided plenty of compelling reasons why the “One True Career Path” approach is problematic. Chief among these, at least in fields like history, is that this approach feeds the creation and growth of armies of adjunct faculty, hoping that someday they will become regular faculty, and in the meantime working for very low wages relative to the amount of work they do (and relative to their training and expertise), experiencing serious job insecurity (sometimes not finding out whether they’ll have classes to teach until the academic term is actually underway), and enduring all manner of employer shenanigans (like having their teaching loads reduced to 50% of full time so the universities employing them are not required by law to provide health care coverage). Worse, insistence on One True Career Path fails to acknowledge that happiness is important.

Panelist Jim Grossman noted that the very language of “alternative careers” reinforces this problematic view by building in the assumption that there is a default career path. Speaking instead of the range of careers Ph.D.s actually pursue might challenge the assumption that all options other than the default are lesser options.

Grossman identified other bits of vocabulary that ought to be excised from these discussions. He argued against speaking of “the job market” when one really means “the academic job market”. Otherwise, the suggestion is that you can’t really consider those other jobs without exiting the profession. Talking about “job placement,” he said, might have made sense back in the day when the chair of a hiring department called the chair of another department to say, “Send us your best man!” rather than conducting an actual job search. Those days are long gone.

And Grossman had lots to say about why we should stop talking about “overproduction of Ph.D.s.”

Ph.D.s, he noted, are earned by people, not produced like widgets on a factory line. Describing the number of new Ph.D.-holders each year as overproduction is claiming that there are too many — but again, this is too many relative to a specific kind of career trajectory assumed implicitly to be the only one worth pursuing. There are many sectors in the career landscape that could benefit from the talents of these Ph.D.-holders, so why are we not describing the current situation as one of “underconsumption of Ph.D.s”? Finally, the “overproduction of Ph.D.s” locution doesn’t seem helpful in a context where there seems to be no good way to stop departments from “producing” as many Ph.D.s as they want to. If market forces were enough to address this imbalance, we wouldn’t have armies of adjuncts.

Someone in the discussion pointed out that STEM fields have for some time had similar issues of Ph.D. supply and demand, suggesting that they might be ahead of the curve in developing useful responses which other disciplines could borrow. However, the situation in STEM fields differs in that industrial career paths have been treated as legitimate (and as not removing you from the profession). And, more generally, society seems to take the skills and qualities of mind developed during a STEM Ph.D. as useful and broadly applicable, while those developed during a history or philosophy Ph.D. are assumed to be hopelessly esoteric. Still, it was noted that while STEM fields don’t generate the same armies of adjuncts as humanities fields, they do have what might be described as the “endless postdoc” problem.

Given that structural stagnation of the academic job market is real (and has been a reality for something like 40 years in the history of science), panelist Lynn Nyhart observed that it would be foolish for Ph.D. students not to consider — and prepare for — other kinds of jobs. As well, Nyhart argued that as long as faculty take on graduate students, they have a responsibility to help them find jobs.

Despite professing that they are essentially clueless about career paths other than academia, advisors do have resources they can draw upon in helping their graduate students. Among these is the network of Ph.D. alumni from their graduate program, as well as the network of classmates from their own Ph.D. training. Chances are that a number of people in these networks are doing a wide range of different things with their Ph.D.s — and that they could provide valuable information and contacts. (Also, keeping in contact with these folks recognizes that they are still valued members of your professional community, rather than treating them as dead to you if they did not pursue the One True Career Path.)

Nyhart also recommended Versatilephd.com, especially the PhD Career Finder tab, as a valuable resource for exploring the different kinds of work for which Ph.D.s in various fields can serve as preparation. Some of the good stuff on the site is premium content, but if your university subscribes to the site your access to that premium content may already be paid for.

Nyhart noted that preparing Ph.D. students for a wide range of careers doesn’t require lowering discipline-specific standards, nor changing the curriculum — although, as Grossman pointed out, it might mean thinking more creatively about what skills, qualities of mind, and experiences existing courses impart. After all, skills that are good training for a career in academia — being a good teacher, an effective committee member, an excellent researcher, a persuasive writer, a productive collaborator — are skills that are portable to other kinds of careers.

David Attis, who has a Ph.D. in history of science and has been working in the private sector for about a decade, mentioned some practical skills worth cultivating for Ph.D.s pursuing private sector careers. These include having a tight two-minute explanation of your thesis geared to a non-specialist audience, being able to demonstrate your facility in approaching and solving non-academic problems, and being able to work on the timescale of business, not thesis writing (i.e., five hours to write a two-page memo is far too slow). Attis said that private sector employers are looking for people who can work well on teams and who can be flexible in contexts beyond teaching and research.

I found the discussion in this session incredibly useful, and I hope some of the important issues raised there will find their way to the graduate advisors and Ph.D. students who weren’t in the room for it, no matter what their academic discipline.

Building a scientific method around the ideal of objectivity.

Modern science seems committed to two ideas: that the facts it seeks should be verifiable by anyone, and that gathering such facts is a good strategy for building a reliable picture of the world as it really is. Historically, though, these two ideas have not always gone together. In his article “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” [1] Peter Machamer describes a historical moment when these two senses of objectivity were coupled.

Prior to the emergence of a scientific method that stressed objectivity, Machamer says, most people thought knowledge came from divine inspiration (whether written in holy books or transmitted by religious authorities) or from ancient sources that were only shared with initiates (think alchemy, stone masonry, and healing arts here). Knowledge, in other words, was a scarce resource that not everyone could get his or her hands (or brains) on. To the extent that a person found the world intelligible at all, it was probably based on the story that someone else in a special position of authority was telling.

How did this change? Machamer argues that it changed when people started to think of themselves as individuals. The erosion of feudalism, the reformation and counter-reformation, European voyages to the New World (which included encounters with plants, animals, and people previously unknown in the Old World), and the shift from a geocentric to a heliocentric view of the cosmos all contributed to this shift by calling old knowledge and old sources of authority into question. As the old sources of knowledge became less credible (or at least less monopolistic), the individual came to be seen as a new source of knowledge.

Machamer describes two key aspects of individuality at work. One is what he calls the “Epistemic I.” This is the recognition that an individual can gain knowledge and ideas directly from his or her own interactions with the world, and that these interactions depend on senses and powers of reason that all humans have (or could have, given the opportunity to develop them). This recognition casts knowledge (and the ability to get it) as universal and democratic. The power to build knowledge is not concentrated in the hands (or eyes) of just the elite — this power is our birthright as human beings.

The other side of individuality here is what Machamer calls the “Entrepreneurial I.” This is the belief that an individual’s insights deserve credit and recognition, perhaps even payment. This belief casts the individual with such insights as a leader, or a teacher — definitely, as a special human worth listening to.

Pause for a moment to notice that this tension is still present in science. For all the commitment to science as an enterprise that builds knowledge from observations of the world that others must be able to make (which is the whole point of reproducibility), scientists also compete for prestige and career capital based on which individual was the first to observe (and report observing) a particular detail that anyone could see. Seeing something new is not effortless (as we’ve discussed in the last two posts), but there’s still an uneasy coexistence between the idea of scientific knowledge-building as within the powers of normal human beings and the idea of scientific knowledge-building as the activity of special human beings with uniquely powerful insights and empirical capacities.

The two “I”s that Machamer describes came together as thinkers in the 1600s tried to work out a reliable method by which individuals could replace discredited sources of “knowledge” and expand on what remained to produce their own knowledge. Lots of “natural philosophers” (what we would call scientists today) set out to formulate just such a method. The paradox here is that each thinker was selling (often literally) a way of knowing that was supposed to work for everyone, while simultaneously presenting himself as the only one clever enough to have found it.

Looking for a method that anyone could use to get the facts about the world, the thinkers Machamer describes recognized that they needed to formulate a clear set of procedures that was broadly applicable to the different kinds of phenomena in the world about which people wanted to build knowledge, that was teachable (rather than being a method that only the person who came up with it could use), and that was able to bring about consensus and halt controversy. However, in the 1600s there were many candidates for this method on offer, which meant that there was a good bit of controversy about the question of which method was the method.

Among the contenders for the method, the Baconian method involved cataloguing many experiences of phenomena, then figuring out how to classify them. The Galilean method involved representing the phenomena in terms of mechanical models (and even going so far as to build the corresponding machine). The Hobbesian method focused on analyzing compositions and divisions of substances in order to distinguish causes from effects. And these were just three contenders in a crowded field. If there was a common thread in these many methods, it was describing or representing the phenomena of interest in spatial terms. In the seventeenth century, as now, seeing is believing.

In a historical moment when people were considering the accessibility and the power of knowledge through experience, it became clear to the natural philosophers trying to develop an appropriate method that such knowledge also required control. To get knowledge, it was not enough to have just any experience — you had to have the right kind of experiences. This meant that the methods under development had to give guidance on how to track empirical data and then analyze it. As well, those developing these methods had to invent the concept of a controlled experiment.

Whether it was in a published dialogue or an experiment conducted in a public space before witnesses, the natural philosophers developing knowledge-building methods recognized the importance of demonstration. Machamer writes:

Demonstration … consists in laying a phenomenon before oneself and others. This “laying out” exhibits the structure of the phenomenon, exhibits its true nature. What is laid out provides an experience for those seeing it. It carries informational certainty that causes assent. (94)

Interestingly, there seems to have been an assumption that once people hit on the appropriate procedure for gathering empirical facts about the phenomena, these facts would be sufficient to produce agreement among those who observed them. The ideal method was supposed to head off controversy. Disagreements were either a sign that you were using the wrong method, or that you were using the right method incorrectly. As Machamer describes it:

[T]he doctrines of method all held that disputes or controversies are due to ignorance. Controversies are stupid and accomplish nothing. Only those who cannot reason properly will find it necessary to dispute. Obviously, as noted, the ideal of universality and consensus contrasts starkly with the increasing number of disputes that engage these scientific entrepreneurs, and with the entrepreneurial claims of each that he alone has found the true method.

Ultimately, what stemmed the proliferation of competing methods was a professionalization of science, in which the practitioners essentially agreed to be guided by a shared method. The hope was that the method the scientific profession agreed upon would be the one that allowed scientists to harness human senses and intellect to best discover what the world is really like. Within this context, scientists might still disagree about the details of the method, but they took it that such disagreements ought to be resolved in such a way that the resulting methodology better approximated this ideal method.

The adoption of shared methodology and the efforts to minimize controversy are echoed in Bruce Bower’s [2] discussion of how the ideal of objectivity has been manifested in scientific practices. He writes:

Researchers began to standardize their instruments, clarify basic concepts, and write in an impersonal style so that their peers in other countries and even in future centuries could understand them. Enlightenment-influenced scholars thus came to regard facts no longer as malleable observations but as unbreakable nuggets of reality. Imagination represented a dangerous, wild force that substituted personal fantasies for a sober, objective grasp of nature. (361)

What the seventeenth-century natural philosophers Machamer describes were striving for is clearly recognizable to us as objectivity — both in the form of an objective method for producing knowledge and in the form of a body of knowledge that gives a reliable picture of how the world really is. The objective scientific method they sought was supposed to produce knowledge we could all agree upon and to head off controversy.

As you might imagine, the project of building reliable knowledge about the world has pushed scientists in the direction of also building experimental and observational techniques that are more standardized and require less individual judgment across observers. But an interesting side-effect of this focus on objective knowledge as a goal of science is the extent to which scientific reports can make it look like no human observers were involved in making the knowledge being reported. The passive voice of scientific papers — these procedures were performed, these results were observed — does more than just suggest that the particular individuals that performed the procedures and observed the results are interchangeable with other individuals (who, scientists trust, would, upon performing the same procedures, see the same results for themselves). The passive voice can actually erase the human labor involved in making knowledge about the world.

This seems like a dangerous move when objectivity is not an easy goal to achieve, but rather one that requires concerted teamwork along with one’s objective method.
_____________

[1] “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” in Peter Machamer, Marcello Pera, and Aristides Baltas (eds.), Scientific Controversies: Philosophical and Historical Perspectives. Oxford University Press, 2000.

[2] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The challenges of objectivity: lessons from anatomy.

In the last post, we talked about objectivity as a scientific ideal aimed at building a reliable picture of what the world is actually like. We also noted that this goal travels closely with the notion of objectivity as what anyone applying the appropriate methodology could see. But, as we saw, it takes a great deal of scientific training to learn to see what anyone could see.

The problem of how to see what is really there is not a new one for scientists. In her book The Scientific Renaissance: 1450-1630 [1], Marie Boas Hall describes how this issue presented itself to Renaissance anatomists. These anatomists endeavored to learn about the parts of the human body that could be detected with the naked eye and the help of a scalpel.

You might think that the subject matter of anatomy would be more straightforward for scientists to “see” than the cells Fred Grinnell describes [2] (discussed in the last post), which require preparation and staining and the twiddling of knobs on microscopes. However, the most straightforward route to gross anatomical knowledge — dissections of cadavers — had its own challenges. For one thing, cadavers (especially human cadavers) were often in short supply. When they were available, anatomists hardly ever performed solitary dissections of them. Rather, dissections were performed, quite literally, for an audience of scientific students, generally with a surgeon doing the cutting while a professor stood nearby and read aloud from an anatomical textbook describing the organs, muscles, or bones encountered at each stage of the dissection process. The hope was that the features described in the text would match the features being revealed by the surgeon doing the dissecting, but there were doubtless instances where the audio track (as it were) was not quite in sync with the visual. Also, as a practical matter, before the invention of refrigeration dissections were seasonal, performed in the winter rather than the warmer months to retard the cadaver’s decomposition. This put limits on how much anatomical study a person could cram into any given year.

Under these conditions, most of the scientists who studied anatomy logged many more hours watching dissections than performing dissections themselves. In other words, they were getting information about the systems of interest by seeing rather than by doing — and they weren’t always seeing those dissections from the good seats. Thus, we shouldn’t be surprised that anatomists greeted the invention of the printing press by producing a number of dissection guides and anatomy textbooks.

What’s the value of a good textbook? It shares detailed information compiled by another scientist, sometimes over the course of years of study, yet you can consume that information in a more timely fashion. If it has diagrams, it can give you a clearer view of what there is to observe (albeit through someone else’s eyes) than you may be able to get from the cheap seats at a dissection. And, if you should be so lucky as to get your own specimens for study, a good textbook can guide your examination of the new material before you, helping you deal with the specimen in a way that lets you see more of what there is to see (including spatial relations and points of attachment) rather than messing it up with sloppy dissection technique.

Among the most widely used anatomy texts in the Renaissance were “uncorrupted” translations of On the Use of the Parts and Anatomical Procedures by the ancient Greek anatomist Galen, and the groundbreaking new text On the Fabric of the Human Body (published in 1543) by Vesalius. The revival of Galen fit into a pattern of Renaissance celebration of the wisdom of the ancients rather than setting out to build “new” knowledge, and Hall describes the attitude of Renaissance anatomists toward his work as “Galen-worship.” Had Galen been alive during the Renaissance, he might well have been irritated at the extent to which his discussions of anatomy — based on dissections of animals, not human cadavers — were taken to be authoritative. Galen himself, as an advocate of empiricism, would have urged other anatomists to “dissect with a fresh eye,” attentive to what the book of nature (as written on the bodies of creatures to be dissected) could teach them.

As it turns out, this may be the kind of thing that’s easier to urge than to do. Hall asks,

[W]hat scientific apprentice has not, many times since the sixteenth century, preferred to trust the authoritative text rather than his own unskilled eye? (137)

Once again, it requires training to be able to see what there is to see. And surely someone who has written textbooks on the subject (even centuries before) has more training in how to see than does the novice leaning on the textbook.

Of course, the textbook becomes part of the training in how to see, which can, ironically, make it harder to be sure that what you are seeing is an accurate reflection of the world, not just of the expectations you bring to your observations of it.

The illustrations in the newer anatomy texts made it seem less urgent to anatomy students that they observe (or participate in) actual dissections for themselves. As the technique for mass-produced illustrations got better (especially with the shift from woodcuts to engravings), the illustrators could include much more detail in their images. Paradoxically, this could be a problem, as the illustrator was usually someone other than the scientist who wrote the book, and the author and illustrator were not always in close communication as the images were produced. Given a visual representation of what there is to observe and a description of what there is to observe in the text, which would a student trust more?

Bruce Bower discusses this sort of problem in his article “Objective Visions,” [3] describing the procedures used by Dutch anatomist Bernhard Albinus in the mid-1700s to create an image of the human skeleton. Bower writes:

Albinus carefully cleans, reassembles, and props up a complete male skeleton; checks the position of each bone in comparison with observations of an extremely skinny man hired to stand naked next to the skeleton; he calculates the exact spot at which an artist must sit to view the skeleton’s proportions accurately; and he covers engraving plates with cross-hatched grids so that images can be drawn square-by-square and thus be reproduced more reliably. (360)

Here, it sounds like Albinus is trying hard to create an image that accurately conveys what there is to see about the skeleton and its spatial relations. The methodology seems designed to make the image-creation faithful to the particulars of the actual specimen — in a word, objective. But, Bower continues:

After all that excruciating attention to detail, the eminent anatomist announces that his atlas portrays not a real skeleton, but an idealized version. Albinus has dictated alterations to the artist. The scrupulously assembled model is only a springboard for insights into a more “perfect” representation of the human skeleton, visible only to someone with Albinus’ anatomical acumen. (360)

Here, Albinus was trying to abstract away from the peculiarities of the particular skeleton he had staged as a model for observation in order to describe what he saw as the real thing. This is a decidedly Platonist move. Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms — and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Whether you’re a Platonist, an Aristotelian, or something else, it seems pretty clear that scientists do decide that some features of the systems they’re studying are crucial and others are not. They distinguish what they take to be background from what they take to be the thing they’re observing. Rather than presenting every single squiggle in their visual field, they abstract away to present the piece of the world they’re interested in talking about.

And this is where the collaboration between anatomist and illustrator gets ticklish. What happens if the engraver is abstracting away from the observed particulars differently than the anatomist would? As Hall notes, the engravings in Renaissance anatomy texts were not always accurate representations of the texts. (Nor, for that matter, did the textual descriptions always get the anatomical features right — Renaissance anatomists, Vesalius included, managed to repeat some anatomical mistakes that went back to Galen, likely because they “saw” their specimens through a lens of expectations shaped by what Galen said they were going to see.)

On top of this, the fact that artists like Leonardo Da Vinci studied anatomy to improve their artistic representations of the human form spilled back to influence Renaissance scientific illustrators. These illustrators, as much as their artist contemporaries, may have looked beyond the spatial relations between bones or muscles or internal organs for hidden beauty in their subjects. While this resulted in striking illustrations, it also meant that their engravings were not always accurate representations of the cadavers that were officially their subjects.

These factors conspired to produce visually arresting anatomy texts that exerted an influence on how the anatomy students using them understood the subject, even when these students went beyond the texts to perform their own dissections. Hall writes,

[I]t is often quite easy to “see” what a textbook or manual says should be seen. (141)

Indeed, faced with a conflict between the evidence of one’s eyes pointed at a cadaver and the evidence of one’s eyes pointed at an anatomical diagram, one might easily conclude that the cadaver in question was a weird variant while the diagram captured the “standard” configuration.

Bower’s article describes efforts scientists made to come up with visual representations that were less subjective. Bower writes:

Scientists of the 19th century rapidly adopted a new generation of devices that rendered images in an automatic fashion. For instance, the boxy contraption known as the camera obscura projected images of a specimen, such as a bone or a plant, onto a surface where a researcher could trace its form onto a piece of paper. Photography soon took over and further diminished human involvement in image-making. … Researchers explicitly equated the manual representation of items in the natural world with a moral code of self-restraint. … A blurry photograph of a star or ragged edges on a slide of tumor tissues were deemed preferable to tidy, idealized portraits. (361)

Our naïve picture of objectivity may encourage us to think that seeing is believing, and that mechanically captured images are more reliable than those rendered by the hand of a (subjective) human, but it’s important to remember that pictures — even photographs — have points of view, depend on choices made about the conditions of their creation, and can be used as arguments to support one particular way of seeing the world over another.

In the next post, we’ll look at how Seventeenth Century “natural philosophers” labored to establish a general-use method for building reliable knowledge about the world, and at how the notion of objectivity was connected to these efforts, and to the recognizable features of “the scientific method” that resulted.
_____________

[1] Marie Boas Hall, The Scientific Renaissance: 1450-1630. Dover, 1994.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

[3] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The ideal of objectivity.

In trying to figure out what ethics ought to guide scientists in their activities, we’re really asking a question about what values scientists are committed to. Arguably, something that a scientist values may not be valued as much (if at all) by the average person in that scientist’s society.

Objectivity is a value – perhaps one of the values that scientists and non-scientists most strongly associate with science. So, it’s worth thinking about how scientists understand that value, some of the challenges in meeting the ideal it sets, and some of the historical journey that was involved in objectivity becoming a central scientific value in the first place. I’ll be splitting this discussion into three posts. This post sets the stage and considers how modern scientific practitioners describe objectivity. The next post will look at objectivity (and its challenges) in the context of work being done by Renaissance anatomists. The third post will examine how the notion of objectivity was connected to the efforts of Seventeenth Century “natural philosophers” to establish a method for building reliable knowledge about the world.

First, what do we mean by objectivity?

In everyday discussions of ethics, being objective usually means applying the rules fairly and treating everyone the same rather than showing favoritism to one party or another. Is this what scientists have in mind when they voice their commitment to objectivity? Perhaps in part. It could be connected to applying “the rules” of science (i.e., the scientific method) fairly and not letting bias creep into the production of scientific knowledge.

This seems close to the characterization of good scientific practice that we see in the National Academy of Sciences and National Research Council document, “The Nature of Science.” [1] This document describes science as an activity in which hypotheses undergo rigorous tests, whereby researchers compare the predictions of the hypotheses to verifiable facts determined by observation and experiment, and findings and corrections are announced in refereed scientific publications. It states, “Although [science’s] goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths.” (38)

Note that rigorous tests, verification of facts (or sharing the information necessary to verify them), correction of mistakes, and reliable reports of findings all depend on honesty – you can’t perform these activities by making up your results, or presenting them in a deceptive way, for example. So being objective in the sense of following good scientific methodology requires a commitment not to mislead.

But here, in “The Nature of Science,” we see hints that there are two closely related, yet distinct, meanings of “objective”. One is what anyone applying the appropriate methodology could see. The other is a picture of what the world is really like. Getting a true picture of the world (or aiming for such a picture) means seeking objectivity in the second sense – finding the true facts. Seeking out the observational data that other scientists could verify – the first sense of objectivity – is closely tied to the experimental method scientists use and their strategies for reporting their results. Presumably, applying objective methodology would be a good strategy for generating an accurate (and thus objective) picture of the world.

But we should note a tension here that’s at least as old as the tension between Plato and his student Aristotle. What exactly are the facts about the world that anyone could see? Are sense organs like eyes all we need to see them? If such facts really exist, are they enough to help us build a true picture of the world?

In the chapter “Making Observations” from his book The Scientific Attitude [2], Fred Grinnell discusses some of the challenges of seeing what there is to see. He argues that, especially in the realms science tries to probe, seeing what’s out there is not automatic. Rather, we have to learn to see the facts that are there for anyone to observe.

Grinnell describes the difficulty students have seeing cells under a light microscope, a difficulty that persists even after students work out how to use the microscope to adjust the focus. He writes:

The students’ inability to see the cells was not a technical problem. There can be technical problems, of course – as when one takes an unstained tissue section and places it under a microscope. Under these conditions it is possible to tell that something is “there,” but not precisely what. As discussed in any histology textbook, the reason is that there are few visual features of unstained tissue sections that our eyes can discriminate. As the students were studying stained specimens, however, sufficient details of the field were observable that could have permitted them to distinguish among different cells and between cells and the noncellular elements of the tissue. Thus, for these students, the cells were visible but unseen. (10-11)

Grinnell’s example suggests that seeing cells, for example, requires more than putting your eye to the eyepiece of a microscope focused on a stained sample of cells. Rather, you need to be able to recognize those bits of your visual field as belonging to a particular kind of object – and, you may even need to have something like the concept of a cell to be able to identify what you are seeing as cells. At the very least, this suggests that we should amend our gloss of objective as “what anyone could see” to something more like “what anyone could see given a particular conceptual background and some training with the necessary scientific measuring devices.”

But Grinnell makes even this seem too optimistic. He notes that “seeing things one way means not seeing them another way,” which implies that there are multiple ways to interpret any given piece of the world toward which we point our sense organs. Moreover, he argues,

Each person’s previous experiences will have led to the development of particular concepts of things, which will influence what objects can be seen and what they will appear to be. As a consequence, it is not unusual for two investigators to disagree about their observations if the investigators are looking at the data according to different conceptual frameworks. Resolution of such conflicts requires that the investigators clarify for each other the concepts that they have in mind. (15)

In other words, scientists may need to share a bundle of background assumptions about the world to look at a particular piece of that world and agree on what they see. Much more is involved in seeing “what anyone can see” than meets the eye.

We’ll say more about this challenge in the next post, when we look at how Renaissance anatomists tried to build (and communicate) objective knowledge about the human body.
_____________

[1] “The Nature of Science,” in Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: The National Academies Press, 1992.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

Ada Lovelace Day book review: Maria Mitchell and the Sexing of Science.

Today is Ada Lovelace Day. Last year, I shared my reflections on Ada herself. This year, I’d like to celebrate the day by pointing you to a book about another pioneering woman of science, Maria Mitchell.

Maria Mitchell and the Sexing of Science: An Astronomer among the American Romantics
by Renée Bergland
Boston: Beacon Press
2008

What is it like to be a woman scientist? In a society where being a woman is somehow a distinct experience from being an ordinary human being, the answer to this question can be complicated. And, in a time and place where being a scientist, being a professional — indeed, even being American — was still something that was very much under construction, the complexities of the answer can add up to a biography of that time, that place, that swirl of intellectual and cultural ferment, as well as of that woman scientist.

The astronomer Maria Mitchell was not only a pioneering woman scientist in the early history of the United States, but she was one of the nation’s first professional scientists. Renée Bergland’s biography of Mitchell illuminates a confluence of circumstances that made it possible for Mitchell to make her scientific contributions — to be a scientist at all. At the same time, it tracks a retrograde cultural swing of which Mitchell herself was aware: a loss, during Mitchell’s lifetime, of educational and career opportunities for women in the sciences.

Maria Mitchell was the daughter of two people who were passionate about learning, and about each other. Her mother, Lydia Coleman Mitchell, worked at both of Nantucket’s lending libraries in order to avail herself of their collections. Her father, William Mitchell, turned down a spot as a student at Harvard — which Lydia, as a woman, was barred from attending — to stay on Nantucket and make a life with Lydia. Maria was born in 1818, the third child of ten (nine of whom survived to adulthood) in a family that nurtured its daughters as well as its sons and where a near constant scarcity of resources prompted both hard work and ingenuity.

William Mitchell was one of the Nantucket men who didn’t go to sea on a whaling ship, working instead on the island in a variety of capacities, including astronomer. His astronomical knowledge was welcomed by the community in public lectures (since youth who planned to go to sea would benefit from an understanding of astronomy if they wanted to be able to navigate by the stars), and he used his expertise to calibrate the chronometers ship captains used to track their longitude while at sea.

Since he was not off at sea, William was there with Lydia overseeing the education of the Mitchell children, much of it taking place in the Mitchell home. Nantucket did not establish a public school until 1827; when it did, its first principal was William Mitchell. Maria attended the public school for the few years her father was principal, then followed him to the private school he founded on the island. William’s astronomical work, conducted at home, was part of Maria’s education, and by the time she was 11 years old, she was acting as his assistant in the work. Since Maria’s mathematical abilities and training (most of it self-taught) soon exceeded her father’s, this was a beneficial relationship on both sides.

Maria herself did some teaching of the island’s children. Later she ran the Nantucket Atheneum, a cross between a community library and a center of culture. All the while, she continued to assist her father with astronomical observations and provided the computational power that drove their collaboration. She made nightly use of the rooftop observatory at the Pacific Bank (where the Mitchell family lived when William took a post there), and one evening in 1847, Maria’s sweeps of the heavens with her telescope revealed a streak in the sky that she recognized as a new comet.

The announcement of the comet beyond the Mitchell family gives us a glimpse into just what was at stake in such a discovery. Maria herself was inclined towards modesty, some might argue pathologically so. William, however, insisted that the news must be shared, and contacted the astronomers at Harvard he knew owing to his own work. As Bergland describes it:

When Mitchell discovered the comet and her father reported it to the Bonds at Harvard [William Bond was the director of the Harvard Observatory, his son George his assistant], the college president at the time, Edward Everett, saw an opening: Mitchell was a remarkably appealing woman whose talent and modesty were equally indisputable. She could never be accused of being a status seeker. But if Everett could convince the Danish government [which was offering a medal to the discoverer of a new comet] that reporting her discovery to the Harvard Observatory was the equivalent of reporting the discovery to the British Royal Observatory or the Danish Royal Observatory, the Harvard Observatory would gain the status of an international astronomical authority.

Maria was something of a pawn here. She was proud of her discovery, but her intense shyness made her reluctant to publicize it. Yet that shyness was exactly what made her so useful to President Everett. Her friend George Bond had also discovered comets, but he’d been unsuccessful at arguing on his own behalf against the authorities of Europe. Since Bond was directly affiliated with the Harvard College Observatory, Harvard’s hands were tied; Everett had never even tried to defend Bond’s claims. But by framing Mitchell as something of a damsel in distress, Everett could bring his diplomatic skills to bear to establish the precedent that Harvard’s observatory was as reliable as the British Royal Observatory at Greenwich. (p. 67)

There was more than just a (potential) scientific priority battle here (as other astronomers had observed this comet within a few days of Maria Mitchell’s observation of it); there was also a battle for institutional credibility for Harvard and for international credibility for the United States as a nation that could produce both important science and serious scientists. Thus, “Miss Mitchell’s Comet” took on a larger significance. While Harvard at the time would have had no use for a woman student, nor for a woman professor, the college found it useful to recognize Maria Mitchell as a legitimate astronomer, since doing so advanced its broader interests.

Maria Mitchell’s claim to priority for the comet (one that turned out to have an unusual orbit that was tricky to calculate) was recognized. Besides the Danish medal, this recognition got her a job. In 1849, she was hired by the United States Nautical Almanac as the “computer of Venus”, making her one of the country’s very first professional astronomers.

Her fame as an astronomer also opened doors for her (including doors to observatories) as she left Nantucket in 1857 to tour Europe. The trip was one she hoped would give her a good sense of where scientific research was headed. As it turned out, it also gave her a sense of herself as an American, a scientist, and a woman moving in a very male milieu. Maria Mitchell was horrified to encounter neglected telescopes and rules that banned women from even setting foot within certain university facilities. She rubbed shoulders with famous scientists, including Charles Babbage and Mary Somerville, the woman William Whewell invented the word “scientist” to describe:

When Whewell groped for words and finally coined “scientist” to describe her, the issue was not primarily gender, but rather the newness of Somerville’s endeavor — her attempt to connect all the physical sciences to one another. …

Another, even more important reason that Whewell … felt the need for a new term was that a new professional identity was developing. Those who studied the material world were beginning to distinguish themselves from philosophers, whose provinces were more metaphysical than physical. But the first steps of this separation had been quite insulated from each other: chemists, mathematicians, astronomers, and the soon-to-be-named physicists did not necessarily see themselves as sharing an identity or as working at a common endeavor. Somerville’s treatise On the Connexion of the Physical Sciences was instrumental in showing the various investigators that their work was connected — they were all practitioners of science.

Although the development of the word “scientist” related more to the philosophical point (argued by Somerville) that the sciences could be unified than it did to gender, “scientist” did gradually replace the older formulation, “man of science.” Gender also entered in, Whewell thought, because as a woman, Somerville was better equipped to see connection than a man. … Whewell argued that Somerville’s womanly perspective enhanced rather than obscured her vision. (pp. 146-147)

In Somerville, Mitchell found a woman who was a fellow pioneer on something of a new frontier in terms of how doing science was perceived. Though the time Mitchell spent with Somerville was brief, the relationship involved real mentoring:

Somerville talked to her about substantive scientific questions as none of the British scientists had done; Mitchell first learned about the works of the physicist James Prescott Joule in Florence [where she met Somerville], despite having spent months in scientific circles in England, where Joule lived and worked. Somerville took Mitchell seriously as an intellect, and wanted to share her wide-ranging knowledge and encourage Mitchell in her own endeavors. She made her affection for Mitchell clear, and she offered the support and encouragement the younger scientist needed. Best of all, Mitchell liked her. She was charming and kind, someone for Mitchell to emulate in every way. (p. 151)

Somerville was not just a role model for Mitchell. The reciprocal nature of their relationship made her a true mentor for Mitchell, someone whose faith in Mitchell’s capabilities helped Mitchell herself to understand what she might accomplish. This relationship launched Mitchell towards greater engagement with the public when she returned to the U.S.

Maria Mitchell broke more ground when she was hired by the newly formed Vassar College (a women’s college) as a professor of astronomy. Although she was first interviewed for the position in 1862, the trustees were locked in debate over whether a woman could properly be a professor at the college, and Mitchell was not actually appointed until 1865. Her appointment included an observatory where Mitchell conducted research, taught, and lived. At Vassar, she broke with the authoritarian, lecture-style instruction common in other departments. Instead, she engaged her students in hands-on, active learning, challenged them to challenge her, and involved them in astronomical research. And, when it became clear that there was not enough time in a day to fully meet the competing demands of teaching and research (plus other professional duties and her duties to her family), Mitchell recorded a resolution in her notebook:

RESOLVED: In case of my outliving father and being in good health, to give my efforts to the intellectual culture of women, without regard to salary. (p. 203)

Such a commitment was vital to Maria Mitchell, especially as, during her time at Vassar, she was aware of a societal shift that was narrowing opportunities for women to participate in the sciences or in intellectual pursuits, in the realms of both education and professions. Pioneer though she was, she saw her female students being offered less by the world than she was, and it made her sad and angry.

Renée Bergland’s biography of Maria Mitchell lays out the complexities at work in Mitchell’s family environment, in the culturally rich yet geographically isolated Nantucket island, in the young United States, and in the broader international community of scientific thinkers and researchers. The factors that play a role in a person’s educational and intellectual trajectory are fascinating to me, in part because so many of them seem like they’re just a matter of chance. How important was it to Maria Mitchell’s success that she grew up in Nantucket, when she did, with the parents that she had? If she had grown up in Ohio or Europe, if she had been born a few decades earlier or later, if her parents had been less enthusiastic about education, is there any way she would have become an astronomer? How much of the early recognition of Mitchell’s work was connected to the struggle of the U.S. as a relatively new country to establish itself in the international community of science? (Does it even make sense to think of an international community of science in the mid-nineteenth century? Was it less about having American scientists accepted into such a community and more about national bragging rights? What might be the current state of the U.S. scientifically if other opportunities to establish national prowess had been pursued instead?)

Especially gripping are the questions about the proper role of females in scientific pursuits, and how what was “proper” seemed contingent upon external factors, including the availability (or not) of men for scientific labors during the American Civil War. I was surprised, reading this book, to discover that science and mathematics were considered more appropriate pursuits for girls (while philosophy and classical languages were better suited to boys) when Maria Mitchell was young. (How, in light of this history, do so many people get away with insinuating that females lack the intrinsic aptitude for science and math?) The stereotype in Mitchell’s youth that sciences were appropriate pursuits for girls seems to have been based on a certain kind of essentialism about what girls are like, as well as what I would identify as a misunderstanding about how the sciences operate and what kind of picture of the world they can be counted on to deliver. Mitchell, as much as anyone, seemed to be pushing her astronomical researches in a direction very different from the “safe” science people expected — yet in her writings, she also made claims about women that could be read as essentialist. It’s hard to know whether these were rhetorical moves, or whether Mitchell really bought into there being deep, fundamental differences between the sexes. This makes her story more complicated — and more compelling — than a straightforward narrative of a heroic scientist and professor battling injustice.

Indeed, there are moments here where I wanted to grab Maria Mitchell by the shoulders and shake her, as when she negotiated a lower salary for herself at Vassar than she was offered, even though she foresaw that it would lead to unfairly low salaries for the women faculty who followed her. Was her rejection of the higher salary just a matter of being honest to a fault about her limited teaching experience and her wavering self-confidence? Was she instead worried that accepting the higher salary might give the trustees an excuse not to take on the college’s first woman professor? Was opening the doors to other women in the professoriate a more pressing duty than ensuring they would get the same respect — or at least, the same pay — as their male counterparts?

Given the seriousness with which Mitchell approached the task of increasing educational and professional opportunities for women, I can’t help wondering how many of her choices were driven by a sense of duty. On balance, did Mitchell live the life she wanted to live, or the life she thought she ought to live to make things better? (Would she have drawn such a distinction herself?)

Some of these questions are connected to the various other strands of this rich biography. For example, Bergland does quite a lot to explore Maria Mitchell’s Quaker background, her own inclination to part company with the Society of Friends on certain matters of religious belief, the influence of her cultural Quakerism on and off Nantucket island, and even how her plain Quaker dress made her an exotic figure and an object of curiosity during her travels through Europe at a time when the U.S. was arguably a developing country.

Bergland’s book is a captivating read that will be of interest to anyone curious about the development of educational institutions and professional communities, about the ways political and societal forces pull at the life of the mind, or about the ways people come to steer their interactions in many different circles to achieve what they think must be achieved.

An earlier version of this review was published here.

* * * * *

Want to help kids in a high poverty high school get outside and really experience astronomy? Please consider supporting “Keep Looking Up”, a DonorsChoose project aimed at purchasing a telescope for a brand new astronomy class in Chouteau, OK. Even a few dollars can make a difference.

Book review: The Radioactive Boy Scout.

When my three younger siblings and I were growing up, our parents had a habit of muttering, “A little knowledge is a dangerous thing.” The muttering that followed that aphorism usually had to do with the danger coming from the “little” amount of knowledge rather than from a more comprehensive understanding of whatever field of endeavor was playing host to the hare-brained scheme of the hour. Now, as a parent myself, I suspect that another source of danger was the asymmetric distribution of knowledge among the interested parties: while our parents may have had knowledge of the potential hazards of various activities, knowledge that we kids lacked, they didn’t always have detailed knowledge of what exactly we kids were up to. It may take a village to raise a child, but it can take less than an hour for a determined child to scorch the hell out of a card table with a chemistry kit. (For the record, the determined child in question was not me.)

The question of knowledge — and of gaps in knowledge — is a central theme in The Radioactive Boy Scout: The Frightening True Story of a Whiz Kid and His Homemade Nuclear Reactor by Ken Silverstein. Silverstein relates the story of David Hahn, a Michigan teen in the early 1990s who, largely free of adult guidance or supervision, worked tirelessly to build a breeder reactor in his back yard. At times this feels like a tale of youthful determination to reach a goal, a story of a self-motivated kid immersing himself in self-directed learning and doing an impressive job of identifying the resources he required. However, this is also a story about how, in the quest to achieve that goal, safety considerations can pretty much disappear.

David Hahn’s source of inspiration — not to mention his guide to many of the experimental techniques he used — was The Golden Book of Chemistry Experiments. Published in 1960, the text by Robert Brent conveys an almost ruthlessly optimistic view of the benefits chemistry and chemical experimentation can bring, whether to the individual or to humanity as a whole. Part of this optimism is an attitude towards potential hazards and chemical safety that appears alarmingly cavalier to modern eyes. If anything, the illustrations by Harry Lazarus downplay the risks even more than does the text — across 112 pages, the only pictured items remotely resembling safety apparatus are lab coats and a protective mask for an astronaut.

You might imagine that leaving safety considerations in the subtext, or omitting them altogether, could be a problem, especially when coupled with the typical teenager’s baseline assumption of invulnerability. In the case of a teenager teaching himself chemistry from the book, relying on it almost as a bible of the concepts, history, and experimental techniques a serious chemist ought to know, the lack of focus on potential harms might well have suggested that there was no potential for harm — or at any rate that the harm would be minor compared to the benefits of mastery. David Hahn seems to have maintained this belief despite a series of mishaps that made him a regular at his local emergency room.

Ah, youth.

Here, though, The Radioactive Boy Scout reminds us that young David Hahn was not the only party operating with far too little knowledge. Silverstein’s book expands on his earlier Harper’s article on the incident with chapters that convey just how widespread our ignorance of radioactive hazards has been for most of the history of our scientific, commercial, and societal engagement with radioactivity. At nearly every turn in this history, potential benefits have been extolled (with radium elixirs sold in the early 1900s to lower blood pressure, ease arthritis pain, and produce “sexual rejuvenescence”) and risks denied, sometimes until the body count was so large and the legal damages were so high that they could no longer be denied.

Surely part of the problem here is that the hazards of radioactivity are less immediately obvious than those of corrosive or explosive chemicals. The charred table is directly observable in a way that damage to one’s body from exposure to radioisotopes is not (partly because the table doesn’t have an immune system that kicks in to try to counter the damage). But the invisibility of these risks was also reinforced when manufacturers who used radioactive materials proclaimed their safety for both the end-user of consumer products and the workers making those products, and when the nuclear energy industry throttled the information the public got about mishaps at various nuclear reactors.

Possibly some of David Hahn’s teachers could have given him a more accurate view of the kinds of hazards he would encounter in trying to build a back yard breeder reactor … but the teen didn’t seem to feel like he could get solid mentoring from any of them, and didn’t let them in on his plans in any detail. The guidance he got from the Boy Scouts came in the form of an atomic energy merit badge pamphlet authored by the Atomic Energy Commission, a group created to promote atomic energy, and thus one unlikely to foreground the risks. (To be fair, this merit badge pamphlet did not anticipate that scouts working on the badge would actually take it upon themselves to build breeder reactors.) Presumably some of the scientists with whom David Hahn corresponded to request materials and advice on reactions would have emphasized the risks of his activities had they realized that they were corresponding with a high school student undertaking experiments in his back yard rather than with a science teacher trying to get clear on conceptual issues.

Each of these gaps in knowledge combined in such a way that David Hahn got remarkably close to his goal. He did an impressive job isolating radioactive materials from consumer products, performing chemical reactions to put them in suitable form for a breeder reactor, and assembling the pieces that might have initiated a chain reaction. He also succeeded in turning the back yard shed in which he conducted his work into a Superfund site. (According to Silverstein, the official EPA clean-up missed materials that his father and step-mother found hidden in their house and discarded in their household trash — which means that both the EPA and the people living near the local landfill where the radioactive materials ended up had significant gaps in their knowledge about the hazards David Hahn introduced to the environment.)

The Radioactive Boy Scout manages to be at once an engaging walk through a challenging set of scientific problems and a chilling look at what can happen when scientific problems are stripped of their real-life context: potential impacts, for good and for ill, that stretch across time and space and touch people who aren’t even aware of the scientific work being undertaken. It is a book I suspect my 13-year-old would enjoy very much.

I’m just not sure I’m ready to give it to her.

Ada Lovelace and the Luddites.

Today is Ada Lovelace Day.

If you are not a regular reader of my other blog, you may not know that I am a tremendous Luddite. I prefer hand-drawn histograms and flowcharts to anything I can make with a graphics program. I prefer LPs to CDs. (What’s an LP? Ask your grandparents.) I find it soothing to use log tables (and I know how to interpolate). I’d rather use a spiral-bound book of street maps than Google to find my way around.
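(If you have never worked with a log table: interpolation here just means linear interpolation, estimating a logarithm that falls between two adjacent table entries by assuming the function is close to a straight line over that small interval. Here is a minimal sketch of the arithmetic, expressed, a bit ironically, in Python; the four-place table values are standard, but the little helper function is purely illustrative.)

```python
# A minimal sketch of interpolating from a four-place base-10 log table.
# The table entries below (log 2.3 and log 2.4) are standard four-place
# values; the helper function itself is just an illustration.

def interpolate(x, x_lo, y_lo, x_hi, y_hi):
    """Estimate y at x by assuming y is roughly linear between the two table entries."""
    fraction = (x - x_lo) / (x_hi - x_lo)
    return y_lo + fraction * (y_hi - y_lo)

# Estimate log10(2.34) from the entries for 2.3 and 2.4:
estimate = interpolate(2.34, x_lo=2.3, y_lo=0.3617, x_hi=2.4, y_hi=0.3802)
print(round(estimate, 4))  # 0.3691; the tabulated value for 2.34 is 0.3692
```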

Obviously, my status as a Luddite should not be taken to mean I am against all technological advances across the board (as here I am, typing on a computer, preparing a post that will be published using blogging software on the internet). Rather, I am suspicious of technological advances that seem to arise without much thought about how they influence the experience of the humans interacting with them, and of “improvements” that would require me to sink a bunch of time into learning new commands or operating instructions while producing at best a marginal improvement over the outcome I get from the technology I already know.

That is to say, my own inclination is to view technologies not as ends in themselves but as tools which, depending on how they are deployed, can enhance our lives or can make them harder.

The original Luddites were part of a workers’ movement in England in the early 19th century. The technologies these Luddites were against included the mechanical knitting machines and looms that shifted textile production from the hands of skilled knitters and weavers to a relatively unskilled labor force tending the machines. In the current economic climate, it’s not too hard to see what the Luddites were worried about: even if the Industrial Revolution technologies didn’t result in an overall decrease in jobs (since you’d need workers to tend the machines), there would be no reason to assume that the owners of textile factories would be interested in retraining the skilled knitters and weavers they displaced to be machine-tenders. And net stability (or even an increase) in the number of jobs can be cold comfort when your job goes away.
