CD review: Baba Brinkman, “The Rap Guide to Evolution: Revised”


Baba Brinkman
“The Rap Guide to Evolution: Revised”
Lit Fuse Records, 2011

This is an album that is, in its way, one long argument (in 14 tracks) that the theory of evolution is a useful lens through which to make sense of our world and our lives. In making this argument, Brinkman also plays with standard conventions within the rap genre, pointing to predecessors and influences (not only rappers but also the original Chuck D), calling out enemies, bragging about his rapping prowess, and centering himself as an illustrative example of the processes he’s describing. There is also a healthy dose of swearing (as befits the genre). The ordering of the tracks is clearly thematic, with a substantial stretch near the middle of the album focused on sexual selection. Most of the tracks hold up well enough that you could listen to the album on shuffle, but I recommend listening to the whole thing in order first to get the fullest impact.

The first track, “Natural Selection 2.0,” opens by taking aim at people who can’t or won’t wrap their heads around the explanatory power of Darwin’s theory of evolution. Brinkman specifically targets creationists and other “Darwin-haters” for scorn, but his focus is less on their bad arguments than on their resistance to evolutionary biology’s good ones.

Track 2, “Black-eyed Peas,” borrows a strategy from Origin of Species and connects natural selection with the principles of domestication. Here, Brinkman includes not just cattle and peaches and black-eyed peas, but also artists struggling for survival within the music industry (including Black-Eyed Peas), and the chorus features a Fugees sample that rewards listeners of a certain age for surviving as long as they have.

Track 3, the catchy-as-Hell “I’m A African 2.0,” flips an Afrocentric anthem into a celebration of the common origins of all humanity. The verses also gesture toward the ways archaeologists, anthropologists, and geneticists take different angles on, and produce different kinds of evidence about, the same natural processes.

In track 4, “Creationist Cousins 2.0,” Brinkman offers a description of dinner-table debates about evolutionary theory that is really a song about the strategy of engagement (with hypotheses, empirical data, and objections) central to scientific knowledge-building. It’s also a song that reflects Brinkman’s faith that rational argumentation from evidence we can agree upon should ultimately lead us to shared conclusions. The reality of dialogic exchanges (and of scientific knowledge-building) is more complicated, but it’s hard to fully do justice to any real practice you’re trying to describe in a four-minute song.

Track 5, “Survival of the Fittest 2.0,” starts with a shout-out to a bunch of evolutionary psychologists and then takes up the question of how to understand violent behavior and what might be construed as “poor life choices” in the environment of American inner cities. Brinkman pushes the gangsta rap genre’s description of harsh living conditions further by examining whether thug life might embody rational reproductive and survival strategies, all the while pointing us toward the possibility of addressing the economic and social inequalities in the environment that make these behaviors adaptive.

Track 6, “Group Selection 2.0,” simultaneously calls out Social Darwinism as unscientific (“Just because something exists in a state of nature/Doesn’t give it a moral basis, that’s a false correlation”) and explores the value of altruistic behavior. Here, Brinkman explicitly voices openness to group selection as a real evolutionary mechanism (“Some people say group selectionism is false/But I say let the evidence call it”).

Track 7, “Worst Comes to Worst 2.0,” continues the exploration of how much environment matters to what kinds of traits or behaviors are adaptive or maladaptive. Brinkman notes that Homo sapiens are apex predators who have a choice about whether to maintain environments in which violence against other humans works as an adaptive strategy. Since violence isn’t something to which our genes condemn us, he holds open the possibility that we could remake our environment to favor human behavior as “peaceful as Galapagos finches”.

Track 8, “Dr. Tatiana,” is an ode to the multifarious ways in which members of the animal kingdom knock boots (and a shout-out to the author noted for documenting them), as well as the track on the album least likely to be approved as a prom theme (although the decorating committee could have a lot of fun with it). It makes a compelling musical setting for examining the environments and intraspecies competitions in which particular intriguing mating practices might make sense.

Track 9, “Sexual Selection 2.0,” considers the hypothesis that complex language in general, and Baba Brinkman’s aptitude for rhyming in particular, might have evolved to help win the competition for mates. Brinkman’s hip hop flow is enticing, but in this song it exposes his adaptationist assumption that all the traits that have persisted in our population got there because they were selected for: to help us evade predators, combat parasites, or get laid. What would Stephen Jay Gould say?

Track 10, “Hypnotize 2.0,” continues in the theme of sexual selection, exploring secondary sexual characteristics (including, perhaps, mad rhyming skills) as adaptive traits:

So now this whole rap thing seems awfully strange
Talkin’ ‘bout, “He got game, and he’s not real
And he’s got chains” but wait, that’s a peacock’s tail!
‘Cause you never hear them say they got it cheap on sale
Which means that bling is meant to represent
How much they really spent, and at the end of the day
That’s the definition of a “fitness display”

Like a bowerbird’s nest, which takes hours of work
And makes the females catch a powerful urge
Just like a style of verse or an amazing flow
But it takes dedication and it takes a toll
‘Cause the best displays are unfakeable

The lyrics here suggest, without exploring it in depth, that mimetic posers in the population may complicate the matter of mate selection.

Track 11, “Used To Be The Man,” fits nicely in the neighborhood of hip hop songs expressing young men’s anxiety and nostalgia for a world where they feel more at home. The lyrics note that we may be dragging around traits (like impressive upper body strength) that are no longer so adaptive, especially in rapidly changing social environments. Here, Brinkman gives eloquent voice to pain without committing a fallacious appeal to nature.

Track 12, “Don’t Sleep With Mean People,” is an up-tempo exhortation to take positive action to improve the gene pool. Here, you might worry that Brinkman hasn’t first established meanness as a heritable trait. However, even doubters that being a jerk has a genetic basis (and I am one) may be persuaded by the infectious chorus that a social penalty for being a jerk could improve behavior, if not the human genome.

Track 13, “Performance, Feedback, Revision 2.0,” suggests the ubiquity and usefulness of processes similar to natural selection in other parts of our lives. The album version (2.0) differs from the original (which you can find here) in instrumentation, precise lyrics, and overall feel. Noticing this, a dozen tracks into the album, made this listener consider whether the song functions like a genotype, with the particular performance of the song as the phenotypic expression in a particular environment.

In the last track of the album, “Darwin’s Acid 2.0,” Brinkman explores what the world of nature and of human experience looks like if you embrace the theory of evolution. The vision he weaves is of a world that is not grim or nihilistic, but intelligible and hopeful, where it is our responsibility to make good.

“The Rap Guide to Evolution: Revised” is — to me, anyway — a compelling rap album, with its balanced mix of tracks featuring flashy dextrous delivery, slower jams, and shout-along anthems. It’s worth noting, of course, that while I haven’t yet hit the post-menopausal granny demographic that Brinkman identifies (in “Sexual Selection 2.0”) as central to his existing fan base, my CD shelf is mostly stuck in the 20th Century, with Run DMC, Salt-N-Pepa, Beastie Boys, De La Soul, and Arrested Development — the band, not the show — as my rap touchstones. However, these tracks also find favor with my decidedly 21st Century offspring, whose appreciation of the scientific content and clever wordplay would not have been granted if they didn’t like the music. (Note to Mr. Brinkman: My daughters are now more likely to seek out a Baba Brinkman show than a gangsta rap show, but they will be restricting their efforts in propagating your lyrical dexterity — is that what the kids are calling it nowadays? — to Tumblr and the Twitterverse, at least while they’re living under my roof.)

While some (including The New Yorker) have compared Mr. Brinkman to Eminem in his vocal delivery, to my ear he is warmer and more melodic. As an unapologetic Richard Dawkins fanboy, he sometimes comes across like a hardcore adaptationist (rapping about bodies as mere machines for spreading our genes), but he also takes group selection seriously (as in track 6). Perhaps future work will give rise to a levels-of-selection rap battle between partisans of group selection, individual selection, and gene-level selection.

Baba Brinkman’s professed admiration for the work of evolutionary psychologists doesn’t manifest itself in this album in defenses of results based on blatantly bad methodology (at least as far as I can tell). “Creationist Cousins 2.0” does, however, include a swipe at a “gender feminist sister” — gender feminist being, of course, a label originated by a hater (and haters gonna hate). It’s not clear that any of this warrants an answer song, but if it did, I would be rooting for Kate Clancy, DNLee, and the appropriate counterpart of DJ Spinderella to deliver the response.

What’s notable in “The Rap Guide to Evolution: Revised” besides Baba Brinkman’s lyrical mastery is how exquisitely attentive he is to the importance of environment — not just its variability, but also the extent to which humans may be able to change our social, economic, and political environment to make traits we like bumping up against in the world more adaptive. Given that much visceral resistance to evolutionary theory seems grounded in a worry that it reduces humans to helpless cogs in a mechanism, or robots programmed to do the bidding of their genes, this reminder that environment can be every bit as much a moving part in the system as genes is a good one. The reality-that-could-be Brinkman offers here is fiercely optimistic:

In each of these cases, our intentional efforts
Can play the part of environmental pressures
I can say: “This is a space where a peaceful existence
Will never be threatened by needless aggression”
I can say: “This is an ecosystem where people listen
Where justice increases over egotism
This is a space where religions achieve co-existence
And racism decreases with each coalition”

As Darwin wrote, and Brinkman agrees, there is a grandeur in this view of life.

UPDATE:
Via Twitter, I’ve been reminded to point out that the album is a collaboration between Baba Brinkman and DJ and music producer Mr. Simmonds, “who is as responsible for the sound as [Baba Brinkman is] for the ideas”.

* * * * *
Baba Brinkman’s website

Videos of ancestral versions of the songs, produced with funding from the Wellcome Trust

The ethics of naming and shaming.

Lately I’ve been pondering the practice of responding to bad behavior by calling public attention to it.

The most recent impetus for my thinking about it was this tech blogger’s response to behavior that felt unwelcoming at a conference (behavior that seems, in fact, to have run afoul of that conference’s official written policies)*, but there are plenty of other examples one might find of “naming and shaming”: the discussion (on blogs and in other media outlets) of University of Chicago neuroscientist Dario Maestripieri’s comments about female attendees of the Society for Neuroscience meeting, the Office of Research Integrity’s posting of findings of scientific misconduct investigations, the occasional instructor who promises to publicly shame students who cheat in his class, and actually follows through on the promise.

There are many forms “naming-and-shaming” might take, and many types of behavior one might identify as problematic enough that they ought to be pointed out and attended to. But there seems to be a general worry that naming-and-shaming is an unethical tactic. Here, I want to explore that worry.

Presumably, the point of responding to bad behavior is that it’s bad — causing harm to individuals or a community (or both), undermining progress on a project or goal, and so forth. Responding to bad behavior can be useful if it stops bad behavior in progress and/or keeps similarly bad behavior from happening in the future. A response can also be useful in calling attention to the harm the behavior does (i.e., in making clear what’s bad about the behavior). And, depending on the response, it can affirm the commitment of individuals or communities that the behavior in question actually is bad, and that the individuals or communities see themselves as having a real stake in reducing it.

Rules, professional codes, conference harassment policies — these are some ways to specify at the outset what behaviors are not acceptable in the context of the meeting, game, work environment, or disciplinary pursuit. There are plenty of contexts, too, where there is no written-and-posted official enumeration of every type of unacceptable behavior. Sometimes communities make judgments on the fly about particular kinds of behavior. Sometimes, members of communities are not in agreement about these judgments, which might result in a thoughtful conversation within the community to try to come to some agreement, or the emergence of a rift that leads people to realize that the community was not as united as they once thought, or ruling on the “actual” badness or acceptability of the behavior by those within the community who can marshal the power to make such a ruling.

Sharing a world with people who are not you is complicated, after all.

Still, I hope we can agree that there are some behaviors that count as bad behaviors. Assuming we had an unambiguous example of someone engaging in such a behavior, should we respond? How should we respond? Do we have a duty to respond?

I frequently hear people declare that one should respond to bad behavior, but that one should do so privately. The idea here seems to be that letting the bad actor know that the behavior in question was bad, and should be stopped, is enough to ensure that it will be stopped — and that the bad behavior must be a reflection of a gap in the bad actor’s understanding.

If knowing that a behavior is bad (or against the rules) were enough to ensure that those with the relevant knowledge never engage in the behavior, though, it becomes difficult to explain the highly educated researchers who get caught fabricating or falsifying data or images, the legions of undergraduates who commit plagiarism despite detailed instructions on proper citation methods, the politicians who lie. If knowledge that a certain kind of behavior is unacceptable is not sufficient to prevent that behavior, responding effectively to bad behavior must involve more than telling the perpetrator of that behavior, “What you’re doing is bad. Stop it.”

This is where penalties may be helpful in responding to bad behavior — get benched for the rest of the game, or fail the class, or get ejected from the conference, or become ineligible for funding for this many years. A penalty can convey that bad behavior is harmful enough to the endeavor or the community that its perpetrator needs a “time-out”.

Sometimes the application of penalties needs to be private (e.g., when a law like the Family Educational Rights and Privacy Act makes it illegal to apply the penalty publicly). But there are dangers in dealing with bad behavior only privately.

When fabrication, falsification, and plagiarism are “dealt with” privately, it can make it hard for a scientific community to identify papers in the scientific literature that they shouldn’t trust or researchers who might be prone to slipping back into fabricating, falsifying, or plagiarizing if they think no one is watching. (It is worth noting that large ethical lapses are frequently part of an escalating pattern that started with smaller ethical infractions.)

Worse, if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out. Keeping penalties (if any) under wraps can convey the message that the bad behavior is actually tolerated, that official policies against it are empty words.

And sometimes the people within an organization or community with the power to impose penalties on bad actors seem disinclined to actually address bad behavior, using the cover of privacy as a way to opt out of penalizing the bad actors or of addressing the bad behavior in any serious way.

What’s a member of the community to do in such circumstances? Given that the bad behavior is bad because it has harmful effects on the community and its members, should those aware of the bad behavior call the community’s attention to it, in the hopes that the community can respond to it (or that the community’s scrutiny will encourage the bad actor to cease the bad behavior)?

Arguably, a community that is harmed by bad behavior has an interest in knowing when that behavior is happening, and who the bad actors are. As well, the community has an interest in stopping the bad behavior, in mitigating the harms it has already caused, and in discouraging further such behavior. Naming-and-shaming bad actors may be an effective way to secure these interests.

I don’t think this means naming-and-shaming is the only possible way to secure these interests, nor that it is always the best way to do so. Sometimes, however, it’s the tool that’s available that seems likely to do the most good.

There’s not a simple algorithm or litmus test that will tell you when shaming bad actors is the best course of action, but there are questions that are worth asking when assessing the options:

  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, goes unchallenged?
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged privately? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged publicly? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)

Challenging bad behavior is not without costs. Depending on your status within the community, challenging a bad actor may harm you more than the bad actor. However, not challenging bad behavior has costs, too. If the community and its members aren’t prepared to deal with bad behavior when it happens, the community has to bear those costs.
_____
* Let me be clear that this post is focused on the broader question of publicly calling out bad behavior rather than on the specific details of Adria Richards’ response to the people behind her at the tech conference, whether she ought to have found their jokes unwelcoming, whether she ought to have responded to them the way she did, or what have you. Since this post is not about whether Adria Richards did everything right (or everything wrong) in that particular instance, I’m going to be quite ruthless in pruning comments that are focused on her particular circumstances or decisions. Indeed, commenters who make any attempt to use the comments here to issue threats of violence against Richards (of the sort she is receiving via social media as I compose this post), or against anyone else, will have their information (including IP address) forwarded to law enforcement.

If you’re looking for my take on the details of the Adria Richards case, I’ll have a post up on my other blog within the next 24 hours.

Building a scientific method around the ideal of objectivity.

While modern science seems committed to the idea that seeking verifiable facts accessible to anyone is a good strategy for building a reliable picture of the world as it really is, historically these two senses of objectivity (knowledge anyone can check, and knowledge of the world as it really is) have not always gone together. Peter Machamer describes a historical moment when the two were coupled in his article, “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy.” [1]

Prior to the emergence of a scientific method that stressed objectivity, Machamer says, most people thought knowledge came from divine inspiration (whether written in holy books or transmitted by religious authorities) or from ancient sources that were only shared with initiates (think alchemy, stone masonry, and healing arts here). Knowledge, in other words, was a scarce resource that not everyone could get his or her hands (or brains) on. To the extent that a person found the world intelligible at all, it was probably based on the story that someone else in a special position of authority was telling.

How did this change? Machamer argues that it changed when people started to think of themselves as individuals. The erosion of feudalism, the reformation and counter-reformation, European voyages to the New World (which included encounters with plants, animals, and people previously unknown in the Old World), and the shift from a geocentric to a heliocentric view of the cosmos all contributed to this shift by calling old knowledge and old sources of authority into question. As the old sources of knowledge became less credible (or at least less monopolistic), the individual came to be seen as a new source of knowledge.

Machamer describes two key aspects of individuality at work. One is what he calls the “Epistemic I.” This is the recognition that an individual can gain knowledge and ideas directly from his or her own interactions with the world, and that these interactions depend on senses and powers of reason that all humans have (or could have, given the opportunity to develop them). This recognition casts knowledge (and the ability to get it) as universal and democratic. The power to build knowledge is not concentrated in the hands (or eyes) of just the elite — this power is our birthright as human beings.

The other side of individuality here is what Machamer calls the “Entrepreneurial I.” This is the belief that an individual’s insights deserve credit and recognition, perhaps even payment. This recognition casts the individual who has them as a leader, or a teacher — definitely, as a special human worth listening to.

Pause for a moment to notice that this tension is still present in science. For all the commitment to science as an enterprise that builds knowledge from observations of the world that others must be able to make (which is the whole point of reproducibility), scientists also compete for prestige and career capital based on which individual was the first to observe (and report observing) a particular detail that anyone could see. Seeing something new is not effortless (as we’ve discussed in the last two posts), but there’s still an uneasy coexistence between the idea of scientific knowledge-building as within the powers of normal human beings and the idea of scientific knowledge-building as the activity of special human beings with uniquely powerful insights and empirical capacities.

The two “I”s that Machamer describes came together as thinkers in the 1600s tried to work out a reliable method by which individuals could replace discredited sources of “knowledge” and expand on what remained to produce their own knowledge. Lots of “natural philosophers” (what we would call scientists today) set out to formulate just such a method. The paradox here is that each thinker was selling (often literally) a way of knowing that was supposed to work for everyone, while simultaneously presenting himself as the only one clever enough to have found it.

Looking for a method that anyone could use to get the facts about the world, the thinkers Machamer describes recognized that they needed to formulate a clear set of procedures that was broadly applicable to the different kinds of phenomena in the world about which people wanted to build knowledge, that was teachable (rather than being a method that only the person who came up with it could use), and that was able to bring about consensus and halt controversy. However, in the 1600s there were many candidates for this method on offer, which meant that there was a good bit of controversy about the question of which method was the method.

Among the contenders for the method, the Baconian method involved cataloguing many experiences of phenomena, then figuring out how to classify them. The Galilean method involved representing the phenomena in terms of mechanical models (and even going so far as to build the corresponding machine). The Hobbesian method focused on analyzing compositions and divisions of substances in order to distinguish causes from effects. And these were just three contenders in a crowded field. If there was a common thread in these many methods, it was describing or representing the phenomena of interest in spatial terms. In the seventeenth century, as now, seeing is believing.

In a historical moment when people were considering the accessibility and the power of knowledge through experience, it became clear to the natural philosophers trying to develop an appropriate method that such knowledge also required control. To get knowledge, it was not enough to have just any experience: you had to have the right kind of experiences. This meant that the methods under development had to give guidance on how to track empirical data and then analyze it. As well, these methods had to invent the concept of a controlled experiment.

Whether it was in a published dialogue or an experiment conducted in a public space before witnesses, the natural philosophers developing knowledge-building methods recognized the importance of demonstration. Machamer writes:

Demonstration … consists in laying a phenomenon before oneself and others. This “laying out” exhibits the structure of the phenomenon, exhibits its true nature. What is laid out provides an experience for those seeing it. It carries informational certainty that causes assent. (94)

Interestingly, there seems to have been an assumption that once people hit on the appropriate procedure for gathering empirical facts about the phenomena, these facts would be sufficient to produce agreement among those who observed them. The ideal method was supposed to head off controversy. Disagreements were either a sign that you were using the wrong method, or that you were using the right method incorrectly. As Machamer describes it:

[T]he doctrines of method all held that disputes or controversies are due to ignorance. Controversies are stupid and accomplish nothing. Only those who cannot reason properly will find it necessary to dispute. Obviously, as noted, the ideal of universality and consensus contrasts starkly with the increasing number of disputes that engage these scientific entrepreneurs, and with the entrepreneurial claims of each that he alone has found the true method.

Ultimately, what stemmed the proliferation of competing methods was a professionalization of science, in which the practitioners essentially agreed to be guided by a shared method. The hope was that the method the scientific profession agreed upon would be the one that allowed scientists to harness human senses and intellect to best discover what the world is really like. Within this context, scientists might still disagree about the details of the method, but they took it that such disagreements ought to be resolved in such a way that the resulting methodology better approximated this ideal method.

The adoption of shared methodology and the efforts to minimize controversy are echoed in Bruce Bower’s [2] discussion of how the ideal of objectivity has been manifested in scientific practices. He writes:

Researchers began to standardize their instruments, clarify basic concepts, and write in an impersonal style so that their peers in other countries and even in future centuries could understand them. Enlightenment-influenced scholars thus came to regard facts no longer as malleable observations but as unbreakable nuggets of reality. Imagination represented a dangerous, wild force that substituted personal fantasies for a sober, objective grasp of nature. (361)

What the seventeenth-century natural philosophers Machamer describes were striving for is clearly recognizable to us as objectivity: both in the form of an objective method for producing knowledge and in the form of a body of knowledge that gives a reliable picture of how the world really is. The objective scientific method they sought was supposed to produce knowledge we could all agree upon and to head off controversy.

As you might imagine, the project of building reliable knowledge about the world has pushed scientists in the direction of also building experimental and observational techniques that are more standardized and require less individual judgment across observers. But an interesting side-effect of this focus on objective knowledge as a goal of science is the extent to which scientific reports can make it look like no human observers were involved in making the knowledge being reported. The passive voice of scientific papers — these procedures were performed, these results were observed — does more than just suggest that the particular individuals that performed the procedures and observed the results are interchangeable with other individuals (who, scientists trust, would, upon performing the same procedures, see the same results for themselves). The passive voice can actually erase the human labor involved in making knowledge about the world.

This seems like a dangerous move when objectivity is not an easy goal to achieve, but rather one that requires concerted teamwork along with one’s objective method.
_____________

[1] “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” in Peter Machamer, Marcello Pera, and Aristides Baltas (eds.), Scientific Controversies: Philosophical and Historical Perspectives. Oxford University Press, 2000.

[2] Bruce Bower, “Objective Visions,” Science News, 5 December 1998, Vol. 154, pp. 360–362.

The challenges of objectivity: lessons from anatomy.

In the last post, we talked about objectivity as a scientific ideal aimed at building a reliable picture of what the world is actually like. We also noted that this goal travels closely with the notion of objectivity as what anyone applying the appropriate methodology could see. But, as we saw, it takes a great deal of scientific training to learn to see what anyone could see.

The problem of how to see what is really there is not a new one for scientists. In her book The Scientific Renaissance: 1450-1630 [1], Marie Boas Hall describes how this issue presented itself to Renaissance anatomists. These anatomists endeavored to learn about the parts of the human body that could be detected with the naked eye and the help of a scalpel.

You might think that the subject matter of anatomy would be more straightforward for scientists to “see” than the cells Fred Grinnell describes [2] (discussed in the last post), which require preparation and staining and the twiddling of knobs on microscopes. However, the most straightforward route to gross anatomical knowledge, dissections of cadavers, had its own challenges. For one thing, cadavers (especially human cadavers) were often in short supply. When they were available, anatomists hardly ever performed solitary dissections of them. Rather, dissections were performed, quite literally, for an audience of scientific students, generally with a surgeon doing the cutting while a professor stood nearby and read aloud from an anatomical textbook describing the organs, muscles, or bones encountered at each stage of the dissection process. The hope was that the features described in the text would match the features being revealed by the surgeon doing the dissecting, but there were doubtless instances where the audio track (as it were) was not quite in sync with the visual. Also, as a practical matter, before the invention of refrigeration, dissections were seasonal, performed in the winter rather than the warmer months to retard the cadaver’s decomposition. This put limits on how much anatomical study a person could cram into any given year.

In these conditions, most of the scientists who studied anatomy logged many more hours watching dissections than performing dissections themselves. In other words, they were getting information about the systems of interest by seeing rather than by doing, and they weren’t always seeing those dissections from the good seats. Thus, we shouldn’t be surprised that anatomists greeted the invention of the printing press by producing a number of dissection guides and anatomy textbooks.

What’s the value of a good textbook? It shares detailed information compiled by another scientist, sometimes over the course of years of study, yet you can consume that information in a more timely fashion. If it has diagrams, it can give you a clearer view of what there is to observe (albeit through someone else’s eyes) than you may be able to get from the cheap seats at a dissection. And, if you should be so lucky as to get your own specimens for study, a good textbook can guide your examination of the new material before you, helping you deal with the specimen in a way that lets you see more of what there is to see (including spatial relations and points of attachment) rather than messing it up with sloppy dissection technique.

Among the most widely used anatomy texts in the Renaissance were “uncorrupted” translations of On the Use of the Parts and Anatomical Procedures by the ancient Greek anatomist Galen, and the groundbreaking new text On the Fabric of the Human Body (published in 1543) by Vesalius. The revival of Galen fit into a pattern of Renaissance celebration of the wisdom of the ancients rather than setting out to build “new” knowledge, and Hall describes the attitude of Renaissance anatomists toward his work as “Galen-worship.” Had Galen been alive during the Renaissance, he might well have been irritated at the extent to which his discussions of anatomy (based on dissections of animals, not human cadavers) were taken to be authoritative. Galen himself, as an advocate of empiricism, would have urged other anatomists to “dissect with a fresh eye,” attentive to what the book of nature (as written on the bodies of creatures to be dissected) could teach them.

As it turns out, this may be the kind of thing that’s easier to urge than to do. Hall asks,

[W]hat scientific apprentice has not, many times since the sixteenth century, preferred to trust the authoritative text rather than his own unskilled eye? (137)

Once again, it requires training to be able to see what there is to see. And surely someone who has written textbooks on the subject (even centuries before) has more training in how to see than does the novice leaning on the textbook.

Of course, the textbook becomes part of the training in how to see, which can, ironically, make it harder to be sure that what you are seeing is an accurate reflection of the world, not just of the expectations you bring to your observations of it.

The illustrations in the newer anatomy texts made it seem less urgent to anatomy students that they observe (or participate in) actual dissections for themselves. As the technique for mass-produced illustrations got better (especially with the shift from woodcuts to engravings), the illustrators could include much more detail in their images. Paradoxically, this could be a problem, as the illustrator was usually someone other than the scientist who wrote the book, and the author and illustrator were not always in close communication as the images were produced. Given a visual representation of what there is to observe and a description of what there is to observe in the text, which would a student trust more?

Bruce Bower discusses this sort of problem in his article “Objective Visions,” [3] describing the procedures used by Dutch anatomist Bernhard Albinus in the mid-1700s to create an image of the human skeleton. Bower writes:

Albinus carefully cleans, reassembles, and props up a complete male skeleton; checks the position of each bone in comparison with observations of an extremely skinny man hired to stand naked next to the skeleton; he calculates the exact spot at which an artist must sit to view the skeleton’s proportions accurately; and he covers engraving plates with cross-hatched grids so that images can be drawn square-by-square and thus be reproduced more reliably. (360)

Here, it sounds like Albinus is trying hard to create an image that accurately conveys what there is to see about the skeleton and its spatial relations. The methodology seems designed to make the image-creation faithful to the particulars of the actual specimen — in a word, objective. But, Bower continues:

After all that excruciating attention to detail, the eminent anatomist announces that his atlas portrays not a real skeleton, but an idealized version. Albinus has dictated alterations to the artist. The scrupulously assembled model is only a springboard for insights into a more “perfect” representation of the human skeleton, visible only to someone with Albinus’ anatomical acumen. (360)

Here, Albinus was trying to abstract away from the peculiarities of the particular skeleton he had staged as a model for observation in order to describe what he saw as the real thing. This is a decidedly Platonist move. Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms, and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Whether you’re a Platonist, an Aristotelian, or something else, it seems pretty clear that scientists do decide that some features of the systems they’re studying are crucial and others are not. They distinguish what they take to be background from what they take to be the thing they’re observing. Rather than presenting every single squiggle in their visual field, they abstract away to present the piece of the world they’re interested in talking about.

And this is where the collaboration between anatomist and illustrator gets ticklish. What happens if the engraver is abstracting away from the observed particulars differently than the anatomist would? As Hall notes, the engravings in Renaissance anatomy texts were not always accurate representations of the texts. (Nor, for that matter, did the textual descriptions always get the anatomical features right — Renaissance anatomists, Vesalius included, managed to repeat some anatomical mistakes that went back to Galen, likely because they “saw” their specimens through a lens of expectations shaped by what Galen said they were going to see.)

On top of this, the fact that artists like Leonardo Da Vinci studied anatomy to improve their artistic representations of the human form spilled back to influence Renaissance scientific illustrators. These illustrators, as much as their artist contemporaries, may have looked beyond the spatial relations between bones or muscles or internal organs for hidden beauty in their subjects. While this resulted in striking illustrations, it also meant that their engravings were not always accurate representations of the cadavers that were officially their subjects.

These factors conspired to produce visually arresting anatomy texts that exerted an influence on how the anatomy students using them understood the subject, even when these students went beyond the texts to perform their own dissections. Hall writes,

[I]t is often quite easy to “see” what a textbook or manual says should be seen. (141)

Indeed, faced with a conflict between the evidence of one’s eyes pointed at a cadaver and the evidence of one’s eyes pointed at an anatomical diagram, one might easily conclude that the cadaver in question was a weird variant while the diagram captured the “standard” configuration.

Bower’s article describes efforts scientists made to come up with visual representations that were less subjective. Bower writes:

Scientists of the 19th century rapidly adopted a new generation of devices that rendered images in an automatic fashion. For instance, the boxy contraption known as the camera obscura projected images of a specimen, such as a bone or a plant, onto a surface where a researcher could trace its form onto a piece of paper. Photography soon took over and further diminished human involvement in image-making. … Researchers explicitly equated the manual representation of items in the natural world with a moral code of self-restraint. … A blurry photograph of a star or ragged edges on a slide of tumor tissues were deemed preferable to tidy, idealized portraits. (361)

Our naïve picture of objectivity may encourage us to think that seeing is believing, and that mechanically captured images are more reliable than those rendered by the hand of a (subjective) human, but it’s important to remember that pictures (even photographs) have points of view, depend on choices made about the conditions of their creation, and can be used as arguments to support one particular way of seeing the world over another.

In the next post, we’ll look at how seventeenth-century “natural philosophers” labored to establish a general-use method for building reliable knowledge about the world, and at how the notion of objectivity was connected to these efforts, and to the recognizable features of “the scientific method” that resulted.
_____________

[1] Marie Boas Hall, The Scientific Renaissance: 1450-1630. Dover, 1994.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

[3] Bruce Bower, “Objective Visions,” Science News, 5 December 1998, Vol. 154, pp. 360–362.