Lads’ mags, sexism, and research in psychology: an interview with Dr. Peter Hegarty (part 2).

In this post, I continue my interview with Dr. Peter Hegarty, a social psychologist at the University of Surrey and one of the authors of "'Lights on at the end of the party': Are lads' mags mainstreaming dangerous sexism?", which was published in The British Journal of Psychology in December. My detailed discussion of that paper is here. The last post presented part 1 of our interview, in which Dr. Hegarty answered questions about the methodology of this particular research, as well as about some of the broader methodological differences between research in psychology and in sciences that are focused on objects of study other than humans.

Janet Stemwedel: It’s been pointed out that the university students who seem to be the most frequent subjects of psychological research are WEIRD (Western, Educated, Industrialized, Rich, and Democratic). Is the WEIRDness of university students as subjects in this research something that should make us cautious about the strength of the conclusions we draw? Or are university students actually a reasonably appropriate subject pool from the point of view of exploring how lads’ mags work?

Peter Hegarty: According to the historian Kurt Danziger in his book Constructing the Subject, students became an unmarked “normative” subject population for psychologists, at least in the United States, between the world wars. Since then, criticisms of over-reliance on student samples have been common (such as those of Quinn McNemar in the 1940s, or David Sears in the 1980s). Within the history of this criticism, perhaps what is most distinct about the recent argument about WEIRDness is that it draws on the developments in cultural psychology of the last 20 years or so. For this specific study, our rationale for studying young people on a campus was not only convenience; they are also the target market for these magazines, by virtue of their age, and by virtue of possessing the disposable income to purchase them.

May I take the time to offer a slightly broader perspective on the problem of under- and over-representation of social groups in psychology? The issue is not simply one of who gets included and who does not. Groups can be disempowered and science compromised when they are erased (as the WEIRD criticism presumes), and groups can also be disempowered when they are consistently located within the psychologists’ gaze – as in Foucauldian disciplinary power. African-Americans are oversampled in the US literature on forensic psychology, but that literature is not anti-racist; it is largely based on a “deficit” model of race (Carter & Forsythe, 2007). The issue is not simply one of inclusion or exclusion, but one of how inclusion happens, as sociologist Steven Epstein’s work on inclusive paradigms in medicine nicely shows.

In other experiments and content analyses, my colleagues and I have found that people spontaneously explain group differences by attending to lower-power groups more of the time. In our own research we have observed this pattern in scientists’ publications and in explanations produced in the lab with regard to race, gender, and sexuality, for example (Hegarty & Buechel, 2006; Hegarty & Pratto, 2004). On the face of it, this might lead to greater stereotyping of the lower-power “marked” group. Indeed, as Suzanne Bruckmüller’s work on linguistic framing subtly shows, once a group is positioned as “the effect to be explained” in an account of group differences, people tend to infer that the group has less power (Bruckmüller & Abele, 2010). Our work suggests that, to trouble the “normative” status that WEIRD people occupy in our ontologies, inclusion is necessary but not sufficient. It is also important to reframe our questions about difference to think concretely about normative groups. In the case of our lads’ mags research, we were heartened that people were prompted to reframe questions about the widespread problem of violence against women away from the small category of convicted rapists, and to ask broader questions about how such violence is normalized.

JS: A lot of scientists seem to have a love/hate relationship with mass media. They want the public to understand their research and why it’s interesting and important, but media coverage sometimes gets the details badly wrong, or obliterates the nuance.  And, given the subject matter of your research (which the average person might reasonably connect to his or her own concerns more easily than anything we might learn about the Higgs boson), it seems like misunderstandings of what the research means could get amplified pretty quickly.  What has your experience been as far as the media coverage of your research?  Are there particular kinds of issues you’d like the public to grasp better when they read or hear about this kind of research?

PH: Your question touches on the earlier point about the difference between the human and natural sciences. Our work is caught up in “looping effects” as people interpret it for themselves, but the Higgs boson doesn’t care whether the folks at CERN discover it or not. (I think – I’m no expert on sub-atomic physics!) Although some research that I released last year on sexist language got good coverage in the media (Hegarty, Watson, Fletcher & McQueen, 2011), the speed and scale of the reaction to the Horvath et al. (2011) paper was a new experience for me, so I am learning about the media as I go.

There is no hard and fast boundary anymore between “the media” and “the public” who are ‘influenced’ by that media; I’m not sure there ever was one. The somewhat ‘viral’ reaction to this work on social networking sites such as Twitter was visibly self-correcting in ways that don’t fit with social scientists’ theories that blame the media for beguiling the public. Some journalists misunderstood the procedures of Experiment 1 in our study, and it was misdescribed in some media sources. But on Twitter, folks were redirecting those who were reproducing that factual error to the Surrey website. Overall, watching the Twitter feeds reminded me most of the experience of giving a class of students an article to discuss and watching a very useful conversation emerge about what the studies had hypothesized, what they had found, how much you might conclude from the results, and what the policy implications might be. I am somewhat more optimistic about the affordances of social media for education as a result of this experience.

JS: Given the connection between your research questions in this research and actual features of our world that might matter to us quite a lot (like how young men view and interact with the women with whom they share a world), it seems like ultimately we might want to *use* what we learn from the research to make things better, rather than just saying, “Huh, that’s interesting.”  What are the challenges to moving from description to prescription here?  Are there other “moving parts” of our social world you think we need to understand better to respond effectively to what we learn from studies like these?

PH: Related to what I’ve said above, I would like people to see the research as a “red flag” about the range and character of media that young people now read, and which are considered “normal.” There are now numerous anecdotes on the web of people who have been prompted by this research to look at a lads’ mag for the first time – and been surprised or shocked by what they see. We are also in contact with some sex educators about how this work might be used to educate men for a world in which this range of media exists.

Precisely because we think this research might have relevance for a broad range of people who care that people should have pleasure, intimacy, and sex without violence, bullying, and hatred, we have suggested that it should prompt investment in sex education rather than censorship. In so doing, we are adopting an ‘incrementalist’ approach to people’s intelligence about sex and sexual literacy. Carol Dweck’s work shows that children and young people who believe their intelligence to be a fixed ‘entity’ do not fare as well academically as those who believe their intelligence might be something ‘incremental’ that can be changed through effort. Censorship approaches seem to us to be based on fear, and to assume a rather fixed limit to the possibilities of public discourse about sex. We do not make those assumptions, but we fear that they can become self-fulfilling prophecies.

JS: How do you keep your prescriptive hunches from creeping into the descriptive project you’re trying to do with your research?

PH: I’m not sure that it is possible or desirable to exclude subjectivity from science; your last question obliged me to move from description to prescription. It is sometimes striking how much many scientists want to be ‘above politics’ and yet influence policy, to advocate and yet remain value-neutral, to change the world but not to intervene, and so on. My thinking on this matter borrows more from Sandra Harding’s view of ‘strong objectivity,’ and particularly her idea that the science we get is affected by the range of people included in its production and the forms of social relationships in which they participate. I also think that Steven Shapin’s book A Social History of Truth is a useful, albeit distal, explanation of why the question of subjectivity in science is often seen as an affront to honour and the opposite of reasoned, dispassionate discussion. In the UK, there is now an obligation on scientists to engage non-academic publics by reporting ‘impact summaries’ to the government as part of national exercises for documenting research excellence. However, this policy can overlook the importance of two-way dialogue between academic and non-academic audiences about how we create different kinds of knowledge for different kinds of purposes. For those reasons, I’m grateful for the opportunity to participate in a more dialogical forum about science and ethics like this one.

Bibliography

Bruckmüller, S., & Abele, A. (2010). Comparison focus in intergroup comparisons: Who we compare to whom influences who we see as powerful and agentic. Personality and Social Psychology Bulletin, 36, 1424-1435.

Carter, R. T., & Forsythe, J. M. (2007). Examining race and culture in psychology journals: The case of forensic psychology. Professional Psychology: Research and Practice, 38, 133-142.

Danziger, K. (1990). Constructing the Subject: Historical Origins of Psychological Research. Cambridge, UK: Cambridge University Press.

Dweck, C. (2000). Self-theories: Their Role in Motivation, Personality and Development. Psychology Press.

Epstein, S. (2007). Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press.

Foucault, M. (1978). Discipline and Punish: The Birth of the Prison. Trans. Alan Sheridan. New York: Random House.

Hacking, I. (1995). The looping effects of human kinds. In Dan Sperber, David Premack and Ann James Premack (Eds.), Causal Cognition: A Multi-Disciplinary Debate (pp. 351-383). Oxford, UK: Oxford University Press.

Harding, S. (1987). The Science Question in Feminism. Ithaca, NY: Cornell University Press.

Hegarty, P., & Buechel, C. (2006). Androcentric reporting of gender differences in APA journals: 1965-2004. Review of General Psychology, 10, 377-389.

Hegarty, P., & Pratto, F. (2004). The differences that norms make: Empiricism, social constructionism, and the interpretation of group differences. Sex Roles, 50, 445-453.

Hegarty, P. J., Watson, N., Fletcher, L., & McQueen, G. (2011). When gentlemen are first and ladies are last: Effects of gender stereotypes on the order of romantic partners’ names. British Journal of Social Psychology, 50, 21-35.

Horvath, M. A. H., Hegarty, P., Tyler, S., & Mansfield, S. (2011). “Lights on at the end of the party”: Are lads’ mags mainstreaming dangerous sexism? British Journal of Psychology. Available from http://onlinelibrary.wiley.com/doi/10.1111/j.2044-8295.2011.02086.x/abstract

McNemar, Q. (1940). Sampling in psychological research. Psychological Bulletin, 37, 331-365.

Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51, 515-530.

Shapin, S. (1994). A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago: University of Chicago Press.

The Research Works Act: asking the public to pay twice for scientific knowledge.

There’s been a lot of buzz in the science blogosphere recently about the Research Works Act, a piece of legislation that’s been introduced in the U.S. that may have big impacts on open access publishing of scientific results. John Dupuis has an excellent round-up of posts on the subject. I’m going to add my two cents on the overarching ethical issue.

Here’s the text of the Research Works Act:

No Federal agency may adopt, implement, maintain, continue, or otherwise engage in any policy, program, or other activity that–

(1) causes, permits, or authorizes network dissemination of any private-sector research work without the prior consent of the publisher of such work; or

(2) requires that any actual or prospective author, or the employer of such an actual or prospective author, assent to network dissemination of a private-sector research work. …

In this Act:

(1) AUTHOR- The term ‘author’ means a person who writes a private-sector research work. Such term does not include an officer or employee of the United States Government acting in the regular course of his or her duties.

(2) NETWORK DISSEMINATION- The term ‘network dissemination’ means distributing, making available, or otherwise offering or disseminating a private-sector research work through the Internet or by a closed, limited, or other digital or electronic network or arrangement.

(3) PRIVATE-SECTOR RESEARCH WORK- The term ‘private-sector research work’ means an article intended to be published in a scholarly or scientific publication, or any version of such an article, that is not a work of the United States Government (as defined in section 101 of title 17, United States Code), describing or interpreting research funded in whole or in part by a Federal agency and to which a commercial or nonprofit publisher has made or has entered into an arrangement to make a value-added contribution, including peer review or editing. Such term does not include progress reports or raw data outputs routinely required to be created for and submitted directly to a funding agency in the course of research.

(Bold emphasis added.)

Let’s take this at the most basic level. If public money is used to fund scientific research, does the public have a legitimate expectation that the knowledge produced by that research will be shared with the public? If not, why not? (Is the public allocating scarce public funds to scientific knowledge-building simply to prop up that sector of the economy and/or keep the scientists off the streets?)

Assuming that the public has the right to share in the knowledge built on the public’s dime, should the public have to pay to access that knowledge (at around $30 per article) from a private sector journal? The text of the Research Works Act suggests that such private sector journals add value to the research that they publish in the form of peer review and editing. Note, however, that peer review for scientific journals is generally done by other scientists in the relevant field for free. Sure, the journal editors need to be able to scare up some likely candidates for peer reviewers, email them, and secure their cooperation, but the value being added in terms of peer reviewing here is added by volunteers. (Note that the only instance of peer reviewing in which I’ve participated where I’ve actually been paid for my time involved reviewing grant proposals for a federal agency. In other words, the government doesn’t think peer review should be free … but a for-profit publishing concern can help itself to free labor and claim to have added value by virtue of it.)

Maybe editing adds some value, although editors of private sector journals have been taken to task for favoring flashy results, and for occasionally subverting their own peer review process to get those flashy results published. But there’s something like agreement that the interaction between scientists that happens in peer review (and in post-publication discussions of research findings) is what makes scientific knowledge scientific. That is to say, peer review is recognized as the value-adding step science could not do without.

The public is all too willing already to see public money spent funding scientific research as money wasted. If members of the public have to pay again to access research their tax dollars already paid for, they are likely to be peeved. They would not be wrong to feel like the scientific community had weaseled out of fulfilling its obligation to share the knowledge it builds for the good of the public. (Neither would they be wrong to feel like their government had fallen down on an ethical obligation to the public here, but whose expectations of their government aren’t painfully low at the moment?) A rightfully angry public could mean less public funding for scientific research — which means that there are pragmatic, as well as ethical, reasons for scientists to oppose the Research Works Act.

And, whether or not the Research Works Act becomes the law of the land in the USA, perhaps scientists’ ethical obligations to share publicly funded knowledge with the public ought to make them think harder — individually and as a professional community — about whether submitting their articles to private sector journals, or agreeing to peer review submissions for private sector journals, is really compatible with living up to these obligations. There are alternatives to these private sector journals, such as open access journals. Taking those alternatives seriously probably requires rethinking the perceived prestige of private sector journals and how metrics of that prestige come into play in decisions about hiring, promotion, and distribution of research funds, but sometimes you have to do some work (individually and as a professional community) to live up to your obligations.

Science and ethics shouldn’t be muddled (or, advice for Jesse Bering).

Jesse Bering’s advice column is provoking some strong reactions. Most of these suggest that his use of evolutionary psychology in his answers lacks a certain scientific rigor, or that he’s being irresponsible in providing what looks like scientific cover for adult men who want to have sex with pubescent girls.

My main issue is that the very nature of Jesse Bering’s column seems bound to muddle scientific questions and ethical questions.

In response to this letter:

Dear Jesse,
I am a non-practicing heterosexual hebephile—and I think most men are—and find living in this society particularly difficult given puritanical, feminist, and parental forces against the normal male sex drive. If sex is generally good for both the body and the brain, then how is a teen having sex with an adult (versus another teen) bad for their mind? I feel like the psychological arguments surrounding the present age of consent laws need to be challenged. My focus is on consensual activity being considered always harmful in the first place. Since the legal notions of consent are based on findings from the soft sciences, shouldn’t we be a little more careful about ruining an adult life in these cases?
—Deep-thinking Hebephile

Jesse Bering offers:

  • The claim that “there are few among us who aren’t the direct descendents of those who’d be incarcerated as sex offenders today”.
  • A pointer to research on men’s measurable penile response to sexualized depiction of very young teenagers.
  • A comment that “there’s some reason to believe that a hebephilic orientation would have been biologically adaptive in the ancestral past”.
  • A mention of the worldwide variations in age-of-consent laws as indicative of deep cultural disagreements.
  • A pointer to research that “challenge[s] the popular notion that sex with underage minors is uniformly negative for all adolescents in such relationships” (although it turns out the subjects of this research were adolescent boys; given cultural forces acting on boys and girls, this might make a difference).
  • An anecdote about a 14-year-old boy who got to have sex with a prostitute before being killed by the Nazis in a concentration camp, and about how this made his father happy.
  • A comment that “Impressionist artist Paul Gauguin relocated to French Polynesia to satisfy his hebephilic lust with free-spirited Tahitian girls” in the 19th Century, but that now in the 21st century there’s less sympathy for this behavior.

And this is advice?*

Let’s pick up on just one strand of the scientific information referenced in Jesse Bering’s answer. If there exists scientific research that suggests that your trait is shared by others in the population, or that your trait may have been an adaptive one for your ancestors earlier in our evolutionary journey, what exactly does that mean?

Does it mean that your trait is a good one for you to have now? It does not.

Indeed, we seem to have no shortage of traits that may well have helped us dodge the extinction bullet but now are more likely to get us into trouble given our current environment. (Fondness for sweets is the one that gets me, and I still have cookies to bake.) Just because a trait, or a related behavior, comes with an evolutionary origin story doesn’t make it A-OK.

Otherwise, you could replace ethics and moral philosophy with genetics and evolutionary psychology.

Chris Clarke provides a beautiful illustration of how badly off the rails we might go if we confuse scientific explanation with moral justification — or with actual advice, for that matter.

This actually raises the question of what exactly Jesse Bering intends to accomplish with his “advice column”. Here’s what he says when describing the project:

Perhaps in lieu of offering you advice on how to handle your possibly perverted father-in-law who you suspect is an elderly frotteur, or how to be tactful while delicately informing your co-worker that she smells like a giant sewer rat, I can give you something even better—a peek at what the scientific data have to say about your particular issue. In other words, perhaps I can tell you why you’re going through what you are rather than what to do about it. I may not believe in free will, but I’m a firm believer that knowledge changes perspective, and perspective changes absolutely everything. Once you have that, you don’t need anyone else’s advice.

And good advice is really only good to the extent it aligns with actual research findings, anyway. Nearly two centuries worth of data in the behavioral sciences is available to inform our understanding of our everyday (and not so everyday) problems, yet rarely do we take advantage of this font of empirical wisdom…

That’s not to say that I can’t give you a piece of my subjective mind alongside the objective data. I’m happy to judge you mercilessly before throwing you and your awkward debacle to the wolves in the comments section. Oh, I’m only kidding—kind of. Actually, anyone who has read my stuff in the past knows that I’m a fan of the underdog and unconventional theories and ideas. Intellectual sobriety has never been a part of this blog and never will be, if I can help it, so let’s have a bit of fun.

(Bold emphasis added.)

Officially, Jesse Bering says he’s not offering advice, just information. It may end up being perspective-changing information, which will lead to the advice-asker no longer needing to ask anyone for advice. But it’s not actually advice!

As someone who teaches strategies in moral decision-making, I will note here that taking other people’s interests into account is absolutely central to being ethical. One way we can get a handle on other people’s interests is by asking others for advice. And, we don’t usually conceive of getting information about others and their interests as a one-shot deal.

On the point that good advice ought to align with “actual research findings,” I imagine Jesse Bering is taking actual research findings as our best current approximation of the facts. It’s important to recognize, though, that there are some published research findings that turn out to have been fabricated or falsified, and others that were the result of honest work but that have serious methodological shortcomings. Some scientific questions are hard. Even our best actual research findings may provide limited insight into how to answer them.

All of which is to say, it seems like what might really help someone looking for scientific information relevant to his personal problem would be a run-down of what the best available research tells us — and of what uncertainties still remain — rather than just finding some quirky handful of studies.

Indeed, Jesse Bering notes that he’s a fan of unconventional theories and ideas. On the one hand, it’s good to put this bias on the table. However, it strikes me that his recognition of this bias puts an extra obligation on him when he offers his services to advice seekers: an obligation to cast a heightened critical eye on the methodology used to conduct the research that supports such theories and ideas.

And maybe this comes back to the question of what the people writing to Jesse Bering for advice are actually looking for. If they want the comfort of knowing what the scientists know about X (for whatever X it is the writer is asking about), they ought to be given an accurate sense of how robust or tenuous that scientific knowledge actually is.

As well, they ought to be reminded that what we know about where X came from is a completely separate issue from whether I ought to let my behavior be directed by X. Scientific facts can inform our ethical decisions, but they don’t make the ethical questions go away.

_______
*Stephanie Zvan offers the best actual response to the letter-writer’s request for advice, even if it wasn’t the answer the letter-writer wanted to hear.

In which I form the suspicion that I am not Nature’s intended audience.

Without the benefit of lots of time for reflection or analysis, my off-the-cuff reactions to Ed Rybicki’s piece “Womanspace” in the “Futures” section of Nature:

  1. It suggests (incorrectly) that I, as a middle-aged woman, might not be so interested in electronic gadgets or classic rock.
  2. And that I, as a woman, have some innate (or socially conditioned) “gatherer” approach to shopping, which I don’t; I’m more of the “hunter” Rybicki describes, which I suppose makes me masculine.
  3. As well, being a “hunter”-style shopper does not get me out of primary responsibility for acquiring clothes for my children. (Indeed, while I have been lectured by a teacher about how worn-out knees and art-related stains on my child’s clothes might erode that child’s self esteem, no teacher has ever taken up this issue with the male parent of that child. It’s clear whose job the teachers think it is to clothe the children properly.)
  4. Also, “a to-die-for pair of discounted shoes” is so far off my shopping radar as to be in some other universe within the multiverse. Again, does this mean I’m not a proper member of the category “women”?
  5. With regards to Rybicki’s question, “Have you never had the experience of talking to your significant female other as you wend your way through the complexity of a supermarket — only to suddenly find her 20 metres away with her back to you?”, my mind is drawn not to gendered differences (whether innate or learned) in movement through space-time but rather to differences (likely learned, likely variable within members of genders) in how people engage (or don’t) with those with whom they are trying to have a conversation.
  6. Even given my fairly low level of shopping-fu, I would never expect to find underwear (“knickers”) in a supermarket. Perhaps this is because I have been responsible for buying my own clothing (and food) for my whole adult life, which has given me at least a passing familiarity with what items are stocked in a supermarket and what items are stocked in a clothing store.
  7. If presenting as male in society would mean that someone else would take on responsibility for buying my clothing, I would seriously consider it. Even though I can’t grow facial hair worth a damn.
  8. Demonstrating incompetence is once again shown to be an excellent strategy for avoiding being asked to take on a task a second time — unless, of course, it is a task that is deemed a “natural” area of competence for members of your gender, in which case you’re pretty much out of luck weaseling out of it. (This is why I have to buy my own damn clothes.)
  9. Once again, I am frustrated that science fiction seems focused mainly on rethinking our technologies and the physical structure of our reality, rather than on imagining new social structures, relations, and expectations about human diversity.

Maybe all this shows is that Rybicki, in his piece, was not talking to me. If so, I hope that Nature is consciously adopting the strategy of being a “lad mag” (albeit a geeky one); otherwise they are unwittingly alienating a good portion of their potential audience, which seems foolish.

* * * * *

For a bigger-picture response, read Christie.

Methodology versus beliefs: What did Marcus Ross do wrong?

We’ve been discussing whether good science has more to do with the methodology you use or with what you believe, and considering the particular case of Ph.D. geoscientist and young earth creationist Marcus Ross (here and here). At least some of the responses to these two posts seem to offer the view that: (1) of course what makes for a reliable piece of scientific knowledge is the methodology used to produce it (and especially to check it for error), but (2) the very fact that Marcus Ross is committed to young earth creationism, which means among other things that he is committed to the belief that the earth is not more than 10,000 years old, is a fatal blow to his scientific credibility as a geoscientist.

Either this boils down to claiming that having young earth creationist beliefs makes it impossible to use scientific methodology and generate a reliable piece of knowledge (even though Ross seems to have done just that in writing his dissertation), or perhaps to claiming instead that a person who holds young earth creationist beliefs and also uses standard scientific methodology to generate bits of scientific knowledge must have some ulterior motive for generating them. In this latter case, I take it the worry is not with the respectability of the product (i.e., the scientific knowledge claims), nor of the process (i.e., the standard sorts of evidence or inferential machinery being used to support the scientific knowledge claims), but rather of the producer (i.e., the person going through all the scientific motions yet still believing in young earth creationism).

I think it’s worth examining the general unease and trying to be more precise about what people think Marcus Ross might be doing wrong here. However, let the record reflect that I have not been surveilling Marcus Ross — not sitting in on the classes he teaches, not tracking down and reading his scientific publications, not following him to geological meetings or church or the supermarket. What this means is that we’re going to be examining hypotheticals here, rather than scads of empirical facts about what Marcus Ross actually does.

Possibility 1: Ross is using his geoscience Ph.D. to gain unwarranted increase in credibility for young earth creationist beliefs.

Ross teaches geology at Liberty University. Part of this teaching seems to involve setting out the kinds of theories, evidence, and inferential machinery (including accepted dating methods and the evidential support for them) that you’d expect students to learn in a geology class in a secular university. Part of it also seems to involve laying out the details of young earth creationism (which is not accepted as scientific by the scientists who make up the field of geoscience), the claims it supports, and on what evidential basis. Obviously, the claims of young earth creationism are bolstered by quite different evidence and a quite distinct (religious) inferential structure.

One approach to this pedagogy would be to bring out the important differences, both in the conclusions of geology and of young earth creationism and in the recognized rules for drawing, testing, and supporting conclusions between the two. Indeed, Ross’s comments make it sound like this is the approach he takes:

In my classes here at Liberty University I introduce my students to the reasons why geologists think the Earth is ancient, or why various organisms are viewed as strong evidence for evolution.  I do this so that they understand that these arguments are well thought-out, and to teach them to respect the ideas of those with whom they disagree.

If Ross is actually making it clear how scientific inference differs from faith-based claims, then it should be clear to any of his students who are paying attention that the science Ross studied in graduate school does not support his young earth creationism. Rather, the science supports the scientific inference. His faith supports young earth creationism. The two are different.

If, on the other hand, Ross were to mischaracterize the theories, evidence, and inferential machinery of geoscience in his classes, that would be bad. It would amount to lying about the nature of geoscience (and perhaps also of science more broadly).

In the same way, if Ross were to claim that the body of geological knowledge, or the methods of geoscience, or the empirical evidence recognized by geoscientists lent scientific support to the claims of young earth creationism, that would also be lying.

Ross (and his students) might still accept young earth creationism, but they would be doing so on religious rather than scientific grounds — something that a careful study of geoscience and its methods should make clear. If anything, such a study should underline that the rules for the scientific credibility of a claim are orthogonal to the rules for the religious credibility of a claim.

Possibility 2: Ross doesn’t intend to use his geoscience Ph.D. to gain an unwarranted increase in credibility for young earth creationist beliefs, but it has that effect on his audience anyway.

You might worry that Marcus Ross’s status as a Ph.D. geoscientist lends extra credibility to all the beliefs he voices — at least when those beliefs are judged by an audience of undergraduates who are enamored of Ph.D.s. That’s a hard degree to get, after all, and you have to be really smart to get one, right? And, smart people (especially those certified to be Ph.D.-smart by Ph.D. granting institutions) have more credible beliefs than everyone else, right?

If Ross’s students are making this sort of judgment about his credibility — and they might well be — it’s a silly judgment to make. It would be akin to assuming that my Ph.D. in chemistry would make me a more credible commentator on the theories of Descartes or Husserl. Let me assure you, it does not! (That’s why I spent six additional years of my life in graduate school developing the expertise relevant for work in philosophy.)

Indeed, the kind of extra credibility young earth creationism might gain in the minds of undergraduates by this route speaks more to a lack of critical thinking on the part of the undergraduates than it does to any dishonesty on Ross’s part. It also makes me yearn for the days of robust teen rebellion and reflexive mistrust of anyone over 30.

We should be fair, though, and recognize that it’s not just college students who can be dazzled by an advanced degree. Plenty of grown-ups in the larger society have the same reaction. Uncritically accepting the authority of the Ph.D. to speak on matters beyond the tether of his expertise is asking to be sold snake oil.

In light of the increased authority non-scientists seem to grant those with scientific training even outside the areas of their scientific expertise, it might be reasonable to ask scientists to be explicit about when they are speaking as scientists and when they are speaking as people with no special authority (or, perhaps, with authority that has some source other than scientific training). But, if we think Marcus Ross has an obligation to note that his scientific training does not support his views in the realm of young earth creationism, we probably ought to hold other scientists to the same obligation when they speak of matters beyond their scientific expertise. Fair is fair.

Possibility 3: Ross is using his engagement with the community of geoscientists to make it appear to outsiders as though his young earth creationist views are scientifically respectable, even though he knows they aren’t.

This is a possibility raised by Donald Prothero’s account of “stealth creationism” at meetings of the Geological Society of America (GSA). Prothero writes:

Most of the time when I attend the meetings, there are plenty of controversial topics and great debates going on within the geological community, so the profession does not suppress unorthodox opinions or play political games. This is the way it should be in any genuine scientific discipline. I’ve seen amazingly confrontational knock-down-drag-out sessions about particularly hotly debated ideas, but always conducted in a spirit of honest scientific exchange and always hewing to rules of science and naturalism. To get on the meeting program, scientists must propose to organize sessions around particular themes, along with field trips to geologically interesting sites within driving distance of the convention city, and the GSA host committee reads and approves these proposals. But every once in a while, I see a poster title and abstract with something suspicious about it. When I check the authors, they turn out to be Young-Earth Creationists (YEC) who claim the earth is only 6000 years old and all of geology can be explained by Noah’s flood. When I visit the poster session, it’s usually mobbed by real geologists giving the YECs a real grilling, even though the poster is ostensibly about some reasonable geologic topic, like polystrate trees in Yellowstone, and there is no overt mention of Noah’s flood in the poster. But the 2010 meeting last year in Denver took the cake: there was a whole field trip run by YECs who did not identify their agenda, and pretended that they were doing conventional geology—until you read between the lines.

Marcus Ross was one of the leaders of the field trip in question, as was Steve Austin of the Institute for Creation Research. Prothero quotes his colleague Steve Newton’s account of this GSA meeting field trip:

Through the entire trip, the leaders never identified themselves as YECs or openly advocated Noah’s flood or a 6000-year-old earth. Instead, the entire trip was filled with stops at outcrops where the leaders emphasized the possible evidence for sudden deposition of the strata at Garden of the Gods near Colorado Springs, without stating explicitly that they believed this sudden deposition was Noah’s flood in action. (There are LOTS of instances of local rapid and sudden deposition of strata in real geology, but they are local and clearly cannot be linked to any global flood). As Newton described it:

Furthermore, the field trip leaders were careful not to make overt creationist references. If the 50 or so field trip participants did not know the subtext and weren’t familiar with the field trip leaders, it’s quite possible that they never realized that the leaders endorsed geologic interpretations completely at odds with the scientific community. Even the GSA Sedimentary Geology Division had initially signed on as a sponsor of the trip (though they backed out once they learned the views of the trip leaders).

But the leaders’ Young-Earth Creationist views were apparent in rhetorical subtleties. For example, when Austin referred to Cambrian outcrops, he described them as rocks that are “called Cambrian.” It’s an odd phrasing, allowing use of the proper geologic term while subtly denying its implications. In one instance, when Austin was asked by a trip attendee about the age of a rock unit, he responded somewhat cryptically, “Wherever you want to go there.” Such phrasing was telling, if you knew what to listen for.

Subtext about the age of formations was a big part of the Young-Earth Creationist rhetoric on the trip. As we moved on to each field trip stop, a narrative began to emerge: the creationist concept of Noah’s Flood as explanation for the outcrops. Although no one uttered the words “Noachian Flood,” the guides’ descriptions of the geology were revealing and rather coy. For example, at the first stop—a trail off Highway 24 near Manitou Spring—Austin stated that the configuration of the units was “the same over North America,” and had been formed by a massive marine transgression. “Whatever submerged the continent,” Austin went on, it must have been huge in scale.

Here, a charitable reading of the field trip might be that the believers in geology were taking in the sights and interpreting the evidence with the (scientific) inferential machinery of geology, while the young earth creationists were taking in the very same sights and interpreting the evidence with the (religious) inferential machinery of young earth creationism. But, Prothero argues that there’s more than this going on:

Sadly, the real problem here is that YEC “geologists” come back from this meeting falsely bragging that their “research” was enthusiastically received, and that they “converted” a lot of people to their unscientific views. As Newton pointed out, they will crow in their publicity that they are attending regular professional meetings and presenting their research successfully. For those who don’t know any better, it sounds to the YEC audience like they are conventional geologists doing real research and that they deserve to be taken seriously as geologists—even though every aspect of their geology is patently false (see Chapter 3 in my 2007 Evolution book). And so, once more the dishonesty of the YEC takes advantage of the openness and freedom of the scientific community to exploit it to their own ends, and abuse the privilege of open communication to push anti-scientific nonsense on the general population that doesn’t know the difference.

Prothero notes (as does Marcus Ross in his comments on this blog) that the research by young earth creationists that is well received by the geological community is completely conventional, using only the inferential machinery of geoscience and making no use of the assumptions of young earth creationism. But presenting work (or leading a field trip) with a young earth creationist subtext (i.e., possibly these observations can be interpreted as evidence of a really big flood of some kind …) to an audience of geologists, and then spinning a lack of loud objections to a conclusion you didn’t explicitly present as if it were endorsement of that conclusion by the geologists is a dishonest move.

Honest engagement with a scientific community means putting your evidential and methodological cards on the table. It means, if you want to know whether other scientists would endorse (or even accept as not-totally-implausible) a particular conclusion, you put that particular conclusion out there for their examination. All you can reasonably conclude from the fact that other scientists didn’t shoot down a conclusion that you never openly stated is that those other scientists did not read your mind.

Possibility 4: It’s wrong for Ross to maintain his young earth creationist beliefs after the thorough exposure to scientific theories, evidence, and methodology that he received in his graduate training in geosciences.

Learning to be a scientist means, among other things, learning scientific patterns of thought, scientific standards for evaluating knowledge claims, and scientific methods for generating and testing new knowledge claims. Such immersion in the tribe of science and in the activity of scientific research, some might argue, should have driven the young earth creationist beliefs right out of Marcus Ross’s head.

Maybe we could reasonably expect this outcome if his young earth creationist beliefs depended on the same kind of evidence and inferential machinery as do scientific claims. However, they do not. Young earth creationist claims are not scientific claims, but faith-based claims. Young earth creationism sets itself apart from the inferential structure of science — if its adherents are persuaded that a claim is credible on the basis of faith (e.g., in a particular reading of scriptures), then no arrangement of empirical evidence could be sufficient to reliably undermine that adherence.

To be sure, this means that a scientist like Marcus Ross who is also a young earth creationist has non-scientific beliefs in his head. But, if we’re going to assert that scientific training ought, when done right, to purge the trainee of all non-scientific beliefs, then there is precious little evidence that scientific training is being done right anywhere.

There are quite a lot of scientists with non-scientific beliefs that persist. They have beliefs about who would be the best candidate to vote for in a presidential election, about what movie will be most entertaining, about what entree at the restaurant will be most delicious and nutritious. They have beliefs about whether the people they care for also care for them, and about whether their years of toil on particular research questions will make the world a better place (or, more modestly, whether they will have been personally fulfilling). Many of these beliefs are hunches, no better supported by the available empirical evidence than are the beliefs routinely formed by non-scientists.

This is not to say that the evidence necessarily argues against holding these beliefs. Rather, the available evidence may be so sparse as to be inadequate to support or undermine the belief. Still, scientific training does not prevent the person so trained from forming beliefs in these instances — and this may be useful, especially since there are situations where sitting on the fence waiting for decisive evidence is not the best call. (Surely we have more complete evidence about what kind of president Richard M. Nixon would make now than was available in November 1968, but it’s too late for us to use that evidence to vote in the 1968 presidential election.)

If harboring non-scientific beliefs is a crime, we’d be hard pressed to find a single member of the tribe of science who is not at least a little guilty.

Maybe it’s more reasonable to hold scientists accountable to recognize which of their beliefs are well supported by empirical evidence and which are not. A bit of reflection is probably sufficient to help scientists sort out the scientific beliefs from the non-scientific beliefs. And, to the extent that Marcus Ross wants to be a practicing member of the tribe of science (or even an intellectually honest outsider with enough scientific training that he ought to be able to tell the difference), it’s just as reasonable to hold him accountable for recognizing which sort of beliefs constitute his young earth creationism.

Being able to tell the difference between scientific and non-scientific beliefs is not only a more attainable goal for human scientists than having only scientific beliefs, but it is a much easier standard for the tribe of science to police, since it involves examining what kinds of claims a person asserts as backed by the science — something other scientists can check by examining evidence and arguments — rather than examining what’s in a person’s head.

These possibilities strike me as the most likely candidates for what’s bugging science-minded people about Marcus Ross. If I’ve missed what’s bugging you about him, please make your case in the comments.

Evaluating scientific claims (or, do we have to take the scientist’s word for it?)

Recently, we’ve noted that a public composed mostly of non-scientists may find itself asked to trust scientists, in large part because members of that public are not usually in a position to make all their own scientific knowledge. This is not a problem unique to non-scientists, though — once scientists reach the end of the tether of their expertise, they end up having to approach the knowledge claims of scientists in other fields with some mixture of trust and skepticism. (It’s reasonable to ask what the right mixture of trust and skepticism would be in particular circumstances, but there’s not a handy formula with which to calculate this.)

Are we in a position where, outside our own narrow area of expertise, we either have to commit to agnosticism or take someone else’s word for things? If we’re not able to directly evaluate the data, does that mean we have no good way to evaluate the credibility of the scientist pointing to the data to make a claim?

This raises an interesting question for science journalism, not so much about what role it should play as what role it could play.

If only a trained scientist could evaluate the credibility of scientific claims (and then perhaps only in the particular scientific field in which one was trained), this might reduce science journalism to a mere matter of publishing press releases, or of reporting on scientists’ social events, sense of style, and the like. Alternatively, if the public looked to science journalists not just to communicate the knowledge claims various scientists are putting forward but also to do some evaluative work on our behalf — sorting out credible claims and credible scientists from the crowd — we might imagine that good science journalism demands extensive scientific training (and that we probably need a separate science reporter for each specialized area of science to be covered).

In an era where media outlets are more likely to cut the science desk than expand it, pinning our hopes on legions of science-Ph.D.-earning reporters on the science beat might be a bad idea.

I don’t think our prospects for evaluating scientific credibility are quite that bad.

Scientific knowledge is built on empirical data, and the details of the data (what sort of data is relevant to the question at hand, what kind of data we can actually collect, what techniques are better or worse for collecting the data, how we distinguish data from noise, etc.) can vary quite a lot in different scientific disciplines, and in different areas of research within those disciplines. However, there are commonalities in the basic patterns of reasoning that scientists in all fields use to compare their theories with their data. Some of these patterns of reasoning may be rather sophisticated, perhaps even non-intuitive. (I’m guessing certain kinds of probabilistic or statistical reasoning might fit this category.) But others will be the patterns of reasoning that get highlighted when “the scientific method” is taught.

In other words, even if I can’t evaluate someone else’s raw data to tell you directly what it means, I can evaluate the way that data is used to support or refute claims. I can recognize logical fallacies and distinguish them from instances of valid reasoning. Moreover, this is the kind of thing that a non-scientist who is good at critical thinking (whether a journalist or a member of the public consuming a news story) could evaluate as well.

One way to judge scientific credibility (or lack thereof) is to scope out the logical structure of the arguments a scientist is putting up for consideration. It is possible to judge whether arguments have the right kind of relationship to the empirical data without wallowing in that data oneself. Credible scientists can lay out:

  • Here’s my hypothesis.
  • Here’s what you’d expect to observe if the hypothesis is true. Here, on the other hand, is what you’d expect to observe if the hypothesis is false.
  • Here’s what we actually observed (and here are the steps we took to control the other variables).
  • Here’s what we can say (and with what degree of certainty) about the hypothesis in the light of these results.
  • Here’s the next study we’d like to do to be even more sure.

And, not only will the logical connections between the data and what is inferred from them look plausible to the science writer who is hip to the scientific method, but they ought to look plausible to other scientists — even to scientists who might prefer different hypotheses, or different experimental approaches. If what makes something good science is its epistemology — the process by which data are used to generate and/or support knowledge claims — then even scientists who may disagree with those knowledge claims should still be able to recognize the patterns of reasoning involved as properly scientific. This suggests a couple more things we might ask credible scientists to display:

  • Here are the results of which we’re aware (published and unpublished) that might undermine our findings.
  • Here’s how we have taken their criticisms (or implied criticisms) seriously in evaluating our own results.

If the patterns of reasoning are properly scientific, why wouldn’t all the scientists agree about the knowledge claims themselves? Perhaps they’re taking different sets of data into account, or they disagree about certain of the assumptions made in framing the question. The important thing to notice here is that scientists can disagree with each other about experimental results and scientific conclusions without thinking that the other guy is a bad scientist. The hope is that, in the fullness of time, more data and dialogue will resolve the disagreements. But good, smart, honest scientists can disagree.

This is not to say that there aren’t folks in lab coats whose thinking is sloppy. Indeed, catching sloppy thinking is the kind of thing you’d hope a good general understanding of science would help someone (like a scientific colleague, or a science journalist) to do. At that point, of course, it’s good to have backup — other scientists who can give you their read on the pattern of reasoning, for example. And, to the extent that a scientist — especially one talking “on the record” about the science (whether to a reporter or to other scientists or to scientifically literate members of the public) — displays sloppy thinking, that would tend to undermine his or her credibility.

There are other kinds of evaluation you can probably make of a scientist’s credibility without being an expert in his or her field. Examining a scientific paper to see if the sources cited make the claims that they are purported to make by the paper citing them is one way to assess credibility. Determining whether a scientist might be biased by an employer or a funding source may be harder. But there, I suspect many of the scientists themselves are aware of these concerns and will go the extra mile to establish their credibility by taking the possibility that they are seeing what they want to see very seriously and testing their hypotheses fairly stringently so they can answer possible objections.

It’s harder still to get a good read on the credibility of scientists who present evidence and interpretations with the right sort of logical structure but who have, in fact, fabricated or falsified that evidence. Being wary of results that seem too good to be true is probably a good strategy here. Also, once a scientist is caught in such misconduct, it’s entirely appropriate not to trust another word that comes from his or her mouth.

One of the things fans of science have tended to like is that it’s a route to knowledge that is, at least potentially, open to any of us. It draws on empirical data we can get at through our senses and on our powers of rational thinking. As it happens, the empirical data have gotten pretty complicated, and there’s usually a good bit of technology between the thing in the world we’re trying to observe and the sense organs we’re using to observe it. However, those powers of rational thinking are still at the center of how the scientific knowledge gets built. Those powers need careful cultivation, but to at least a first approximation they may be enough to help us tell the people doing good science from the cranks.

What a scientist knows about science (or, the limits of expertise).

In a world where scientific knowledge might be useful in guiding decisions we make individually and collectively, one reason non-scientists might want to listen to scientists is that scientists are presumed to have the expertise to sort reliable knowledge claims from snake oil. If you’re not in the position to make your own scientific knowledge, your best bet might be to have a scientific knowledge builder tell you what counts as good science.

But, can members of the public depend on any scientist off the street (or out of the lab) to vet all the putative scientific claims for credibility?

Here, we have to grapple with the relationship between Science and particular scientific disciplines — and especially with the question of whether there is enough of a common core between different areas of science that scientists trained in one area can be trusted to recognize the strengths and weaknesses of work in another scientific area. How important is all that specializing that research scientists do? Can we trust that, to some extent, all science follows the same rules, thus equipping any scientist to weigh in intelligently about any given piece of it?

It’s hard to give you a general answer to that question. Instead, as a starting point for discussion, let me lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.

As someone trained in a science, I am qualified:

  1. to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
  2. to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
  3. to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
  4. perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
  5. to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
  6. to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 count as support for alternative theory T2).
  7. to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
  8. to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
  9. to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).

This list of my qualifications is an expression of my comfort level more than anything else. It’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends is part of being a responsible scientist.

So, what kind of help can a scientist give the public in evaluating what is presented as scientific knowledge? What kind of trouble can a scientist encounter in trying to sort out the good from the bad science for the public? Does the help scientists offer here always help?

Trust me, I’m a scientist.

In an earlier post, I described an ideal of the tribe of science that the focus of scientific discourse should be squarely on the content — the hypotheses scientists are working with, the empirical data they have amassed, the experimental strategies they have developed for getting more information about our world — rather than on the particular details of the people involved in this discourse. This ideal is what sociologist of science Robert K. Merton* described as the “norm of universalism”.

Ideals, being ideals, can be hard to live up to. Anonymous peer review of scientific journal articles notwithstanding, there are conversations in the tribe of science where it seems to matter a lot who is talking, not just what she’s saying about the science. Some scientists were trained by pioneers in their fields, or hired to work in prestigious and well-funded university departments. Some have published surprising results that have set in motion major changes in the scientific understanding of a particular phenomenon, or have won Nobel Prizes.

The rest can feel like anonymous members in a sea of scientists, doing the day to day labor of advancing our knowledge without benefit of any star power within the community. Indeed, probably lots of scientists prefer the task of making the knowledge, having no special need to have their names widely known within their fields and piled with accolades.

But there’s a peculiar consequence of the idea that scientists are all in the knowledge-building trenches together, focused on the common task rather than on self-aggrandizement. When scientists are happily ensconced in the tribe of science, very few of them take themselves to be stars. But when the larger society, made up mostly of non-scientists, encounters a scientist — any scientist — that larger society might take him to be a star.

Merton touched on this issue when he described another norm of the tribe of science, disinterestedness. One way to think about the norm of disinterestedness is that scientists aren’t doing science primarily to get the big bucks, or fame, or attractive dates. Merton’s description of this community value is a bit more subtle. He notes that disinterestedness is different from altruism, and that scientists needn’t be saints.

The best way to understand disinterestedness might be to think of how a scientist working within her tribe is different from an expert out in the world dealing with laypeople. The expert, knowing more than the layperson, could exploit the layperson’s ignorance or his tendency to trust the judgment of the expert. The expert, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold.

The scientist working within the tribe of science can expect no such advantage. Thus, trying to put one over on other scientists is a strategy that shouldn’t get you far. By necessity, the knowledge claims you advance are going to be useful primarily in terms of what they add to the shared body of scientific knowledge, if only because your being accountable to the other scientists in the tribe means that there is no value added to the claims from using them to play your scientific peers for chumps.

Merton described situations in which the bona fides of the tribe of science were used in the service of non-scientific ends:

Science realizes its claims. However, its authority can be and is appropriated for interested purposes, precisely because the laity is often in no position to distinguish spurious from genuine claims to such authority. The presumably scientific pronouncements of totalitarian spokesmen on race or economy or history are for the uninstructed laity of the same order as newspaper reports of an expanding universe or wave mechanics. In both instances, they cannot be checked by the man-in-the-street and in both instances, they may run counter to common sense. If anything, the myths will seem more plausible and are certainly more comprehensible to the general public than accredited scientific theories, since they are closer to common-sense experience and to cultural bias. Partly as a result of scientific achievements, therefore, the population at large becomes susceptible to new mysticisms expressed in apparently scientific terms. The borrowed prestige of science bestows prestige on the unscientific doctrine. (p. 277)

The success of science — the concentrated expertise of the tribe — means that those outside of it may take “scientific” claims at face value. Unable to make an independent evaluation of their credibility, lay people can easily fall prey to a wolf in scientist’s clothing, to a huckster assumed to be committed first and foremost to the facts (as scientists try to be) who is actually distorting them to look after his own ends.

This presents a serious challenge for non-scientists — and for scientists, too.

If the non-scientist can’t determine whether a purportedly scientific claim is a good one — whether, for example, it is supported by the empirical evidence — the non-scientist has to choose between accepting that claim on the authority of someone who claims to be a scientist (which in itself raises another evaluative problem for the non-scientist — what kind of credentials do you need to see from the guy wearing the lab coat to believe that he’s a proper scientist?), or setting aside all putative scientific claims and remaining agnostic about them. You trust that the “Science” label on a claim tells you something about its quality, or you recognize that it conveys even less useful information to you than a label that says, “Now with Jojoba!”

If late-night infomercials and commercial websites are any indication, there are no strong labeling laws covering what can be presented as “Science”, at least in a sales pitch aimed at the public at large.** This leaves open the possibility that the claims the guy in the white lab coat says are backed by Science would not be recognized by other scientists as backed by science.

The problem this presents for scientists is two-fold.

On the one hand, scientists are trying to get along in a larger society where some of what they discover in their day jobs (building knowledge) could end up being relevant to how that larger society makes decisions. If we want our governments to set sensible policy as far as tackling disease outbreaks, or building infrastructure that won’t crumble in floods, or ensuring that natural resources are utilized sustainably, it would be good for that policy to be informed by the best relevant knowledge we have on the subject. Policy makers, in other words, want to be able to rely on science — something that scientists want, too (since usually they are working as hard as they are to build the knowledge so that the knowledge can be put to good use). But that can be hard to do if some members of the tribe of science go rogue, trading on their scientific credibility to sell something as science that is not.

Even if policy makers have some reasonable way to identify the people slapping the Science label on claims that aren’t scientific, there will be problems in a democratic society where the public at large can’t reliably tell scientists from purveyors of snake oil.

In such situations, the public at large may worry that anyone with scientific credentials could be playing them for suckers. Scientists whom they don’t already know by reputation may be presumed to be looking out for their own interests rather than advancing scientific knowledge.

A public distrustful of scientists’ good intentions or trustworthiness in interactions with non-scientists will convey that distrust to the people making policy for them.

This means that scientists have a strong interest in identifying the members of the tribe of science who go rogue and try to abuse the public’s trust. People presenting themselves as scientists while selling unscientific claims are diluting the brand of Science. They undermine the reputation science has for building reliable knowledge. They undercut the claim other scientists make that, in their capacity as scientists, they hold themselves accountable to the way the world really is — to the facts, no matter how inconvenient they may be.

Indeed, if the tribe of science can’t make the case that it is serious about the task of building reliable knowledge about the world and using that knowledge to achieve good things for the public, the larger public may decide that putting up public monies to support scientific research is a bad idea. This, in turn, could lead to a world where most of the scientific knowledge is built with private money, by private industry — in which case, we might have to get most of our scientific knowledge from companies that actually are trying to sell us something.

_____
*Robert K. Merton, “The Normative Structure of Science,” in The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press (1979), 267-278.

**There are, however, rules that require the sellers of certain kinds of products to state clearly when they are making claims that have not been evaluated by the Food and Drug Administration.

Environmental impacts of what we eat: the difficulty of apples-to-apples comparisons.

When we think about food, how often do we think about what it’s going to do for us (in terms of nutrition, taste, satiety), and how often do we focus on what was required to get it to our tables?

Back when I was a wee chemistry student learning how to solve problems in thermodynamics, my teachers described the importance for any given problem of identifying the system and the surroundings. The system was the piece of the world that was the focus of the problem to be solved — on the page or the chalkboard (I’m old), it was everything inside the dotted line you drew to enclose it. The surroundings were outside that dotted line — everything else.

Those dotted lines we drew were very effective in separating the components that would get our attention from everything else — exactly what we needed to do in order to get our homework problems done on a deadline. But it strikes me that sometimes we can forget that what we’ve relegated to surroundings still exists out there in the world, and indeed might be really important for other questions that matter, too.

In recent years, there seems to be growing public awareness of food as something that doesn’t magically pop into existence at the supermarket or the restaurant kitchen. People now seem to recall that there are agricultural processes that produce food — and to have some understanding that these processes have impacts on other pieces of the world. The environmental impacts, especially, are on our minds. However, figuring out just what the impacts are is challenging, and this makes it hard for us to evaluate our choices with comparisons that are really apples-to-apples.

Building knowledge (and stuff) ethically: the principles of “Green Chemistry”.

Like other scientific disciplines, chemistry is in the business of building knowledge. In addition to knowledge, chemistry sometimes also builds stuff — molecules which didn’t exist until people figured out ways to make them.

Scientists (among others) tend to assume that knowledge is a good thing. There are instances where you might question this assumption — maybe when the knowledge is being used for some evil purpose, or when the knowledge has been built on your dime without giving you much practical benefit, or when the knowledge could give you practical benefit except that it’s priced out of your reach.

Even setting these worries aside, we should recognize that there are real costs involved in building knowledge. These costs mean that it’s not a sure thing that more knowledge is always better. Rather, we may want to evaluate whether building a particular piece of knowledge (or a particular new compound) is worth the cost.

In chemistry, these costs aren’t just a matter of the chemist’s time, or of the costs of the lab facilities and equipment. Some of these costs are directly connected to the chemical reagents being brought together in reactions that transform the starting materials into something new. These chemical reagents (in solid, liquid, or gas phase, pure or in mixtures or in solutions) all come from somewhere. The “somewhere” could be a source in nature, or a reaction conducted in the laboratory, or a reaction process conducted on a large scale in a factory.

Getting a reasonably pure chemical substance in the jar means sorting out the other stuff hanging around with that substance — impurities, leftover reactants from the reaction that makes the desired substance, “side-products” of the reaction that makes the desired substance. (A side-product is a lot like a side-effect, in that it’s produced by the reaction but it’s not the thing you’re actually trying to produce.) When you’re isolating the substance you’re after, that other stuff has to go somewhere. If there’s not a particular way to collect the other stuff and put it to some other use, that other stuff becomes chemical waste.

There’s a sense in which all waste is chemical waste, since everything in our world is made up of chemicals. The thing to watch with waste products from chemical reactions is whether these waste products will engage in further chemical reactions wherever you end up storing them. Or, if you’re not careful about how you store them, they might get into our air or water, or into plants and animals, where they might have undesired or unforeseen effects.

In recent years, chemists have been working harder to recognize that the chemicals they work with come from someplace and that the ones they generate in the course of their experiments need to end up someplace, and to think about more sustainable ways to build chemical compounds and chemical knowledge. A good place to see this thinking is in The Twelve Principles of Green Chemistry (here as set out by Anastas, P. T.; Warner, J. C. Green Chemistry: Theory and Practice, Oxford University Press: New York, 1998, p. 30):

  1. Prevention
    It is better to prevent waste than to treat or clean up waste after it has been created.
  2. Atom Economy
    Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product.
  3. Less Hazardous Chemical Syntheses
    Wherever practicable, synthetic methods should be designed to use and generate substances that possess little or no toxicity to human health and the environment.
  4. Designing Safer Chemicals
    Chemical products should be designed to effect their desired function while minimizing their toxicity.
  5. Safer Solvents and Auxiliaries
    The use of auxiliary substances (e.g., solvents, separation agents, etc.) should be made unnecessary wherever possible and innocuous when used.
  6. Design for Energy Efficiency
    Energy requirements of chemical processes should be recognized for their environmental and economic impacts and should be minimized. If possible, synthetic methods should be conducted at ambient temperature and pressure.
  7. Use of Renewable Feedstocks
    A raw material or feedstock should be renewable rather than depleting whenever technically and economically practicable.
  8. Reduce Derivatives
    Unnecessary derivatization (use of blocking groups, protection/deprotection, temporary modification of physical/chemical processes) should be minimized or avoided if possible, because such steps require additional reagents and can generate waste.
  9. Catalysis
    Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.
  10. Design for Degradation
    Chemical products should be designed so that at the end of their function they break down into innocuous degradation products and do not persist in the environment.
  11. Real-time Analysis for Pollution Prevention
    Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.
  12. Inherently Safer Chemistry for Accident Prevention
    Substances and the form of a substance used in a chemical process should be chosen to minimize the potential for chemical accidents, including releases, explosions, and fires.
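Principle 2, atom economy, has a simple quantitative form commonly attributed to Barry Trost: the molecular weight of the desired product divided by the combined molecular weights of all the reactants, times 100. Here is a minimal sketch in Python; the example reaction (acid-catalyzed dehydration of ethanol) and its textbook molecular weights are my illustration, not drawn from the Anastas and Warner text:

```python
def atom_economy(product_mw, reactant_mws):
    """Percent of the reactants' mass incorporated into the desired product.

    product_mw: molecular weight (g/mol) of the desired product.
    reactant_mws: molecular weights (g/mol) of all reactants in the
    balanced equation (one entry per molecule consumed).
    """
    return 100.0 * product_mw / sum(reactant_mws)

# Dehydration of ethanol: C2H5OH -> C2H4 + H2O
# Ethylene (28.05 g/mol) is the desired product; the water is waste.
print(round(atom_economy(28.05, [46.07]), 1))  # 60.9
```

About 39% of the starting material's mass ends up as waste here, which is exactly the kind of accounting that pushes a green chemist toward routes (like catalytic additions) where more of the reactants' atoms survive into the product.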

At first blush, these might look like principles developed by a group of chemists who just returned from an Earth Day celebration, what with their focus on avoiding hazardous waste and toxicity, favoring renewable resources over non-renewable ones, and striving for energy efficiency. Certainly, thoroughgoing embrace of “Green Chemistry” principles might result in less environmental impact due to extraction of starting materials, storage (or escape) of wastes, and so forth.

But these principles can also do a lot to serve the interests of chemists themselves.

For example, a reaction that can be conducted at ambient temperature and pressure requires less fancy equipment (i.e., equipment to maintain temperature and/or pressure at non-ambient conditions). It’s not just more energy efficient, it’s less of a hassle for the experimenter. Safer solvents are better for the environment and the public at large, but it’s usually the chemists working with the solvents who are at immediate risk when solvents are extremely flammable or corrosive or carcinogenic. And generating less hazardous waste means paying for the disposal of less hazardous waste — which means that there’s also an economic benefit to being more environmentally friendly.

What I find really striking about these principles of “Green Chemistry” is the optimism they convey that chemists are smart enough to figure out new and better ways to produce the compounds they want to produce. The challenge is to rethink the old strategies for making the compound of interest, strategies that might have relied on large amounts of non-renewable starting materials and generated lots of waste products at each intermediate step. Chemistry is a science that focuses on transformations, but part of its beauty is that there are multiple paths that might get us from starting materials to a particular product. “Green Chemistry” challenges its practitioners to use the existing knowledge base to find out what is possible, and to build new knowledge about these possibilities as chemists build new molecules.

And, to the extent that chemistry is in the business of finding new knowledge (rather than relying on established chemical knowledge as a master cook book), these twelve principles seem almost obvious. Given the choice, would you ever want to make a new compound for an application that had the desired function but maximized toxicity? Would you choose a synthetic strategy that generated more waste rather than less (and whose waste was less likely to break down into innocuous compounds rather than more)? Would you opt to perform the separation with a solvent that was more likely to explode if a less explosive solvent would do the trick? Probably not. Of course, you’d be on the lookout for a better way to solve the chemical problem — where “better” takes into account things like cost, experimental tractability, risks to the experimenter, and risks to the broader public (including our shared environment).

This is not to say that adhering to the principles of “Green Chemistry” would be sufficient to be an ethical chemist. Conceivably, one could follow all these principles and still fabricate, falsify, or plagiarize, for example. But in explicitly recognizing some of the costs associated with building chemical knowledge, and pushing chemists to minimize those costs, the principles of “Green Chemistry” do seem to honor chemists’ obligations to the welfare of the people with whom they are sharing a world.