Things to read on my other blog: #scio12 preparations, truthiness at NYT, and an interview with a chloroplast.

For those of you who mostly follow my writing here on “Doing Good Science,” I thought I should give you a pointer to some things I’ve posted so far this month (which is almost half-over already?!) on my other blog, “Adventures in Ethics and Science”. Feel free to jump in to the discussions in the comments over there. Or, if you prefer, go ahead and discuss them here.

The month kicked off with a bunch of posts looking forward to ScienceOnline 2012, which is next week. First, on the issue of what to pack:

Packing for #scio12: plague relief.
Packing for #scio12: what are you drinking?
Packing for #scio12: sharing space with others.
Packing for #scio12: plumbing the inky depths.

Then, a discussion of what’s special about an unconference: Looking ahead to #scio12: the nature of the unconference. In this post, I put a call out for contributions to the wikis for the two sessions I’ll be helping to moderate: one (with Amy Freitag) on “Citizens, experts, and science”, the other (with Christie Wilcox) on “Blogging Science While Female”. Those wiki pages are just calling out for ideas, questions, or useful links. (Your ideas, questions, or useful links! What are you waiting for?)

After that, my response to a recent blog post by the New York Times’s Public Editor: Straightforward answers to questions we shouldn’t even have to ask: New York Times edition.

Finally, courtesy of my elder offspring, Friday Sprog Blogging: Interview with a Chloroplast.

Lads’ mags, sexism, and research in psychology: an interview with Dr. Peter Hegarty (part 2).

In this post, I continue my interview with Dr. Peter Hegarty, a social psychologist at the University of Surrey and one of the authors of “‘Lights on at the end of the party’: Are lads’ mags mainstreaming dangerous sexism?”, which was published in The British Journal of Psychology in December. My detailed discussion of that paper is here. The last post presented part 1 of our interview, in which Dr. Hegarty answered questions about the methodology of this particular research, as well as about some of the broader methodological differences between research in psychology and in sciences that are focused on objects of study other than humans.

Janet Stemwedel: It’s been pointed out that the university students who seem to be the most frequent subjects of psychological research are WEIRD (Western, Educated, Industrialized, Rich, and Democratic). Is the WEIRDness of university students as subjects in this research something that should make us cautious about the strength of the conclusions we draw?  Or are university students actually a reasonably appropriate subject pool from the point of view of exploring how lads’ mags work?

Peter Hegarty: According to the historian Kurt Danziger in his book Constructing the Subject, students became an unmarked “normative” subject population for psychologists, at least in the United States, between the world wars. Since then, criticisms of over-reliance on student samples have been common (such as those of Quinn McNemar in the 1940s, or David Sears in the 1980s). Within the history of this criticism, perhaps what is most distinct about the recent argument about WEIRDness is that it draws on the developments in cultural psychology of the last 20 years or so. For this specific study, our rationale for studying young people on a campus was not only convenience; they are also the target market for these magazines, by virtue of their age, and by virtue of possessing the disposable income to purchase them.

May I take the time to offer a slightly broader perspective on the problem of under- and over-representation of social groups in psychology? The issue is not simply one of who gets included, and who does not. This is because groups can be disempowered and science compromised by being erased (as the WEIRD criticism presumes), and groups can be disempowered when they are consistently located within the psychologists’ gaze – as in Foucaultian disciplinary power. African-Americans are oversampled in the US literature on forensic psychology, but that literature is not anti-racist; it’s largely based on a “deficit” model of race (Carter & Forsythe, 2007). The issue is not simply one of inclusion or exclusion, but one of how inclusion happens, as sociologist Steven Epstein’s work on inclusive paradigms in medicine nicely shows.

In other experiments and content analyses, my colleagues and I have found that people spontaneously explain group differences by attending to lower-power groups more of the time. In our own research we have observed this pattern in scientists’ publications and in explanations produced in the lab with regard to race, gender, and sexuality, for example (Hegarty & Buechel, 2006; Hegarty & Pratto, 2004). On the face of it, this might lead to greater stereotyping of the lower-power “marked” group. Indeed, as Suzanne Bruckmüller’s work on linguistic framing subtly shows, once a group is positioned as “the effect to be explained” in an account of group differences, people tend to infer that the group has less power (Bruckmüller & Abele, 2010). Our work suggests that to trouble the “normative” status that WEIRD people occupy in our ontologies, inclusion is necessary but not sufficient. It’s also important to reframe our questions about difference to think concretely about normative groups. In the case of our lads’ mags research, we were heartened that people were prompted to reframe questions about the widespread problem of violence against women away from the small category of convicted rapists, to ask broader questions about how such violence is normalized.

JS: A lot of scientists seem to have a love/hate relationship with mass media. They want the public to understand their research and why it’s interesting and important, but media coverage sometimes gets the details badly wrong, or obliterates the nuance.  And, given the subject matter of your research (which the average person might reasonably connect to his or her own concerns more easily than anything we might learn about the Higgs boson), it seems like misunderstandings of what the research means could get amplified pretty quickly.  What has your experience been as far as the media coverage of your research?  Are there particular kinds of issues you’d like the public to grasp better when they read or hear about this kind of research?

PH: Your question touches on the earlier point about the difference between the human and natural sciences. Our work is caught up in “looping effects” as people interpret it for themselves, but the Higgs boson doesn’t care if the folks in CERN discover it or not. (I think, I’m no expert on sub-atomic physics!) Although some research that I released last year on sexist language got good coverage in the media (Hegarty, Watson, Fletcher & McQueen, 2011), the speed and scale of the reaction to the Horvath et al. (2011) paper was a new experience for me, so I am learning about the media as I go.

There is no hard and fast boundary between “the media” and “the public” who are ‘influenced’ by that media anymore; I’m not sure there ever was one. The somewhat ‘viral’ reaction to this work on social networking sites such as Twitter was visibly self-correcting in ways that don’t fit with social scientists’ theories that blame the media for beguiling the public. Some journalists misunderstood the procedures of Experiment 1 in our study, and it was misdescribed in some media sources. But on Twitter, folks were redirecting those who were reproducing that factual error to the Surrey website. Overall, watching the Twitter feeds reminded me most of the experience of giving a class of students an article to discuss and watching a very useful conversation emerge about what the studies had hypothesized, what they had found, how much you might conclude from the results, and what the policy implications might be. I am somewhat more optimistic about the affordances of social media for education as a result of this experience.

JS: Given the connection between your research questions in this research and actual features of our world that might matter to us quite a lot (like how young men view and interact with the women with whom they share a world), it seems like ultimately we might want to *use* what we learn from the research to make things better, rather than just saying, “Huh, that’s interesting.”  What are the challenges to moving from description to prescription here?  Are there other “moving parts” of our social world you think we need to understand better to respond effectively to what we learn from studies like these?

PH: Related to what I’ve said above, I would like people to see the research as a “red flag” about the range and character of media that young people now read, and which are considered “normal.” There are now numerous anecdotes on the web of people who have been prompted by this research to look at a lads’ mag for the first time – and been surprised or shocked by what they see. We are also in contact with some sex educators about how this work might be used to educate men for a world in which this range of media exists. Precisely because we think this research might have relevance for a broad range of people who care about the fact that people should have pleasure, intimacy, and sex without violence, bullying, and hatred, we have suggested that it should prompt investment in sex education rather than censorship.

In so doing, we are adopting an ‘incrementalist’ approach to people’s intelligence about sex and sexual literacy. Carol Dweck’s work shows that children and young people who believe their intelligence to be a fixed ‘entity’ do not fare as well academically as those who believe their intelligence might be something ‘incremental’ that can be changed through effort. Censorship approaches seem to us to be based on fear, and to assume a rather fixed limit to the possibilities of public discourse about sex. We do not make those assumptions, but we fear that they can become self-fulfilling prophecies.

JS: How do you keep your prescriptive hunches from creeping into the descriptive project you’re trying to do with your research?

PH: I’m not sure that it is possible or desirable to exclude subjectivity from science; your last question obliged me to move from description to prescription. It is sometimes striking how much many scientists want to be ‘above politics’ and influence policy, to advocate and remain value-neutral, to change the world, but not to intervene, etc. My thinking on this matter borrows more from Sandra Harding’s view of ‘strong objectivity,’ and particularly her idea that the science we get is affected by the range of people included in its production and the forms of social relationships in which they participate. I also think that Steven Shapin’s book A Social History of Truth is a useful albeit distal explanation of why the question of subjectivity in science is often seen as an affront to honour and the opposite of reasoned dispassionate discussion. In the UK, there is now an obligation on scientists to engage non-academic publics by reporting impact summaries to the government as part of national exercises for documenting research excellence. However, this policy can overlook the importance of two-way dialogue between academic and non-academic audiences about how we create different kinds of knowledge for different kinds of purposes. For those reasons, I’m grateful for the opportunity to participate in a more dialogical forum about science and ethics like this one.

Bibliography

Bruckmüller, S., & Abele, A. (2010). Comparison focus in intergroup comparisons: Who we compare to whom influences who we see as powerful and agentic. Personality and Social Psychology Bulletin, 36, 1424-1435.

Carter, R.T., & Forsythe, J.M. (2007). Examining race and culture in psychology journals: The case of forensic psychology. Professional Psychology: Theory and Practice, 38, 133-142.

Danziger, K. (1990). Constructing the Subject: Historical Origins of Psychological Research. Cambridge, UK: Cambridge University Press.

Dweck, C. (2000). Self-theories: Their Role in Motivation, Personality and Development. Psychology Press.

Epstein, S. (2007). Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press.

Foucault, M. (1978). Discipline and Punish: The Birth of the Prison. Trans. Alan Sheridan. New York: Random House.

Hacking, I. (1995). The looping effects of human kinds. In Dan Sperber, David Premack and Ann James Premack (Eds.), Causal Cognition: A Multi-Disciplinary Debate (pp. 351-383). Oxford, UK: Oxford University Press.

Harding, S. (1987). The Science Question in Feminism. Ithaca, NY: Cornell University Press.

Hegarty, P., & Buechel, C. (2006). Androcentric reporting of gender differences in APA journals: 1965-2004. Review of General Psychology, 10, 377-389.

Hegarty, P., & Pratto, F. (2004). The differences that norms make: Empiricism, social constructionism, and the interpretation of group differences. Sex Roles, 50, 445-453.

Hegarty, P.J., Watson, N., Fletcher, L., & McQueen, G. (2011). When gentlemen are first and ladies are last: Effects of gender stereotypes on the order of romantic partners’ names. British Journal of Social Psychology, 50, 21-35.

Horvath, M.A.H., Hegarty, P., Tyler, S., & Mansfield, S. (2011). “Lights on at the end of the party”: Are lads’ mags mainstreaming dangerous sexism? British Journal of Psychology. Available from http://onlinelibrary.wiley.com/doi/10.1111/j.2044-8295.2011.02086.x/abstract

McNemar, Q. (1940). Sampling in psychological research. Psychological Bulletin, 37, 331-365.

Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51, 515-530.

Shapin, S. (1994). A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago: University of Chicago Press.

Lads’ mags, sexism, and research in psychology: an interview with Dr. Peter Hegarty (part 1).

Back in December, there was a study that appeared in The British Journal of Psychology that got a fair amount of buzz. The paper (Horvath, M.A.H., Hegarty, P., Tyler, S. & Mansfield, S., “‘Lights on at the end of the party’: Are lads’ mags mainstreaming dangerous sexism?” British Journal of Psychology. DOI:10.1111/j.2044-8295.2011.02086.x) looked at the influence that magazines aimed at young men (“lads’ mags”) might have on how the young people who read them perceive their social reality. Among other things, the researchers found that the subjects in the study found the descriptions of women given by convicted sex offenders and by lads’ mags well nigh indistinguishable, and that when a quote was identified as coming from a lads’ mag (no matter what its actual source), subjects were more likely to say that they identified with the view it expressed than if the same quote was identified as coming from a rapist.

I wrote about the details of this research in a post on my other blog.

One of the authors of the study, Dr. Peter Hegarty, is someone I know a little from graduate school (as we were in an anthropology of science seminar together one term). He was gracious enough to agree to an interview about this research, and to answer some of my broader questions (as a physical scientist turned philosopher) about what doing good science looks like to a psychologist. Owing to its length, I’m presenting the interview in two posts, this one and one that will follow it tomorrow.

Janet Stemwedel: Is there something specific that prompted this piece of research — a particular event, or the Nth repetition of a piece of “common wisdom” that made it seem like it was time to interrogate it?  Or is this research best understood as part of a broader project (perhaps of identifying pieces of our social world that shape our beliefs and attitudes)?

Peter Hegarty: We came to this research for different reasons. Miranda [Horvath] had been working more consistently on the role of lads’ mags in popular culture than I had been (see Coy & Horvath, 2011). Prompted by another student’s interests, I had published a very short piece earlier this year on the question of representations of ‘heteroflexible’ women in lads’ mags (Hegarty & Buechel, 2011). The two studies reported in Horvath, Hegarty, Tyler & Mansfield (2011) were conducted as Suzannah Tyler and Sophie Mansfield’s M.Sc. dissertations in Forensic Psychology, a course provided jointly by the University of Surrey and Broadmoor Hospital. Miranda and I took the lead on writing up the research after Miranda moved to Middlesex University in 2010.

JS: When this study was reported in the news, as the Twitters were lighting up with discussion about this research, some expressed concern that the point of the research was to identify lads’ mags as particularly bad (compared to other types of media), or as actually contributing to rapes.  Working from the information in the press release (because the research paper wasn’t quite out yet), there seemed to be some unclarity about precisely what inferences were being drawn from the results and (on the basis of what inferences people thought you *might* be drawing) about whether the research included appropriate controls — for example, quotes about women from The Guardian, or from ordinary-men-who-are-not-rapists.  Can you set us straight on what the research was trying to find out and on what inferences it does or does not support?  And, in light of the hypotheses you were actually testing, can you discuss the issue of experimental controls?

PH: Our research was focused on lads’ mags – rather than other media – because content analysis research had shown that those magazines were routinely sexist, operated in an advice-giving mode, and often dismissed their social influence. This is not the case – as far as I know – with regard to The Guardian. So there was a rationale, grounded in prior research, to focus on lads’ mags. We hoped to test our hypothesis that lads’ mags might be normalizing hostile sexism. This idea hung on two matters: is there an overlap in the discourse of lads’ mags and something that most people would accept as hostile sexism? And does that content appear more acceptable to young men when it appears to come from a lads’ mag? The two studies mapped onto these goals. In one, we found that young women and men couldn’t detect the source of a quote as coming from a convicted rapist’s interview or a lads’ mag. In another, young men identified more with quotes that they believed to have come from lads’ mags rather than convicted rapists.

JS: While we’re on the subject of controls, it strikes me that good experimental design in psychological research is probably different in some interesting ways from good experimental design in, say, chemistry.  What are some misconceptions those of us who have more familiarity with the so-called “hard sciences” have about social science research?  What kind of experimental rigor can you achieve without abandoning questions about actual humans-in-the-world?

PH: You are right that these sciences might have different ontologies, because psychology is a human science. There are a variety of perspectives on this, with scholars such as Ian Hacking arguing for a separate ontology of the human sciences and more postmodern authors such as Bruno Latour arguing against distinctions between humans and things. Generally, I would be loath to describe differences between the sciences in terms of the metaphor of “hardness,” because the term is loaded with implicature. First, psychology is a potentially reflexive science about people, conducted by people, and is characterized by what the philosopher Ian Hacking calls “looping effects”: people’s thoughts, feelings and behaviours are themselves influenced by psychological theories about them. Second, measurement in psychology is more often dependent on normalization and relative judgment (as in an IQ test, or a 7-point Likert item on a questionnaire, for example). Third, there is a lot of validity to the Foucaultian argument that the “psy-disciplines” have often been used in the service of the state, to divide people into categories of “normal” and “abnormal” people, so that different people might be treated very differently without offending egalitarian ideologies. Much of clinical psychology and testing takes this form.

Critics of psychology often stop there. By so doing, they overlook the rich tradition within psychology of generating knowledge that troubles forms of normalization, by suggesting that the distinction between the “normal” and the “abnormal” is not as firm as common sense suggests. Studies in this tradition might include Evelyn Hooker’s (1957) demonstration – from that dark era when homosexuality was considered a mental illness – that there are no differences in the responses of gay and straight men to personality tests. One might also include David Rosenhan’s (1973) study in which ordinary people managed to convince psychiatrists that they were schizophrenic. A third example might be stereotype threat research (e.g., by Claude Steele and Joshua Aronson, 1995), which shows that the underperformance of African Americans on some standardized tests reflects not genuine ability, but a situational constraint introduced by testing conditions. Like these studies, we would hope ours would trouble people’s sense of what they take for granted about differences between people. In particular, we hope that people will reconsider what they think they know about “extreme” sexism – which leads to incarceration – and “normal” sexism, which it is now typical for young men to consume. I would urge academic critics of psychology – particularly those that focus on its complicity with Foucaultian disciplinary power, and the power of the state more generally – to develop more critiques that can account for such empirical work.

For the last half-century, “rigor” in empirical psychology has been organized by the language of validity and reliability of measurement (Cronbach & Meehl, 1955). Psychologists also tend to be Popperians, who construct “falsifiable” theories and use Fisherian inferential statistics to construct experiments that afford the possibility of falsification. However, inferential norms are changing in the discipline for three reasons. First, the rise of neuroscience has led to a more inductive form of inference in which mapping and localization play a greater role in scientific explanation. Second, social psychologists are increasingly engaging with structural equation modelling and offering confirmatory models of social processes. Third, there is “statistical reform” in psychology, away from the ritual of statistical significance testing toward making variability more transparent through the reporting of confidence intervals, effect sizes, and exact significance values. See Spellman (2012) for one very recent discussion of what’s happening within the genre of scientific writing in psychology around retaining rigor and realism in psychological science.
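To make the “statistical reform” point concrete: the reform movement favors reporting an effect size and a confidence interval rather than a bare verdict of “significant.” Here is a minimal Python sketch of that style of reporting, using invented Likert-style ratings (not data from the study) with a pooled-SD Cohen’s d and a normal-approximation interval:

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def ci_mean_diff(group_a, group_b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    na, nb = len(group_a), len(group_b)
    se = math.sqrt(statistics.variance(group_a) / na +
                   statistics.variance(group_b) / nb)
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    return (diff - z * se, diff + z * se)

# Invented 7-point Likert ratings from two hypothetical conditions:
mag_source = [5, 6, 4, 5, 6, 5, 4, 6]
rapist_source = [3, 4, 2, 3, 4, 3, 2, 4]

print("Cohen's d:", cohens_d(mag_source, rapist_source))
print("95% CI for mean difference:", ci_mean_diff(mag_source, rapist_source))
```

The point of the idiom is that the interval and effect size convey how big the difference is and how precisely it was estimated, information that a lone p-value hides.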

JS: One thing that struck me in reading the paper was that instruments have been developed to measure levels of sexism.  Are these measures well-accepted within the community of research psychologists?  (I am guessing that if the public even knew about them, they would be pretty controversial in some quarters … maybe the very quarters whose denizens would get high scores on these measures!)

PH: We used two well-established measures – the Ambivalent Sexism Inventory and the AMMSA – and one measure of endorsement of lads’ mags that we developed ourselves for the study. In the article, we describe some of the previous findings of other researchers who have used these scales to examine individual differences in responses to vignettes about sexual violence. We feel more confident of the measure we developed ourselves because it was highly correlated with all the other measures of sexism and because it was highly correlated with men’s identification with quotes from rapists and from lads’ mags. In other words, we followed the logic of psychologists such as Lee Cronbach, Paul Meehl, and Donald Campbell for establishing and developing the “construct validity” of the empirical scales.
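The convergent-validity logic here – a new scale earns trust by correlating highly with established measures of the same construct – comes down to computing correlations between score lists. A small sketch in Python, with invented respondent scores (not data from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores: a hypothetical new "lads' mags endorsement" scale
# vs. an established sexism measure, for eight hypothetical respondents.
new_scale = [2, 5, 3, 6, 4, 7, 1, 5]
established = [1, 4, 3, 5, 4, 6, 2, 4]

print("r =", pearson_r(new_scale, established))
```

A high r between the new scale and the established one is evidence (not proof) that both instruments are tapping the same underlying construct; construct validation in practice accumulates many such correlations.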

* * * * *

Tomorrow, in the second part of my interview with Peter Hegarty, we discuss the WEIRD-ness of college students as subjects for psychological research, how to go from description to prescription, and what it’s like for scientists to talk about their research with the media in the age of Twitter. Stay tuned!

Bibliography

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Coy, M., & Horvath, M.A.H. (2011). ‘Lads mags’, young men’s attitudes towards women and acceptance of myths about sexual aggression. Feminism & Psychology, 21, 144-150.

Foucault, M. (1978). Discipline and Punish: The Birth of the Prison. Trans. Alan Sheridan. New York: Random House.

Hacking, I. (1995). The looping effects of human kinds. In Dan Sperber, David Premack and Ann James Premack (Eds.), Causal Cognition: A Multi-Disciplinary Debate (pp. 351-383). Oxford, UK: Oxford University Press.

Hegarty, P., & Buechel, C. (2011). “What Blokes Want Lesbians to be”: On FHM and the socialization of pro-lesbian attitudes among heterosexual-identified men. Feminism & Psychology, 21, 240-247.

Hooker, E. (1957). The adjustment of the male overt homosexual. Journal of Projective Techniques, 21, 18-31.

Horvath, M.A.H., Hegarty, P., Tyler, S., & Mansfield, S. (2011). “Lights on at the end of the party”: Are lads’ mags mainstreaming dangerous sexism? British Journal of Psychology. Available from http://onlinelibrary.wiley.com/doi/10.1111/j.2044-8295.2011.02086.x/abstract

Latour, B. (1993). We Have Never Been Modern. Cambridge, MA: Harvard University Press.

Rosenhan, D.L. (1973). On being sane in insane places. Science, 179, 250-258.

Spellman, B.A. (2012). Introduction to the special section: Data, data everywhere... especially in my file drawer. Perspectives on Psychological Science, 7, 58-59.

Steele, C., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69, 797-811.

Help high school “nerds” visit the Large Hadron Collider.

Last week, I got a really nice email, and a request, from a reader. She wrote:

I am a high school senior and an avid follower of your blog. I am almost definitely going to pursue science in college – either chemistry, physics, or engineering; I haven’t quite decided yet! I am the editor of my school’s newspaper, and I frequently write about science topics; I find science journalism interesting and possibly will pursue it as a career. 

I’m writing because this spring, 32 physics students from my high school will hopefully be taking a trip to the Large Hadron Collider at CERN in Geneva. We are extremely excited to make the trip, as it will allow us to glimpse some of the most groundbreaking physics research in the world. Twenty-two of the 32 students going are girls, and we are all involved with the physics department at our school. Women are overwhelmingly outnumbered in the science classes at my school, especially the tougher Advanced Placement classes; thus, taking this trip with a majority of women feels like a triumph.

My correspondent is, this year, the president of her high school’s science club, which is affectionately called “BACON: the Best All-around Club of Nerds”. If you look at the BACON website, you will see that they do some pretty neat stuff. They field a bunch of teams for competitions like the Science Olympiad, Zero Robotics, and the Spirit of Innovation Challenge. And, they launch weather balloons to capture video and still photographs in a near-space environment, have a day of launching model rockets and flying model airplanes, and have created a giant tank of oobleck to run across.

Basically, the kind of science-y stuff that might make high school not just tolerable but fun, which I think is a pretty big deal.

Here’s where we get to the request.

The planned high school trip bringing the 32 students from Virginia to CERN will be exciting, but expensive. So, as students have come to do for pretty much every field trip, the BACON members are doing some fundraising. Here’s their fundraising page, from which we learn:

As we speak, scientists at CERN are conducting groundbreaking research and rewriting the science textbooks for future generations. It is imperative that our students gain an interest and understanding in such endeavors. A two-day tour of CERN will surely aid in our students’ comprehension of particle physics, the study of the mechanisms and interactions that underlie all chemical, biological, and cosmological processes. But more importantly, through exposure to the leading edge of physics research, this trip is intended to excite students about scientific progress and demonstrate the power of experimentation and collaboration outside of the classroom. …

We need money to cover the cost of travel, lodging, food, and tours. Specifically, the cost breakdown per student is as follows: $1000 for travel; $300 for meals; $300 for lodging; $100 for tours and exhibits. Thirty-two students are scheduled to attend, and without fundraising the total cost is $1700 per student. Unfortunately, not all students can afford this. Any donations are welcome to lower the per-student cost and facilitate this trip for all who want to go!

For donations of various sizes, they are offering perks ranging from thank-you cards and pictures of the trip, to signed T-shirts, to something special from the CERN gift shop, to a thank-you video posted on YouTube.

If you want to help but can’t spare the cash for a monetary donation, you may still be able to help these plucky science students make their CERN trip a reality:

Tell your friends! Share this link with others: indiegogo.com/baconatcern. There are also other ways to help us besides monetary donations. Do you have any objects, gift certificates, coupons, or other items you could donate for a raffle? Do you have an idea for a fundraising event we could host? If you want to get involved, please email us: chsbacon@gmail.com. We are really looking forward to this amazing opportunity, and we appreciate any help you can provide. Thank you!

I know I’m looking forward to living vicariously through this group (since no doubt I’ll be grading mountains of papers when they’re scheduled to tour the LHC). If you want to pay some science enthusiasm forward to the next generation, here’s one way to do it.

Meanwhile, I will inquire about whether the BACONites can share some highlights of their trip (and their preparations for it) here.

The Research Works Act: asking the public to pay twice for scientific knowledge.

There’s been a lot of buzz in the science blogosphere recently about the Research Works Act, a piece of legislation introduced in the U.S. Congress that may have big impacts on open access publishing of scientific results. John Dupuis has an excellent round-up of posts on the subject. I’m going to add my two cents on the overarching ethical issue.

Here’s the text of the Research Works Act:

No Federal agency may adopt, implement, maintain, continue, or otherwise engage in any policy, program, or other activity that–

(1) causes, permits, or authorizes network dissemination of any private-sector research work without the prior consent of the publisher of such work; or

(2) requires that any actual or prospective author, or the employer of such an actual or prospective author, assent to network dissemination of a private-sector research work. …

In this Act:

(1) AUTHOR- The term ‘author’ means a person who writes a private-sector research work. Such term does not include an officer or employee of the United States Government acting in the regular course of his or her duties.

(2) NETWORK DISSEMINATION- The term ‘network dissemination’ means distributing, making available, or otherwise offering or disseminating a private-sector research work through the Internet or by a closed, limited, or other digital or electronic network or arrangement.

(3) PRIVATE-SECTOR RESEARCH WORK- The term ‘private-sector research work’ means an article intended to be published in a scholarly or scientific publication, or any version of such an article, that is not a work of the United States Government (as defined in section 101 of title 17, United States Code), describing or interpreting research funded in whole or in part by a Federal agency and to which a commercial or nonprofit publisher has made or has entered into an arrangement to make a value-added contribution, including peer review or editing. Such term does not include progress reports or raw data outputs routinely required to be created for and submitted directly to a funding agency in the course of research.

(Bold emphasis added.)

Let’s take this at the most basic level. If public money is used to fund scientific research, does the public have a legitimate expectation that the knowledge produced by that research will be shared with the public? If not, why not? (Is the public allocating scarce public funds to scientific knowledge-building simply to prop up that sector of the economy and/or keep the scientists off the streets?)

Assuming that the public has the right to share in the knowledge built on the public’s dime, should the public have to pay to access that knowledge (at around $30 per article) from a private sector journal? The text of the Research Works Act suggests that such private sector journals add value to the research that they publish in the form of peer review and editing. Note, however, that peer review for scientific journals is generally done by other scientists in the relevant field for free. Sure, the journal editors need to be able to scare up some likely candidates for peer reviewers, email them, and secure their cooperation, but the value being added in terms of peer reviewing here is added by volunteers. (Note that the only instance of peer reviewing in which I’ve participated where I’ve actually been paid for my time involved reviewing grant proposals for a federal agency. In other words, the government doesn’t think peer review should be free … but a for-profit publishing concern can help itself to free labor and claim to have added value by virtue of it.)

Maybe editing adds some value, although editors of private sector journals have been taken to task for favoring flashy results, and for occasionally subverting their own peer review process to get those flashy results published. But there's something like agreement that the interaction between scientists that happens in peer review (and in post-publication discussions of research findings) is what makes it scientific knowledge. That is to say, peer review is recognized as the value-adding step science could not do without.

The public is all too willing already to see public money spent funding scientific research as money wasted. If members of the public have to pay again to access research their tax dollars already paid for, they are likely to be peeved. They would not be wrong to feel like the scientific community had weaseled out of fulfilling its obligation to share the knowledge it builds for the good of the public. (Neither would they be wrong to feel like their government had fallen down on an ethical obligation to the public here, but whose expectations of their government aren’t painfully low at the moment?) A rightfully angry public could mean less public funding for scientific research — which means that there are pragmatic, as well as ethical, reasons for scientists to oppose the Research Works Act.

And, whether or not the Research Works Act becomes the law of the land in the USA, perhaps scientists’ ethical obligations to share publicly funded knowledge with the public ought to make them think harder — individually and as a professional community — about whether submitting their articles to private sector journals, or agreeing to peer review submission for private sector journals, is really compatible with living up to these obligations. There are alternatives to these private sector journals, such as open access journals. Taking those alternatives seriously probably requires rethinking the perceived prestige of private sector journals and how metrics of that prestige come into play in decisions about hiring, promotion, and distribution of research funds, but sometimes you have to do some work (individually and as a professional community) to live up to your obligations.

Suit against UCLA in fatal lab fire raises question of who is responsible for safety.

Right before 2011 ended (and, as it happened, right before the statute of limitations ran out), the Los Angeles County district attorney’s office filed felony charges against the University of California regents and UCLA chemistry professor Patrick Harran in connection with a December 2008 fire in Harran’s lab that resulted in the death of a 23-year-old staff research assistant, Sheharbano “Sheri” Sangji.

As reported by The Los Angeles Times:

Harran and the UC regents are charged with three counts each of willfully violating occupational health and safety standards. They are accused of failing to correct unsafe work conditions in a timely manner, to require clothing appropriate for the work being done and to provide proper chemical safety training.

Harran, 42, faces up to 4½ years in state prison, Robison said. He is out of town and will surrender to authorities when he returns, said his lawyer, Thomas O’Brien, who declined to comment further.

UCLA could be fined up to $1.5 million for each of the three counts.

[UCLA vice chancellor for legal affairs Kevin] Reed described the incident as “an unfathomable tragedy,” but not a crime.

The article notes that Sangji was working as a staff research assistant in Harran’s lab while she was applying to law schools. It mentions that she was a 2008 graduate of Pomona College but doesn’t mention whether she had any particular background in chemistry.

As it happens, the work she was doing in the Harran lab presented particular hazards:

Sangji was transferring up to two ounces of t-butyl lithium from one sealed container to another when a plastic syringe came apart in her hands, spewing a chemical compound that ignites when exposed to air. The synthetic sweater she wore caught fire and melted onto her skin, causing second- and third-degree burns.

In May 2009, Cal/OSHA fined UCLA a total of $31,875 after finding that Sangji had not been trained properly and was not wearing protective clothing.

Two months before the fatal fire, UCLA safety inspectors found more than a dozen deficiencies in the same lab, according to internal investigative and inspection reports reviewed by The Times. Inspectors found that employees were not wearing requisite protective lab coats and that flammable liquids and volatile chemicals were stored improperly.

Corrective actions were not taken before the fire, the records showed.

Actions to address the safety deficiencies were taken after the fire, but these were, obviously, too late to save Sangji.

I’m not a lawyer, and I’m not interested in talking about legalities here — whether for the particular case the Los Angeles DA’s office will be pursuing against UCLA or for academic research labs more generally.

Rather, I want to talk about ethics.

Knowledge-building can be a risky business. In some situations, it involves materials that pose direct dangers to the people handling them, to the people in the vicinity, and even to people some distance away who are just trying to get on with their lives (e.g., if the hazardous materials get out into our shared environment).

Generally, scientists doing research that involves hazardous materials do what they can to find out how to mitigate the hazards. They learn appropriate ways of handling the materials, of disposing of them, of protecting themselves and others in case of accidents.

But, knowing the right ways to deal with hazardous materials is not sufficient to mitigate the risks. Proper procedures need to be implemented. Otherwise, your knowledge about the risks of hazardous materials is mostly useful in explaining bad outcomes after they happen.

So, who is ethically responsible for keeping an academic chemistry lab safe? And what exactly is the shape this responsibility takes — that is, what should he or she be doing to fulfill that obligation?

What’s the responsibility of the principal investigator, the scientist leading the research project and, in most cases, heading the lab?

What’s the responsibility of the staff research assistant or technician, doing necessary labor in the lab for a paycheck?

What’s the responsibility of the graduate student in the research group, trying to learn how to do original research and to master the various skills he or she will need to become a PI someday? (It’s worth noting here that there’s a pretty big power differential between grad students and PIs, which may matter as far as how we apportion responsibility. Still, this doesn’t mean that those with less power have no ethical obligations pulling on them.)

What’s the responsibility of the institution under whose auspices the lab is operating? When a safety inspection turns up problems and generates a list of deficiencies that must be corrected, has that responsibility been discharged? When faculty members hire new staff research assistants, or technicians, or graduate students, does the institution have any specific obligations to them (as far as providing safety training, or a place to bring their safety concerns, or protective gear), or does this all fall to the PI?

And, what kind of obligations do these parties have in the case that one of the other players falls down on some of his or her obligations?

If I were still working in a chemistry lab, thinking through ethical dimensions like these before anything bad happened would not strike me as a purely academic exercise. Rather, it would be essential to ensuring that everyone stays as safe as possible.

So, let’s talk about what that would look like.