Giving thanks.

This being the season, I’d like to take the opportunity to pause and give thanks.

I’m thankful for parents who encouraged my curiosity and never labeled science as something it was inappropriate for me to explore or pursue.

I’m thankful for teachers who didn’t present science as if it were confined within the box of textbooks and homework assignments and tests, but instead offered it as a window through which I could understand ordinary features of my world in a whole new way. A particular teacher who did this was my high school chemistry teacher, Mel Thompson, who bore a striking resemblance to Dr. Bunsen Honeydew and would, on occasion, blow soap bubbles with a gas jet as we took quizzes, setting them alight with a Bunsen burner before they reached the ceiling. Mr. Thompson always conveyed his strong conviction that I could learn anything, and on that basis he was prepared to teach me anything about chemistry that I wanted to learn.

I’m thankful for the awesome array of women who taught me science as an undergraduate and a graduate student, both for their pedagogy and for the examples they provided of different ways to be a woman in science.

I’m especially thankful for my mother, who was my first and best role model with respect to the challenges of graduate school and becoming a scientist.

I’m thankful for the mentors who have found me and believed in me when I needed help believing in myself.

I’m thankful for the opportunity graduate school gave me to make the transition from learning knowledge other people had built to learning how to build brand new scientific knowledge myself.

I’m thankful that the people who trained me to become a scientist didn’t treat it as a betrayal when I realized that what I really wanted to do was become a philosopher. I’m also thankful for the many, many scientists who have welcomed my philosophical engagement with their scientific work, and who have valued my contributions to the training of their science students.

I’m thankful for my children, through whose eyes I got the chance to relive the wonder of discovering the world and its workings all over again. I’m also thankful to them for getting me to grapple with some of my own unhelpful biases about science, for helping me to get over them.

I’m thankful for the opportunity to make a living pursuing the questions that keep me up at night. I’m thankful that pursuing some of these questions can contribute to scientific practice that builds reliable knowledge while being more humane to its practitioners, to better public understanding of science (and of scientists), and perhaps even to scientists and nonscientists doing a better job of sharing a world with each other.

And, dear readers, I am thankful for you.

Fall semester musing on numbers.

The particular numbers on which I’m focused aren’t cool ones like pi, although I suspect they’re not entirely rational, either.

I teach at a public university in a state whose recent budget crises have been epic. That means that funding for sections of classes (and especially for the faculty who teach those sections of classes) has been tight.

My university is a teaching-focused university, which means that there has also been serious effort to ensure that the education students get at the university gives them a significant level of mastery over their major subject, helps them develop competencies and qualities of mind and skills, and so forth. How precisely to ensure this is an interesting conversation, couched in language about learning objectives and assessments and competing models of learning. But for at least some of the things our students are supposed to learn, the official judgment has been that this will require students to write (and receive meaningful feedback on) a minimum number of words, and to do so in classes with a relatively small maximum number of students.

In a class where students are required to write, and receive feedback on, a total of at least 6000 words, it seems absolutely reasonable that you wouldn’t want more than 25 students in the class. Do you want to grade and comment on more than 150,000 words per class section you are teaching? (At my university, the usual load is three or four sections per semester.) That’s a lot of feedback, and for it to be at all useful in assisting student learning, it’s best if you don’t go mad in the process of giving it.
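The arithmetic is simple enough to sketch in a few lines of Python (the 6000-word minimum, the 25-student cap, and the three-to-four-section load all come from the numbers above):

```python
# Back-of-the-envelope grading load for a writing-intensive course.
WORDS_PER_STUDENT = 6000  # minimum words each student writes and gets feedback on
CLASS_CAP = 25            # maximum enrollment for such a section

words_per_section = WORDS_PER_STUDENT * CLASS_CAP
print(words_per_section)  # 150000 words to read and comment on, per section

# A three- or four-section semester multiplies that load accordingly.
for sections in (3, 4):
    print(sections, "sections:", sections * words_per_section, "words")
```

Four sections works out to 600,000 words of student writing to respond to in a single semester.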

There’s a recognition, then, that on a practical level, for courses that help students learn by way of a lot of writing, smaller class sizes are good. From the student’s point of view as well, there are arguably additional benefits to a smaller class size, whether being able to ask questions during lectures or class discussions, not feeling lost in the crowd, or what have you.

At least for a certain set of courses, the university recognizes that smaller classes are better and requires that the courses be no larger than 25.

But remember that tight funding? This means that the university has also put demands on departments, schools, and colleges within the university to maintain higher and higher student-faculty ratios.

If you make one set of courses small, to maintain the required student-faculty ratio, you must make other courses big — sometimes very, very big.
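To make the squeeze concrete, here’s a hypothetical illustration (the 60-student target average is an invented figure for the example; only the 25-student cap comes from the discussion above). If a department must hit a required average section size while some fraction of its sections are capped, the remaining sections have to absorb the difference:

```python
# Hypothetical: how big must uncapped sections get to preserve a required
# average section size? (TARGET_AVG is an invented figure for illustration.)
TARGET_AVG = 60  # required average students per section (hypothetical)
CAP = 25         # enrollment cap on the small, writing-intensive sections

def required_uncapped_size(capped_fraction: float) -> float:
    """Average size the uncapped sections must reach so that the overall
    average across all sections still equals TARGET_AVG."""
    return (TARGET_AVG - capped_fraction * CAP) / (1 - capped_fraction)

for frac in (0.25, 0.5, 0.75):
    print(f"{frac:.0%} of sections capped -> "
          f"{required_uncapped_size(frac):.0f} students each in the rest")
```

Under these made-up numbers, capping half the sections at 25 pushes the others to 95 students apiece, and capping three-quarters of them pushes the rest to 165.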

But while we’re balancing numbers and counting beans, we are still a teaching-focused university. That might mean that what supports effective teaching and learning should be a constraint on our solutions to the bean-counting problems.

We’re taking as a constraint that composition, critical thinking, and chemistry lab (among others) are courses where keeping class sizes small makes for better teaching and learning.

Is there any reason (beyond budgetary expedience) to think that the courses that are made correspondingly large are also making for better teaching and learning? Is there any subject we teach to a section of 200 that we couldn’t teach better to 30? (And here, some sound empirical research would be nice, not just anecdata.)

I can’t help but wonder if there is some other way to count the beans that would better support our teaching-focused mission, and our students.

Resistance to ethics instruction: considering the hypothesis that moral character is fixed.

This week I’ve been blogging about the resistance to required ethics coursework one sometimes sees in STEM* disciplines. Since one reason for this resistance is the hunch that you can’t teach a person to be ethical once they’re past a certain (pre-college) age, my previous post noted that there’s a sizable body of research that supports ethics instruction as an intervention to help people behave more ethically.

But, as I mentioned in that post, the intuition that one’s moral character is fixed by one’s twenties can be so strong that folks don’t always believe what the empirical research says about the question.

So, as a thought experiment, let’s entertain the hypothesis that, by your twenties, your moral character is fixed — that you’re either ethical or evil by then and there’s nothing further ethics instruction can do about it. If this were the case, how would we expect scientists to respond to other scientists or scientific trainees who behave unethically?

Presumably, scientists would want the unethical members of the tribe of science identified and removed, permanently. Under the fixed-character hypothesis, the removal would have to be permanent, because there would be every reason to expect the person who behaved unethically to behave unethically again.

If we took this seriously, that would mean every college student who ever cheated on a quiz or made up data for a lab report should be barred from entry to the scientific community, and that every grown-up scientist caught committing scientific misconduct — or any ethical lapse, even those falling well short of fabrication, falsification, or plagiarism — would be excommunicated from the tribe of science forever.

That just doesn’t happen. Even Office of Research Integrity findings of scientific misconduct don’t typically lead to lifetime debarment from federal research funding. Instead, they usually lead to administrative actions imposed for a finite duration, on the order of years, not decades.

And, I don’t think the failure to impose a policy of “one strike, you’re out” for those who behave unethically is because members of the tribe of science are being held back by some naïvely optimistic outside force (like the government, or the taxpaying public, or ethics professors). Nor is it because scientists believe it’s OK to lie, cheat, and steal in one’s scientific practice; there is general agreement that scientific misconduct damages the shared body of knowledge scientists are working to build.

When dealing with members of their community who have behaved unethically, scientists usually behave as if there is a meaningful difference between a first offense and a pattern of repeated offenses. This wouldn’t make sense if scientists were truly committed to the fixed-character hypothesis.

On the other hand, it fits pretty well with the hypothesis that people may be able to learn from their mistakes — to be rehabilitated rather than simply removed from the community.

There are surely some hard cases that the tribe of science views as utterly irredeemable, but graduate students or early career scientists whose unethical behavior is caught early are treated by many as probably redeemable.

How to successfully rehabilitate a scientist who has behaved unethically is a tricky question, and not one scientists seem inclined to speak about much. Actions by universities, funding agencies, or governmental entities like the Office of Research Integrity are part of the punishment landscape, but punishment is not the same thing as rehabilitation. Meanwhile, it’s unclear whether individual actions to address wrongdoing are effective at heading off future unethical behavior.

If it takes a village to raise a scientist, it may take concerted efforts at the level of scientific communities to rehabilitate scientists who have strayed from the path of ethical practice. We’ll discuss some of the challenges with that in the next post.

______
*STEM stands for science, technology, engineering, and mathematics.

Resistance to ethics instruction: the intuition that ethics cannot be taught.

In my last post, I suggested that required ethics coursework (especially for students in STEM* disciplines) is met with a specific sort of resistance. I also surmised that part of this resistance is the idea that ethics can’t be taught in any useful way, “the idea that being ethical is somehow innate, a mere matter of not being evil.”

In a comment on that post, ThomasB nicely illustrates that particular strain of resistance:

Certainly scientists, like everyone else in our society, must behave ethically. But what makes this a college-level class? From the description, it covers the basic do not lie-cheat-steal along with some anti-bullying and possibly a reminder to cite one’s references. All of which should have been instilled long before college.

So what is there to teach at this point? The only thing I can think of specific to science is the “publish or perish” pressure to keep the research dollars flowing in. Or possibly the psychological studies showing that highly intelligent and creative people are more inclined to be dishonest than ordinary people. Possibly because they are better at rationalizing doing what they want to do. Which is why I used the word “instilled” earlier: it seems to me that ethics comes more from the emotional centers of the brain than the conscious analytical part. As soon as we start consciously thinking about ethics, they seem to go out the window. Such as the study from one of the Ivy League schools where the students did worse at the ethics test at the end of the class than at the beginning.

So I guess the bottom line is whether the science shows that ethics classes at this point in a person’s life actually show an improvement in the person’s behavior. As Far as I know, there has been no such study done.

(Bold emphasis added.)

I think it’s reasonable to ask, before requiring an intervention (like ethics coursework), what we know about whether this sort of intervention is likely to work. I think it’s less reasonable to assume it won’t work without consulting the research on the matter.

As it happens, there has been a great deal of research on whether ethics instruction is an intervention that helps people behave more ethically — and the bulk of it shows that well-designed ethics instruction is an effective intervention.

Here’s what Bebeau et al. (1995) have to say about the question:

When people are given an opportunity to reflect on decisions and choices, they can and do change their minds about what they ought to do and how they wish to conduct their personal and professional lives. This is not to say that any instruction will be effective, or that all manner of ethical behavior can be developed with well-developed ethics instruction. But it is to say — and there is considerable evidence to show it — that ethics instruction can influence the thinking processes that relate to behavior. …

We do not claim that radical changes are likely to take place in the classroom or that sociopaths can be transformed into saints via case discussion. But we do claim that significant improvements can be made in reasoning about complex problems and that the effort is worthwhile. We are not alone in this belief: the National Institutes of Health, the National Science Foundation, the American Association for the Advancement of Science, and the Council of Biology Editors, among others, have called for increased attention to training in the responsible conduct of scientific research. Further, our belief is buttressed by empirical evidence from moral psychology. In Garrod (1993), James R. Rest summarizes the “several thousand” published studies on moral judgment and draws the following conclusions:

  • development of competence in ethical problem-solving continues well into adulthood (people show dramatic changes in their twenties, as in earlier years);
  • such changes reflect profound reconceptualization of moral issues;
  • formal education promotes ethical reasoning;
  • deliberate attempts to develop moral reasoning … can be demonstrated to be effective; and
  • studies link moral reasoning to moral behavior.

So, there’s a body of research that supports ethics instruction as an intervention to help people behave more ethically.

Indeed, part of how ethics instruction helps is by getting students to engage analytically, not just emotionally. I would argue that making ethical decisions involves moving beyond gut feelings and instincts. It means understanding how your decisions impact others, and considering the ways your interests and theirs intersect. It means thinking through possible impacts of the various choices available to you. It means understanding the obligations set up by our relations to others in personal and professional contexts.

And methodology for approaching ethical decision making can be taught. Practice in making ethical decisions makes it easier to make better decisions. And making these decisions in conversation with other people who may have different perspectives (rather than just following a gut feeling) forces us to work out our reasons for preferring one course of action to the alternatives. These reasons are not just something we can offer to others to defend what we did, but they are things we can consider when deciding what to do in the first place.

As always, I reckon that there are some people who will remain unmoved by the research that shows the efficacy of ethics instruction, preferring to cling to their strong intuition that college-aged humans are past the point where an intervention like an ethics class could make any impact on their ethical behavior. But if that’s an intuition that ought to guide us — if, by your twenties, you’re either a good egg or irredeemably corrupt — it’s not clear that our individual or institutional responses to unethical behavior by scientists make any sense.

That’s the subject I’ll take up in my next post.

______
*STEM stands for science, technology, engineering, and mathematics.

______
Bebeau, M. J., Pimple, K. D., Muskavitch, K. M., Borden, S. L., & Smith, D. H. (1995). Moral reasoning in scientific research: Cases for teaching and assessment. Bloomington, IN: Poynter Center for the Study of Ethics and American Institutions.

Garrod, A. (Ed.). (1993). Approaches to moral development: New research and emerging themes. New York: Teachers College Press.

Resistance to ethics is different from resistance to other required courses.

For academic types like myself, the end of the semester can be a weird juxtaposition of projects that are ending and new projects on the horizon, one that can be an opportunity for reflection.

I’ve just seen another offering of my “Ethics in Science” course to a (mostly successful) conclusion. Despite the fact that the class was huge (more than 100 students) for a course that is heavy on discussion, its students were significantly more active and engaged than those in the much smaller class I taught right after it. The students thought hard and well, and regularly floored me with their razor-sharp insights. All the evidence suggests that these students were pretty into it.

Meanwhile, I’m getting set for a new project that will involve developing ethics units for required courses offered in another college at my university — and one of the things I’ve been told is that the students required to take these courses (as well as some non-zero number of the professors in their disciplines) are very resistant to the inclusion of ethics coursework in courses otherwise focused on their major subjects.

I find this resistance interesting, especially given that the majority of the students in my “Ethics in Science” class were taking it because it was required for their majors.

I recognize that part of what’s going on may be a blanket resistance to required courses. Requirements can feel like an attack on one’s autonomy and individuality — rather than being able to choose what you will study, you’re told what you must study to major in a particular subject or to earn a degree from a particular university. A course that a student might have been open to enjoying were it freely chosen can become a loathed burden merely by virtue of being required. I’ve seen the effect often enough that it no longer surprises me.

However, requirements aren’t usually imposed solely to constrain students’ autonomy. There’s almost always a reason that a particular course, subject matter, or problem-solving area is being required. The students may not know that reason (or judge it to be a compelling reason if they do know it), but that doesn’t mean that there’s not a reason.

In some ways, ethics is really not much different here from other major requirements or subject matter that students bemoan, including calculus, thermodynamics, writing in the major, and significant figures. On the other hand, the moaning about some of those other requirements tends to take the form of “When am I ever going to use that?”

I don’t believe I’ve ever heard a science or engineering student say, “When am I ever going to use ethics?”

In other words, they generally accept that they should be ethical, but they also sometimes voice resistance to the idea that a course (or workshop, or online training module) about how to be ethical will be anything but a massive waste of their time.

My sense is that at least part of what’s going on here is that scientists and engineers and their ilk feel like ethics are being imposed on them from without, by university administrators or funding agencies or accrediting organizations. Worse, the people exhorting scientists, engineers, et alia to take ethics seriously often seem to take a finger-wagging approach. And this, I suspect, makes it harder to get what those business types call “buy-in” from the scientists.

The typical story I’ve heard about ethics sessions in industry (and some university settings) goes something like this:

You get a big packet with the regulations you have to follow — to get your protocols approved by the IRB and/or the IACUC, to disclose potential conflicts of interest, to protect the company’s or university’s patent rights, to fill out the appropriate paperwork for hazardous waste disposal, etc., etc. You are admonished against committing the “big three” of falsification, fabrication, and plagiarism. Sometimes, you are also admonished against sexually harassing those with whom you are working. The whole thing has the feel of being driven by the legal department’s concerns: for goodness sake, don’t do anything that will embarrass the organization or get us into hot water with regulators or funders!

Listening to the litany of things you ought not to do, it’s really easy to think: Very bad people do things like this. But I’m not a very bad person. So I can tune this out, and I can kind of ignore ethics.

The decision to tune out ethics is enabled by the fact that the people wagging the fingers at the scientists are generally outsiders (from the legal department, or the philosophy department, or wherever). These outsiders are coming in telling us how to do our jobs! And, the upshot of what they’re telling us seems to be “Don’t be evil,” and we’re not evil! Besides, these outsiders clearly don’t care about (let alone understand) the science so much as avoiding scandals or legal problems. And they don’t really trust us not to be evil.

So just nod earnestly and let’s get this over with.

One hurdle here is the need to get past the idea that being ethical is somehow innate, a mere matter of not being evil, rather than a problem-solving practice that gets better with concrete strategies and repeated use. Another hurdle is the feeling that ethics instruction is the result of meddling by outsiders.

If ethics is seen as something imposed upon scientists by a group from the outside — one that neither understands science, nor values it, nor trusts that scientists are generally not evil — then scientists will resist ethics. To get “buy-in” from the scientists, they need to see how ethics are intimately connected to the job they’re trying to get done. In other words, scientists need to understand how ethical conduct is essential to the project of doing science. Once scientists make that connection, they will be ethical — not because someone else is telling them to be ethical, but because being ethical is required to make progress on the job of building scientific knowledge.
_____________
This post is an updated version of an ancestor post on my other blog, and was prompted by the Virtually Speaking Science discussion of philosophy in and of science scheduled for Wednesday, May 28, 2014 (starting 8 PM EDT/5 PM PDT). Watch the hashtags #VSpeak and #AskVS for more details.

Teaching chemistry while female: when my very existence was a problem.

Not quite 20 years ago, I was between graduate programs.

I had earned my Ph.D. in chemistry and filed my applications to seven Ph.D. programs in philosophy. (There were some surreal moments on the way to this, including retaking the GRE two weekends after defending my chemistry dissertation — because, apparently, the GRE is a better predictor of success in graduate school than is success in graduate school.) In the interval between the graduate stipend from the chemistry program from which I was now a proud graduate and the (hypothetical) graduate stipend from the philosophy graduate program on the horizon, I needed to earn some money so I could continue to pay my rent.

I pieced together something approximating enough earnings. I spent a few hours a week as a research assistant to a visiting scholar studying scientific creativity. I spent many hours a week as an out-call SAT-prep tutor (which involved almost as many hours on San Francisco Bay Area freeways as it did working one-on-one with my pupils). I even landed a teaching gig at the local community college, although that wouldn’t start until the summer session. And, I taught the general chemistry segment of a Medical College Admission Test (MCAT) prep course.

Teaching the MCAT prep course involved four meetings (each four hours long, with three ten-minute breaks interspersed so people could stretch their legs, use the bathroom, find a vending machine, or what have you) with a large number of students planning to take the MCAT and apply to medical school. The time was divided between providing a refresher on general chemistry concepts and laying out problem-solving strategies for the “passage problems” to which the MCAT had recently shifted. I was working with old-school overhead transparencies (since this was 1994), with key points and the problems themselves in permanent ink and the working-out of the problems in transparency markers that erased with a damp cloth. The screen onto which the transparencies were projected was very large, so I’d have to make use of the long rubber-tipped wooden pointer that was resting on the ledge of the chalkboard behind the screen.

During hour two of the very first meeting of the very first session I taught this MCAT prep course, as I retrieved the pointer from the chalk-ledge, I noticed that a single word had been written on the chalkboard:

Bitch

I was pretty sure it hadn’t been on the board at the beginning of the session. But I still had three hours’ worth of concepts to explain and problems to work before we could call it a day. So I ignored it and got down to business.

The second meeting with this group, I made a point of checking the chalkboard before I pulled down the projection screen, fired up the overhead projector, and commenced preparing the students for the MCAT.

Before the four-hour session began, the chalkboard was blank. By the end of the four hours, again, there was a single word written on it:

Bitch

The same thing happened in our third session. By then it had started to really bug me, so, at the beginning of our fourth and final meeting together, I resolved at least to flush out whoever was doing the writing on the chalkboard. I collected all the chalk from the ledges and put it in the sink of the lab counter at the front of the room (for I was lecturing in a proper laboratory lecture hall, with sink, gas jets, and such). And, I brought a water bottle with me so I wouldn’t have to leave the lecture hall during the ten-minute breaks to find a water fountain.

At the very first break, one of the young men in the prep course followed a path between the projection screen and the chalkboard, paused as if lost (or in search of chalk?), and then exited the room looking only a tiny bit sheepish.

On the board, appearing like a film negative against the light residue of chalk dust, he had written (I presume with a moistened finger):

Bitch

I still have no idea at all what provoked this hostility. The structure of the MCAT prep course was such that all I was doing was giving the students help in preparing for the MCAT. I was not grading them or otherwise evaluating them. Heck, I wasn’t even taking attendance!

What on earth about 25-year-old me, at the front of a lecture hall trying to make the essentials of general chemistry easy to remember and easy to apply to problem-solving — something these students presumably wanted, since they paid a significant amount of money to take the course — what made me a “bitch” to this young man? Why was it so important to him that not a single meeting we had passed without my knowing that someone in attendance (even if I didn’t know exactly who) thought I was a bitch?

When it happened, this incident was so minor, set against the more overt hostility toward me as a woman in a male-dominated scientific field (soon to be followed, though I didn’t anticipate it at the time, by overt hostility toward me as a woman in male-dominated academic philosophy), that I almost didn’t remember it.

But then, upon reading this account of teaching while female, I did.

I remembered it so vividly that my cheeks were burning as they did the first time I saw that chalk-scrawled “bitch” and then had to immediately shake it off so that we could cover what needed to be covered in the time we had left for that meeting.

And I ask myself again, what was I doing, except a job that I was good at, a job that I did well, a job that I needed — what was I doing to that particular young man, paying for the service I was providing — that made me a bitch?

When we target chemophobia, are we punching down?

Over at Pharyngula, Chris Clarke challenges those in the chemical know on their use of “dihydrogen monoxide” jokes. He writes:

Doing what I do for a living, I often find myself reading things on Facebook, Twitter, or those increasingly archaic sites called “blogs” in which the writer expresses concern about industrial effluent in our air, water, consumer products or food. Sometimes the concerns are well-founded, as in the example of pipeline breaks releasing volatile organic chemicals into your backyard. Sometimes, as in the case of concern over chemtrails or toxic vaccines, the concerns are ill-informed and spurious.

And often enough, the educational system in the United States being the way it’s been since the Reagan administration, those concerns are couched in terms that would not be used by a person with a solid grounding in science. People sometimes miss the point of dose-dependency, of acute versus chronic exposure, of the difference between parts per million and parts per trillion. Sometimes their unfamiliarity with the basic facts of chemistry causes them to make patently ridiculous alarmist statements and then double down on them when corrected.

And more times than I can count, if said statements are in a public venue like a comment thread, someone will pipe up by repeating a particular increasingly stale joke. Say it’s a discussion of contaminants in tap water allegedly stemming from hydraulic fracturing for natural gas extraction. Said wit will respond with something like:

“You know what else might be coming out of your tap? DIHYDROGEN MONOXIDE!”

Two hydrogens, one oxygen … what’s coming out of your tap here is water. Hilarious! Or perhaps not.

Clarke argues that those in the chemical know whip out the dihydrogen monoxide joke to have a laugh at the expense of someone who doesn’t have enough chemical knowledge to understand whether conditions they find alarming really ought to alarm them. However, how it usually goes down is that other chemically literate people in earshot laugh while the target of the joke ends up with no better chemical understanding of things.

Really, all the target of the joke learns is that the teller of the joke has knowledge and is willing to use it to make someone else look dumb.

Clarke explains:

Ignorance of science is an evil that for the most part is foisted upon the ignorant. The dihydrogen monoxide joke depends for its humor on ridiculing the victims of that state of affairs, while offering no solution (pun sort of intended) to the ignorance it mocks. It’s like the phrase “chemophobia.” It’s a clan marker for the Smarter Than You tribe.

The dihydrogen monoxide joke punches down, in other words. It mocks people for not having had access to a good education. And the fact that many of its practitioners use it in order to belittle utterly valid environmental concerns, in the style of (for instance) Penn Jillette, makes it all the worse — even if those concerns aren’t always expressed in phraseology a chemist would find beyond reproach, or with math that necessarily works out on close examination.

There’s a weird way in which punching down with the dihydrogen monoxide joke is the evil twin of the “deficit model” in science communication.

The deficit model assumes that the focus in science communication to audiences of non-scientists should be squarely on filling in gaps in their scientific knowledge, teaching people facts and theories that they didn’t already know, as if that is the main thing they must want from science. (It’s worth noting that the deficit model seems to assume a pretty unidirectional flow of information, from the science communicator to the non-scientist.)

The dihydrogen monoxide joke, used the way Clarke describes, identifies a gap in understanding and then, instead of trying to fill it, points and laughs. If the deficit model naïvely assumes that filling gaps in knowledge will make the public cool with science, this kind of deployment of the dihydrogen monoxide joke seems unlikely to provoke any warm feelings towards science or scientists from the person with a gappy understanding.

What’s more, this kind of joking misses an opportunity to engage with what the joke’s targets are really worried about and why. Are they scared of chemicals per se? Of being at the mercy of others who have information about which chemicals can hurt us (and in which amounts) and/or who have more knowledge about or control of where those chemicals are in our environment? Do they not trust scientists at all, or are they primarily concerned about whether they can trust scientists in the employ of multinational corporations?

Do their concerns have more to do with the information and understanding our policymakers have with regard to chemicals in our world — particularly about whether these policymakers have enough to keep us relatively safe, or about whether they have the political will to do so?

Actually having a conversation and listening to what people are worried about could help. It might turn out that people with the relevant scientific knowledge to laugh at the dihydrogen monoxide joke and those without share a lot of the same concerns.

Andrew Bissette notes that there are instances where the dihydrogen monoxide joke isn’t punching down but punching up: aimed at educated people who should know better yet use large platforms to take advantage of the ignorant. So perhaps what we need isn’t a permanent moratorium on the joke so much as more careful thought about what we hope to accomplish with it.

Let’s return to Chris Clarke’s claim that the term “chemophobia” is “a clan marker for the Smarter Than You tribe.”

Lots of chemists in the blogosphere regularly blog and tweet about chemophobia. If they took to relentlessly tagging as “chemophobe!” people who are lacking access to the body of knowledge and patterns of reasoning that define chemistry, I’d agree that it was the same kind of punching down as the use of the dihydrogen monoxide joke Clarke describes. To the extent that chemists are actually doing this to assert membership in the Smarter Than You tribe, I think it’s counterproductive and mean to boot, and we should cut it out.

But, knowing the folks I do who blog and tweet about chemophobia, I’m pretty sure their goal is not to maintain clear boundaries between The Smart and The Dumb. When they fire off a #chemophobia tweet, it’s almost like they’re sending up the Batsignal, rallying their chemical community to fight some kind of crime.

So what is it these chemists — the people who have access to the body of knowledge and patterns of reasoning that define chemistry — find problematic about the “chemophobia” of others? What do they hope to accomplish by pointing it out?

Part of where they’re coming from is probably grounded in good old-fashioned deficit-model reasoning, but with more emphasis on helping others learn a bit of chemistry because it’s cool. There’s usually a conviction that the basic chemistry that exposes the coolness is not beyond the grasp of adults of normal intelligence, if only we explain it accessibly enough. Ash Jogalekar suggests more concerted efforts in this direction, proposing a lobby for chemistry (not the chemical industry) that takes account of how people feel about chemistry and what they want to know. However it’s done, the impulse to expose the cool workings of a bit of the world to those who want to understand them should be offered as a kindness. Otherwise, we’re doing it wrong.

Another part of what moves the chemists I know who are concerned with chemophobia is that they don’t want people who are not at home with chemistry to get played. They don’t want them to be vulnerable to quack doctors, nor to merchants of doubt trying to undermine sound science to advance a particular economic or political end, nor to people trying to make a buck with misleading claims, nor to legitimately confused people who think they know much more than they really do.

People with chemical know-how could help address this kind of vulnerability, being partners to help sort out the reliable information from the bogus, the overblown risks from risks that ought to be taken seriously or investigated further.

But short of teaching the folks without access to the body of knowledge and patterns of reasoning that define chemistry everything they would need to know to be their own experts (which is the deficit model again), providing this kind of help requires cultivating trust. It requires taking the people to whom you’re offering the help seriously, recognizing that gaps in their chemical understanding don’t make them unintelligent or of less value as human beings.

And laughing at the expense of the people who could use your help — using your superior chemical knowledge to punch down — seems unlikely to foster that trust.

Science education: Am I part of the solution, or part of the problem?

In my blogging career (and even before), I’ve spent a fair bit of time bemoaning the low level of scientific education/literacy/competence among the American public. Indeed, I have expressed the unpopular opinion that all college students ought to do the equivalent of a minor in some particular science as one of their graduation requirements. I tell anyone who asks me (and a lot of people who don’t) that science is fun. Some of the very best teachers I know are science teachers.

But I wonder sometimes whether my exhortations do anything to turn the educational tide, or whether I’m just letting the current drag us in the wrong direction.

You see, I teach a philosophy of science course. (Actually, I teach multiple sections of it, and I teach it every semester.) And, at this university, that philosophy of science course satisfies the upper division general education requirement in science.

Yes, that’s right. Students can dodge taking an actual science course by taking a philosophy of science course instead. This yields throngs of students who are scared silly of anything scientific, and who know exactly one fact about philosophy: it’s in the Humanities college. (Humanities = fluffy, unthreatening classes where you read novels or watch films or look at paintings, and it’s all about what you think is going on, with no right or wrong answers. At least, this is what certain of my students assume before enrolling for this course.)

How on earth, given my aforementioned peevishness about science-scared students and community members, can I live with my role enabling the flight from learning some science?

It doesn’t hurt that some of the other options for filling this upper division science general education requirement have well-earned reputations for being “gut” courses (or as some like to say, “science-lite”). Notably absent from the list are many of the standard, science-major-y fundamentals. Instead, the list is heavy on physics for musicians, nutrition and exercise, and astronomy for people who will not do math under any circumstances. (The main exception: the offerings from geology and meteorology seem to be notably less diluted and more rigorous ways to fulfill the requirement. Go earth and atmospheric scientists!) My course, I’m told, is actually kind of challenging. So even if the students are escaping a class in a science department, with me they’re not escaping work.

Also, the general education requirement was structured specifically to make students pay attention to the scientific method, to understand the difference between science and pseudo-science, and to understand science as an endeavor conducted by humans that has impacts on humans. As a former science student who took only the hard-core science courses intended for science majors, my experience is that we saw a lot of patterns of scientific reasoning, and we learned to extend these patterns to deal with new problems … but we didn’t have loads of time to get reflective about the scientific method. For me, that reflective awareness didn’t really happen until the semester I (1) started doing research, and (2) took a philosophy of science course. (Yes, both of those things happened in the same semester. I wish I could say I planned it that way, but it was serendipity.)

For the brief span of years in which I would have counted as a scientist, I think what I got out of philosophy of science made me a better scientist. (That I fell prey to philosophy’s charms and left science is another issue for another post.) And, the small cadre of science majors who take my course (perhaps because they’d be embarrassed to take a “physics for poets” kind of course) seem to get something useful from the course that they can bring back to their science-department understanding of science. In short, the science-y folk seem to think the course gives a pretty reasonable picture of the scientific method and the philosophical questions one might ask about its operations.

But what about the scared-of-science folk?

I can’t deny that there’s a part of me that wants to sign them up for intro chemistry (and biology, and physics). But I know full well that their hearts would explode from anxiety before they even got to the first quiz. Indeed, some have told me to my face that they think it’s “diabolical” for me to explain concepts like intertheoretic reduction or procedures for hypothesis testing using actual scientific examples (mentioning Boyle’s law and the details of the kinetic theory of gases to boot). It’s hard to imagine these students willingly exposing themselves to courses where the scientific examples are the whole point. And, sadly, were they to confront their fears and enroll in science courses, some of their instructors would decide up front that some of them were simply not smart enough to learn science.
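For readers curious what that intertheoretic reduction example looks like, here is a standard textbook sketch (not a reproduction of my actual course materials) of how Boyle’s law falls out of the kinetic theory of gases:

```latex
% Kinetic theory: pressure from molecular collisions with the container walls,
% for N molecules of mass m with mean-square speed <v^2> in volume V:
%   PV = (1/3) N m <v^2> = (2/3) N <(1/2) m v^2>
% Identifying temperature with mean kinetic energy, <(1/2) m v^2> = (3/2) k_B T,
% gives the ideal gas law, PV = N k_B T.
% At fixed N and T, the right-hand side is constant, so PV = const:
% Boyle's law emerges as a special case of the deeper kinetic theory.
\[
  PV \;=\; \tfrac{1}{3}\, N m \langle v^2 \rangle
     \;=\; \tfrac{2}{3}\, N \Big\langle \tfrac{1}{2} m v^2 \Big\rangle
     \;=\; N k_B T
  \qquad\Longrightarrow\qquad
  PV = \text{const} \;\; \text{(fixed } N, T\text{)}.
\]
```

The philosophical payoff is that the macroscopic law (Boyle’s) is recovered from, and explained by, the microscopic theory (kinetic theory), which is exactly the pattern intertheoretic reduction is meant to capture.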

I’m hopeful enough to think even the ones who are scared of science can come to understand something about the way scientists try to connect theories and evidence. I’m persistent enough to ask them to think about how scientists make decisions, and to make them do exercises where they have to try to think like scientists. I’m audacious enough to make them do research in the scholarly scientific literature, and to ask them to make some kind of sense of some of the articles they find there.

They may start out seeing my course as a way to dodge science, but by the end many of them are not as scared of science as they were at the beginning. (Or perhaps they’ve shifted their fear to philosophy instead …)

Lately, though, there have been rumblings that maybe the upper division general education requirements — including the science requirement — should be scrapped, as a way to shorten the time to graduation (and, not coincidentally, to reduce the amount of money the state is putting up for the education of each of these students in our state-supported university system). There is not, to my knowledge, any plan to replace the learning objective-focused general education requirements with anything like a distribution requirement that might, for example, require everyone to take at least three courses from the sciences (and three from the social sciences, and three from the humanities or arts) in order to graduate without specifying which courses one should take. I would be wildly enthusiastic about this kind of distribution requirement … but the landscape that seems to be looming ahead is one of “less”. There would be less pressure for students to engage with material or ways of thinking outside their comfort zones, less expectation that a college graduate would have broad knowledge rather than specialized skills.

And, there would be even less opportunity to use a harmless looking philosophy course as a stealth weapon of science education.

So, while there’s a part of me that worries that my philosophy of science course enables the evasive maneuvers of students who are trying to avoid engaging with science instruction head-on, there’s another part of me that feels like I’m holding the line and helping more students to engage — and doing so in a time when the bean-counters are losing sight of whether it’s worth it for a state to pay a little more to have its population better educated about how science works.

DonorsChoose Science Bloggers for Students 2012: helping classrooms in the aftermath of Super-storm Sandy.

Super-storm Sandy did major damage to the East Coast, especially New Jersey and New York City. The offices of DonorsChoose are in New York City. Their fabulous staff is safe (and mostly dry) and their computer servers are up, which means the Science Bloggers for Students drive has been operational and ready to receive your donations. However, a bunch of potential donors to the drive have probably been kind of distracted keeping their own selves safe and dry.

So, a few things we’re doing about this situation.

FIRST, we’re extending the drive through next Friday, November 9. This gives our East Coast compatriots who are waiting to get power back a chance to join in the fun. The dollar-for-dollar match from the DonorsChoose Board of Directors will be extended to the end (unless we blow through all $50,000 first, which would be awesome). Just enter SCIENCE in the “Match or gift code” field at checkout, and every dollar you give up to $100 will be doubled.

SECOND, I’ve added three projects to my giving page from hurricane-affected areas:

Calculators for a math-intensive Earth Science class at a high school in New York City.

Soil test kits for Dr. Charles E Brimm Medical Arts High School in Camden, New Jersey, to help students in an environmental science class with their urban gardening project.

A human body torso display model for a middle school biology class in Carteret, New Jersey.

In the event that we get these fully funded before the end of the drive, I’ll add more.

THIRD, for each of these new projects that we get to full funding before the end of the drive, I will donate $25 to the American Red Cross for Sandy relief. If we get all three fully funded, I’ll donate $100 to the American Red Cross for Sandy relief. If we fully fund additional Sandy-affected-area projects beyond these three, it will be an additional $25 out of my pocket to the American Red Cross for each of them.

If you hit your $100 limit on the matching funds, I know you’ll lean on your family and friends who care about science education.

We can do this!

We dodged the apocalypse, so let’s help some classrooms.

We’re coming into the home stretch of our annual DonorsChoose Science Bloggers for Students drive:

Science Bloggers for Students: No Apocalypse in Sight (Transcript below)

And, now until the end of the drive, you can get your donations matched (up to $100 per donor) thanks to the generosity of the DonorsChoose.org Board of Directors. Just enter the match code SCIENCE in the “Match or gift code” field as you check out.

By the way, the DonorsChoose.org Board of Directors has put up $50,000 in matching funds, so once you’ve hit your match code limit, you might want to nudge your family, friends, and social media contacts to give to worthy projects and get their donations matched.

My giving page for the challenge is here. You can find other giving pages from Scientific American bloggers here.

Thanks in advance for your generosity!

Transcript of the video:

Today is November 1, 2012, which means that the prediction that the world would end in October of 2012? Didn’t happen. Now what?

After your hard work laying in emergency supplies for the apocalypse, a new day dawns … and there’s stuff to do: dishes to wash, rabbit runs to clean, and public school classrooms that still need help getting funds for equipment, field trips, even basic classroom supplies.

Here’s where DonorsChoose comes in: Pick a giving page from the Science Bloggers for Students challenge. Check out the projects and find one that matters to you. Give what you can, even if it’s just a buck. And now, until the end of the drive, you can use the match code SCIENCE to double your donation, up to $100. Give a dollar, the project you’re funding gets two dollars. Give $100, the project gets $200.

The world didn’t end — this time. So take this opportunity to do some good and help some kids before it does.