Friday Sprog Blogging: common ancestors.

Even though we’ve had many conversations with the sprogs on the general topic of evolution, the influence of pop culture (and especially cartoons) seems sometimes to muddle their grasp on the details. Last night, we revisited the subject.

Dr. Free-Ride: You wanted to talk about evolution.

Younger offspring: Mmm-hmm.

Dr. Free-Ride: What about evolution did you want to talk about?

Younger offspring: What evolved into humans?

Dr. Free-Ride: And at the dinner table, how did you phrase the question?

Younger offspring: At the dinner table, I think I said, “What creature was in between the ape and the human?”

Dr. Free-Ride: And [Dr. Free-Ride’s better half] suggested that that wasn’t the right way to look at it. Can you explain what the problem is with imagining that apes evolved into humans?

Younger offspring: Well, the problem with that is that apes are still alive now, humans are still alive now, they were evolving in the same period of time. So how could one ape evolve into a human? It’s like [elder offspring] evolving into me.

Dr. Free-Ride: Which strikes you both as ridiculous.

Elder offspring: Yeah, there’s no way [younger offspring] is related to me.

Dr. Free-Ride: Um, no. You’re related. Deal with it. Anyway, I guess the best way to think about where humans came from and where apes came from is that we have some common ancestor — kind of like your grandparents are ancestors you share with your cousins.

Elder offspring: We’re related to our cousins, but not exactly alike.

Dr. Free-Ride: But you are in the same species, which means potentially that human cousins could have offspring together. Anyway, we’re talking about many, many more generations between humans, and apes, and the common ancestors we share.

Elder offspring: Lots of generations.

Dr. Free-Ride: Anyway, knowing what you do about humans and apes, what do you think our common ancestor might have been like?

Younger offspring: Our common ancestor might have been like … yeah, I’ve got nothing. I think I’ll draw it.

Dr. Free-Ride: OK. And do you have any thoughts on what our common ancestor with apes might have been like?

Elder offspring: Trilobites.

Dr. Free-Ride: What?!

Elder offspring: That’s a very, very, very, very, very, very, very early common ancestor.

Dr. Free-Ride: Really? Of us and apes?

Elder offspring: Well, I think we’re distantly related.

Dr. Free-Ride: Very distantly. Well, it’s hard to roll the tape back to see where all the creatures that are here today came from — since the common ancestors different kinds of organisms share aren’t around now, it can be hard to imagine what they were like. I wonder if you have any ideas on how, from our common ancestor, we ended up with humans and apes (and possibly other relatives) the way they are now. How did the descendants of this common ancestor get to be different?

Elder offspring: Well, how evolution works is parents have offspring that are slightly different from the parents, and then they have offspring and their offspring are slightly different from them, and it keeps going on and on and on until you get the final product.

Dr. Free-Ride: But what makes those differences between generations?

Elder offspring: Mutation.

Younger offspring: Well, I think, over the years, those ancestors decided to change their way. They decided to live a better life. So, they went to cavemen, who had clothes — not regular clothes from the modern time, but clothes — and they had all these clubs, and they slept on rocks, and they lived in caves.

Dr. Free-Ride: That’s an interesting idea. So, you think it was based on decisions by the common ancestors and their offspring about how to be. And you over there, you’re just talking about random changes in the heritable traits from one generation to the next.

Elder offspring: Yes. And adaptations to help them survive in their environment.

Dr. Free-Ride: Ah, adaptations! That’s very important. Do you remember the force that Darwin thought was very important in evolution?

Elder offspring: Natural selection?

Dr. Free-Ride: Yeah, so do you know what natural selection is?

Elder offspring: The ones who survive go on to have offspring, the ones who don’t, don’t.

Dr. Free-Ride: But what makes a difference in which organisms survive long enough to have offspring and which don’t?

Elder offspring: Those that survive long enough to have offspring will have more of their type of creature.

Younger offspring: But why do those ones survive and the other ones don’t?

Dr. Free-Ride: The idea is that the traits you have can be inherited through your genes. If some of the characteristics you have are really well suited to your environment —

Elder offspring: Then you survive and breed.

Dr. Free-Ride: At least, you have a better chance of surviving. There’s still some luck involved.

Elder offspring: And then at least some of your offspring survive and breed, and then at least some of those survive and breed, and it goes on and on and on.

Dr. Free-Ride: Sure. But what’s happening there is that some of the traits that really don’t work so well in that environment, if the environment is relatively stable, just won’t last very long because the creatures that have those traits won’t last long enough to breed and pass them on to offspring. So here’s a thought: If there are apes and there are humans, and they have a common ancestor, maybe some of the common ancestor-type creatures, whatever they were, were in one kind of environment where the traits we’d think of as ape-like traits helped them to survive better, and maybe some other of the very same common ancestor-type creatures were in another environment where what we think of as the human traits were the ones that were useful for staying alive long enough to have offspring. Or, possibly, they could have been in very similar environments and ended up with two different ways to do well in that environment.

Younger offspring: Apes wear hair, we wear clothes.

Elder offspring: And we cook.

Younger offspring: And apes dig food out of each other’s hair.

Dr. Free-Ride: After a certain point, maybe we ended up with adaptations that were different from the ones our cousins the apes ended up with.

Younger offspring: My brother the ape!

Dr. Free-Ride: The song may say “my brother,” but I’m inclined to think the relationship is more like cousins than siblings.

Elder offspring: It’s more of a brother than the shrimp.

Dr. Free-Ride: Sure, and the shrimp is in the song too, I understand.

Younger offspring: And the lichens!

Elder offspring: And the anteater!

Dr. Free-Ride: But I’m not sure how you’ll feel about this part: I don’t think necessarily that the changes between us and the apes were anything that our ancestors intended. I think some of it was just whether the traits they had helped them survive in the environment they had, so they could have babies with the traits that they did. And some of it was luck.

The sprogs then went off to draw some common ancestors, as they imagine them.

From the younger offspring, a common ancestor of modern humans and modern apes (who also appears to be an ancestor of modern blue meanies):

From the elder offspring, a transitional form between trilobites and monkeys:

I wouldn’t have even thought to look for such a transitional form. Shows what I know.

Zucchini utilization: two recipes.

The Free-Ride family has spent the last several weeks dealing with an abundance of zucchini. Here are two of the smaller ones we harvested this week.

Since there’s a limit to how many zucchini you can give away without alienating your friends and neighbors, it’s good to have some tasty strategies for eating them. Here are two of the recipes we’ve been working with.

Zucchini Faux-Risotto

Wash and trim about 3 pounds of zucchini. Halve them lengthwise and slice into semicircles (about 1/8 inch thick).

Dice one large onion.

Put a large pot of water on the stove to boil.

Heat up a couple tablespoons of olive oil in a large skillet. Add the onion and zucchini and toss to coat with oil. Cook on high heat without stirring too much (so that the zucchini and onions brown a bit). As you cook, the onions will get translucent and the zucchini will cook down significantly.

Meanwhile, boil 1 pound of orzo. (Ours is al dente after about nine minutes.) Drain, add to the skillet with the onion and zucchini, toss gently, and turn heat off.

Finely grate some asiago or other hard cheese until you have 1/2 to 1 cup. Toss with the orzo and vegetables. Season with salt and pepper to taste.

This dish is good hot or at room temperature.

Zucchini Bread

Preheat your oven to 350 °F. Lightly grease a standard loaf pan, or line with parchment paper.

Grate a very generous 2 cups of washed, unpeeled zucchini. (In a two-cup Pyrex liquid measuring cup, you want it to be overflowing with the grated zucchini.) If you have a food processor with a grating disk, this is a good time to break it out.

Put the grated zucchini in a large bowl with 3/4 cup sugar, 1/4 cup vegetable oil, the finely grated rind of half a large lemon, and a large egg. Beat together with a fork.

Sift together into the zucchini mixture 1.5 cups flour (this last batch I used 1/2 cup whole wheat, 1/3 cup white whole wheat, and 2/3 cup all purpose), 1/2 teaspoon baking soda, 1/4 teaspoon baking powder, 1/4 teaspoon salt, 1/4 teaspoon ground cinnamon, 1/4 teaspoon ground nutmeg, 1/4 teaspoon ground ginger, and 1/2 teaspoon ground cardamom. Stir together until incorporated.

Pour into the loaf pan and bake for about 55 minutes. Cool before removing from the loaf pan and slicing.

This is so moist that you won’t even think about buttering it until after you’ve gobbled it down.

What kind of problem is it when data do not support findings?

And, whose problem is it?

Yesterday, The Boston Globe published an article about Harvard University psychologist Marc Hauser, a researcher embarking on a leave from his appointment in the wake of a retraction and a finding of scientific misconduct in his lab. From the article:

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year. …

Much remains unclear, including why the investigation took so long, the specifics of the misconduct, and whether Hauser’s leave is a punishment for his actions.

The retraction, submitted by Hauser and two co-authors, is to be published in a future issue of Cognition, according to the editor. It says that, “An internal examination at Harvard University . . . found that the data do not support the reported findings. We therefore are retracting this article.’’

The paper tested cotton-top tamarin monkeys’ ability to learn generalized patterns, an ability that human infants had been found to have, and that may be critical for learning language. The paper found that the monkeys were able to learn patterns, suggesting that this was not the critical cognitive building block that explains humans’ ability to learn language. In doing such experiments, researchers videotape the animals to analyze each trial and provide a record of their raw data. …

The editor of Cognition, Gerry Altmann, said in an interview that he had not been told what specific errors had been made in the paper, which is unusual. “Generally when a manuscript is withdrawn, in my experience at any rate, we know a little more background than is actually published in the retraction,’’ he said. “The data not supporting the findings is ambiguous.’’

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study,’’ Marcus wrote in an e-mail. “I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

The investigation also raised questions about two other papers co-authored by Hauser. The journal Proceedings of the Royal Society B published a correction last month to a 2007 study. The correction, published after the British journal was notified of the Harvard investigation, said video records and field notes of one of the co-authors were incomplete. Hauser and a colleague redid the three main experiments and the new findings were the same as in the original paper. …

“This retraction creates a quandary for those of us in the field about whether other results are to be trusted as well, especially since there are other papers currently being reconsidered by other journals as well,’’ Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, said in an e-mail. “If scientists can’t trust published papers, the whole process breaks down.’’ …

In 1995, he [Hauser] was the lead author of a paper in the Proceedings of the National Academy of Sciences that looked at whether cotton-top tamarins are able to recognize themselves in a mirror. Self-recognition was something that set humans and other primates, such as chimpanzees and orangutans, apart from other animals, and no one had shown that monkeys had this ability.

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

A quick rundown of what we get from this article:

  • Someone raised a concern about scientific misconduct that led to the Harvard inquiry, which in turn led to the discovery of “evidence of misconduct” in Hauser’s lab.
  • We don’t, however, have an identification of what kind of misconduct is suggested by the evidence (fabrication? falsification? plagiarism? other serious deviations from accepted practices?) or of who exactly committed it (Hauser or one of the other people in his lab).
  • At least one paper has been retracted because “the data do not support the reported findings”.
  • However, we don’t know the precise issue with the data here — e.g., whether the reported findings were bolstered by reported data that turned out to be fabricated or falsified (and are thus not being included anymore in “the data”).
  • Apparently, the editor of the journal that published the retracted paper doesn’t know the precise issue with the data, either, and found the situation unusual enough to merit comment.
  • Other papers from the Hauser group may be under investigation for similar reasons at this point, and other researchers in the field seem to be nervous about those papers and their reliability in light of the ongoing inquiry and the retraction of the paper in Cognition.

There’s already been lots of good commentary on what might be going on with the Hauser case. (I say “might” because there are many facts still not in evidence to those of us not actually on the Harvard inquiry panel. As such, I think it’s necessary to refrain from drawing conclusions not supported by the facts that are in evidence.)

John Hawks situates the Hauser case in terms of the problem of subjective data.

Melody has a nice discussion of the political context of getting research submitted to journals, approved by peer reviewers, and anointed as knowledge.

David Dobbs wonders whether the effects of the Hauser case (and of the publicity it’s getting) will mean backing off from overly strong conclusions drawn from subjective data, or backing off too far from a “hot” scientific field that may still have a bead on some important phenomena in our world.

Drugmonkey critiques the Boston Globe reporting and reminds us that failure to replicate a finding is not evidence of scientific misconduct or fraud. That’s a hugely important point, and one that bears repeating. Repeatedly.

This is the kind of territory where we start to notice common misunderstandings about how science works. It’s usually not the case that we can cut nature at the joints along nicely dotted lines that indicate just where those cuts should be. Collecting reliable data and objectively interpreting that data is hard work. Sometimes as we go, we learn more about better conditions for collecting reliable data, or better procedures for interpreting the data without letting our cognitive biases do the driving. And sometimes, a data set we took to be reliable and representative of the phenomenon we’re trying to understand just isn’t.

That’s part of why scientific conclusions are always tentative. Scientists expect to update their current conclusions in the light of new results down the road — and in the light of our awareness that some of our old results just weren’t as solid or reproducible as we took them to be. It’s good to be sure they’re reproducible enough before you announce a finding to your scientific peers, but to be absolutely certain of total reproducibility, you have to solve the problem of induction, which isn’t terribly practical.

Honest scientific work can lead to incorrect conclusions, either because that honest work yielded wonky data from which to draw conclusions, or because good data can still be consistent with incorrect conclusions.

And, there’s a similar kind of disconnect we should watch out for. For the “corrected” 2007 paper in Proceedings of the Royal Society B, the Boston Globe article reports that videotapes and field notes (the sources of the data to support the reported conclusions) were “incomplete”. But, Hauser and a colleague redid the experiments and found data that supported the conclusions reported in this paper. One might think that as long as reported results are reproducible, they’re necessarily sufficiently ethical and scientifically sound and all that good stuff. That’s not how scientific knowledge-building works. The rules of the game are that you lay your data-cards on the table and base your findings on those data. Chancing upon an answer that turns out to be right but isn’t supported by the data you actually have doesn’t count, nor does having a really strong hunch that turns out to be right. In the scientific realm, empirical data is our basis for knowing what we know about the phenomena. Thus, doing the experiments over in the face of insufficient data is not “playing it safe” so much as “doing the job you were supposed to have done in the first place”.

Now, given the relative paucity of facts in this particular case, I find myself interested in a more general question: What are the ethical duties of a PI who discovers that he has published a paper whose findings are not, in fact, supported by the data?

It seems reasonable that at least one of his or her duties involves correcting the scientific literature.

This could involve retracting the paper, in essence saying, “Actually, we can’t conclude this based on the data we have. Our bad!”

It could also involve correcting the paper, saying, “We couldn’t conclude this based on the data we have; instead, we should conclude this other thing,” or, “We couldn’t conclude this based on the data we originally reported, but we’ve gone and done more experiments (or have repeated the experiments we described), obtained this data, and are now confident that on the basis of these data, the conclusion is well-supported.”

If faulty data were reported, I would think that the retraction or correction should probably explain how the data were faulty — what’s wrong with them? If the problem had its source in an honest mistake, it might also be valuable to identify that honest mistake so other researchers could avoid it themselves. (Surely this would be a kindness; is it also a duty?)

Beyond correcting the scientific literature, does the PI in this situation have other relevant duties?

Would these involve ratcheting up the scrutiny of data within the lab group in advance of future papers submitted for publication? Taking the skepticism of other researchers in the field more seriously and working that much harder to build a compelling case for conclusions from the data? (Or, perhaps, working hard to identify the ways that the data might argue against the expected conclusion?) Making serious efforts to eliminate as much subjectivity from the data as possible?

Assuming the PI hasn’t fabricated or falsified the data (and that if someone in the lab group has, that person has been benched, at least for the foreseeable future), what kind of steps ought that PI to take to make things right — not just for the particular problematic paper(s), but for his or her whole research group moving forward and interacting with other researchers in the field? How can they earn back trust?

Just before I woke up this morning

… I had figured out a really elegant way to test an hypothesis, complete with two separate treatment groups and a control group. While the population under study was blog readers, I had come up with a reasonable plan to protect the human subjects, even mentally drafting the IRB short form.

I was very excited at how well it was all coming together. And then I woke up.

Which means I have no earthly recollection of either the hypothesis or the clever strategy for testing it.

Building a critical reasoning course: getting started with the external constraints.

My Fall semester is rapidly approaching and I am still in the throes of preparing to teach a course I have never taught before. The course is called “Logic and Critical Reasoning.” Here’s the catalog description of the course:

Basic concepts of logic; goals and standards of both deductive and inductive reasoning; techniques of argument analysis and assessment; evaluation of evidence; language and definition; fallacies.

The course involves some amount of symbolic logic (and truth-tables and that good stuff) but also a lot of attention to argumentation “in the wild”, in the written and spoken word. My department usually teaches multiple sections of the course each semester, but it’s not the case that we all march in lockstep with identical textbooks, syllabi, and assignments.

The downside of academic freedom, when applied to teaching a course like this, is that you have to figure out your own plan.

Nonetheless, since critical reasoning is the kind of thing I think we need more of in the world, I’m excited about having the opportunity to teach the course. And, at Tom Levenson’s suggestion, I’m going to blog the process of planning the course. Perhaps you all will have some suggestions for me as I work through it.

Part of why my department offers multiple sections of “Logic and Critical Reasoning” is that it fulfills a lower-division general education (G.E.) requirement. In other words, there’s substantial student demand for courses that fulfill this requirement.

For this course to fulfill the G.E. requirement, of course, it has to meet certain pedagogical goals or “learning objectives”. So, where I need to start in planning this course is with the written-and-approved-by-committee learning objectives and content requirements:

Course Goals and Student Learning Objectives
“Logic and Critical Reasoning” is designed to meet the G.E. learning objectives for Area A3.

A.
Critical thinking courses help students learn to recognize, analyze, evaluate, and engage in effective reasoning.

B.
Students will demonstrate, orally and in writing, proficiency in the course goals. Development of the following competencies will result in dispositions or habits of intellectual autonomy, appreciation of different worldviews, courage and perseverance in inquiry, and commitment to employ analytical reasoning. Students should be able to:

  1. distinguish between reasoning (e.g., explanation, argument) and other types of discourse (e.g., description, assertion);
  2. identify, analyze, and evaluate different types of reasoning;
  3. find and state crucial unstated assumptions in reasoning;
  4. evaluate factual claims or statements used in reasoning, and evaluate the sources of evidence for such claims;
  5. demonstrate an understanding of what constitutes plagiarism;
  6. evaluate information and its sources critically and incorporate selected information into his or her knowledge base and value system;
  7. locate, retrieve, organize, analyze, synthesize, and communicate information of relevance to the subject matter of the course in an effective and efficient manner; and
  8. reflect on past successes, failures, and alternative strategies.

C.

  • Students will analyze, evaluate, and construct their own arguments or position papers about issues of diversity such as gender, class, ethnicity, and sexual orientation.
  • Reasoning about other issues appropriate to the subject matter of the course shall also be presented, analyzed, evaluated, and constructed.
  • All critical thinking classes should teach formal and informal methods for determining the validity of deductive reasoning and the strength of inductive reasoning, including a consideration of common fallacies in inductive and deductive reasoning. … “Formal methods for determining the validity of deductive arguments” refers to techniques that focus on patterns of reasoning rather than content. While all deductive arguments claim to be valid, not all of them are valid. Students should know what formal methods are available for determining which are which. Such methods include, but are not limited to, the use of Venn’s diagrams for determining validity of categorical reasoning, the methods of truth tables, truth trees, and formal deduction for reasoning which depends on truth functional structure, and analogous methods for evaluating reasoning which may be valid due to quantificational form. These methods are explained in standard logic texts. We would also like to make clear that the request for evidence that formal methods are being taught is not a request that any particular technique be taught, but that some method of assessing formal validity be included in course content.
  • Courses shall require the use of qualitative reasoning skills in oral and written assignments. Substantial writing assignments are to be integrated with critical thinking instruction. Writing will lead to the production of argumentative essays, with a minimum of 3000 words required. Students shall receive frequent evaluations from the instructor. Evaluative comments must be substantive, addressing the quality and form of writing.

This way of describing the course, I reckon, is not the best way to convince my students that it’s a course they’re going to want to be taking. My big task, therefore, is to plan course material and assignments that accomplish these goals while also striking the students as interesting, relevant, and plausibly do-able. In addition, I want to plan assignments that give the students enough practice and feedback but that don’t overwhelm me with grading. (The budget is still in very bad shape, so I have no expectation that there will be money to hire a grader.)
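
An aside, for anyone who hasn’t met the “methods of truth tables” the requirements mention: checking the formal validity of a propositional argument just means running through every possible assignment of true and false to the basic sentences and making sure there is no row where all the premises come out true while the conclusion comes out false. Here is a toy, brute-force sketch in Python (my own illustration, not anything from the course materials, and not something I’d hand to students as-is):

from itertools import product

def valid(premises, conclusion, variables):
    # An argument form is valid iff no assignment of truth values
    # makes every premise true and the conclusion false.
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counterexample row
    return True

# Modus ponens: P -> Q, P, therefore Q (valid)
print(valid([lambda r: (not r["P"]) or r["Q"], lambda r: r["P"]],
            lambda r: r["Q"], ["P", "Q"]))  # True

# Affirming the consequent: P -> Q, Q, therefore P (invalid)
print(valid([lambda r: (not r["P"]) or r["Q"], lambda r: r["Q"]],
            lambda r: r["P"], ["P", "Q"]))  # False

The point of the formal-methods requirement is just that students learn some systematic way of making that kind of check, whatever notation they end up using.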

I have some ideas percolating here, which I will blog about soon. One of them is to use the blogosphere as a source of arguments (and things-that-look-like-arguments-but-aren’t) for analysis. I’m thinking, though, that I’ll need to set some good ground rules in advance.

Do these learning objectives and content requirements seem to you to call out for particular types of homework assignments or mini-lectures? If you had to skin this particular pedagogical cat, where would you start?

Professionalism, pragmatism, and the Hippocratic Oath.

In a recent post about a study of plagiarism in the personal statements of applicants for medical residency programs, the issue of professionalism reared its head. The authors of that study identified plagiarism in these application essays as a breach of professionalism, and one likely to be a harbinger of more such breaches as the applicant’s medical career progressed. Moreover, the authors noted that:

increasing public scrutiny of physicians’ ethical behavior is likely to put pressure on training programs to enforce strict rules of conduct, beginning with the application process.

I think it’s worth taking a closer look at what “professionalism” encompasses and at why it would be important to a professional community (like the professional community of physicians). To do this, let’s go way back to an era where physicians were working very hard to distinguish themselves from some of the other thinkers and purveyors of services in the public square – the time when the physicians known as the Hippocratics were flourishing in ancient Greece.

These physicians were working to make medicine a more scientific practice. They sought not just ways to heal, but an understanding of why these treatments were effective (and of how the bodies they were treating worked). But another big part of what the Hippocratics were trying to do involved establishing standards to professionalize their healing practices – and trying to establish a public reputation that would leave the public with a good opinion of learned medicine. After all, they weren’t necessarily pursuing medical knowledge for its own sake, but because they wanted to use it to help patients (and to make a living from providing these services). However, getting patients depended on being judged trustworthy by the people who might need treatment.

Professionalism, in other words, had to do not only with the relationship between members of the professional community but also with the relationship between that professional community and the larger society in which it was embedded.

The physicians in this group we’re calling the Hippocratics left a number of writings, including a statement of their responsibilities called “The Oath”. It’s worth noting that the Hippocratic corpus contains a diversity of works that reflect some significant differences of opinion among the physicians in this community – including some works (on abortion and surgery, for example) that seem to contradict some of the specific claims of “The Oath”. Still, “The Oath” gives us pretty good insight into the kind of concerns that would motivate a community of practitioners who were trying to professionalize.

We’re going to look at “The Oath” in its entirety, with my commentary interspersed. I’m using the translation by J. Chadwick in Hippocratic Writings, edited by G.E.R. Lloyd.

I swear by Apollo the healer, by Aesculapius, by Health and all the powers of healing, and call to witness all the gods and goddesses that I may keep this Oath and Promise to the best of my ability and judgment.

In other words, it’s a serious oath.

I will pay the same respect to my master in the Science as to my parents and share my life with him and pay all my debts to him. I will regard his sons as my brothers and teach them the Science, if they desire to learn it, without fee or contract.

This is a recognition of the physician’s debt to his professional community, those who taught him. It’s also a recognition of his duty to educate the next generation of the profession.

I will hand on precepts, lectures and all other learning to my sons, to those of my master and to those pupils duly apprenticed and sworn, and to none other.

This part is all about keeping trade secrets secret. The assumption was that learned medicine involved knowledge that should not be shared with everyone, especially because a lot of people wouldn’t have the wisdom or intelligence or good character to use it appropriately. Also, given that these physicians wanted to be able to earn a living from their healing practices, they needed to keep something of a monopoly on this knowledge.

I will use my power to help the sick to the best of my ability and judgment; I will abstain from harming or wronging any man by it.

Here’s the recognition of the physician’s duty to his patients, the well-known commitment to do no harm. Obviously, this commitment is in the patients’ interests, but it’s also tied to the reputation of the professional community. Maintaining good stats, as it were, by not doing any harm should be expected to raise the community’s opinion of the profession of learned medicine.

I will not give a fatal draught to anyone if I am asked, nor will I suggest any such thing. Neither will I give a woman means to procure an abortion.

These two sentences forbid the physician’s participation in euthanasia or abortion. Note, however, that other writings in the Hippocratic corpus indicate that physicians in this tradition did participate in such procedures. Maybe this was a matter of local variations in what the physicians (and the public they served) found acceptable. Maybe there was a healthy debate among the Hippocratics about these practices.

I will be chaste and religious in my life and in my practice.

This part basically calls upon the physician to conduct himself as a good person. After all, the reputation of the whole profession would be connected, at least in the public’s view, to the reputation of individual practitioners.

I will not cut, even for the stone, but I will leave such procedures to the practitioners of that craft.

Cutting was the turf of surgeons, not physicians. Here, too, there are other writings in the Hippocratic corpus that indicate that physicians in this tradition did some surgery. However, before the germ theory of disease or the discovery of antibiotics, you might imagine that performing surgery could lead to a lot of complications, running afoul of the precept to do no harm. Again, that was going to hurt the professional community’s stats, so it seemed reasonable just to leave it to the surgeons and let them worry about maintaining their own reputation.

Whenever I go into a house, I will go to help the sick and never with the intention of doing harm or injury.

This reads as an awareness of the physician’s power and of the responsibilities that come with it. If patients are trusting the physician and giving him this privileged access, for the good of the professional community he had better live up to that trust.

I will not abuse my position to indulge in sexual contacts with the bodies of women or men, whether they be freemen or slaves.

This is more of the same. Having privileged access means you have the opportunity to abuse it, but that kind of abuse could tarnish the reputation of the whole profession, even of physicians whose conduct met the highest standards of integrity.

Whatever I see or hear, professionally or privately, which ought not to be divulged, I will keep secret and tell no one.

To modern eyes, this part might suggest a commitment to maintain patient privacy. It’s more likely, however, that this was another admonition to protect the trade secrets of the professional community.

If, therefore, I observe this Oath and do not violate it, may I prosper both in my life and in my profession, earning good repute among all men for all time. If I transgress and forswear this Oath, may my lot be otherwise.

“Swear to God and hope to die, stick a needle in my eye!” Did we mention that it’s a serious oath?

The main thing I think is worth noticing here is the extent to which professionalism is driven by a need for the professional community to build good relations with the larger society – the source of their clients. Pick any modern code of conduct from a professional society and you will see the emphasis on duties to those clients, and to the larger public those clients inhabit, but this emphasis is at least as important for the professional community as for the people their profession is meant to serve. The code describes the conduct that members should exhibit to earn the trust of the public, without which they won’t get to practice their profession – or, at any rate, they might not be viewed as having special skills worth paying for, or as being the kind of people who could be trusted not to use those special skills against you.

Professionalism is not idealistic, then, but extremely pragmatic.

Friday Sprog Blogging: what is Friday Sprog Blogging about?

Back in January 2006, when my blog moved to ScienceBlogs, I put up a post the first Friday that I thought was going to be a one-off, a reconstruction of a conversation I had with my kids (then 4.5 and 6.5 years of age) that struck me as having a distinctly science-y nature. As it happened, almost every Friday since then, we have posted a conversation (or artwork, or something along those lines) that we have had about science.

This week, the tradition moves to Scientopia.

Dr. Free-Ride: Because the blog just moved from ScienceBlogs to Scientopia, I wanted you to explain what the Friday Sprog Blogging is about.

Younger offspring: Friday Sprog Blogging is mostly about talking about stuff scientific, and typing it down on the blog, so other people could give feedback and learn about stuff that they didn’t really learn about before.

Dr. Free-Ride: What do you think, elder offspring? What’s the Friday Sprog Blogging about?

Elder offspring: Well, it’s about you talking to your kids about something science-y, and then you type it, and then people give feedback like, “Oh my gosh, this is so cute!” or “Oh my gosh, your kids are so smart!” or one of those things.

Dr. Free-Ride: You think that’s what it’s about, an affirmation of how cute you are or how smart you are?

Elder offspring: Yes.

Dr. Free-Ride: You think that’s the only value people get out of the Friday Sprog Blog?

Elder offspring: Yes.

Younger offspring: I think that’s really wrong. I think that’s off.

Dr. Free-Ride: So, what other kind of value do you think people get from the Friday Sprog Blog?

Younger offspring: Well, they could feed back like, “I didn’t know this was true! Can you tell me more about it in the next blog?” Or something like that.

Dr. Free-Ride: So, you think it actually gets people interested in particular scientific questions or particular areas of science that they might explore further?

Younger offspring: Uh huh.

Dr. Free-Ride: Do you think it might also be of interest to people who maybe have sprogs of their own and are trying to figure out how to talk with them about stuff as their little kids are learning stuff?

Younger offspring: Uh huh.

Elder offspring: First of all, I’m not a little kid.

Dr. Free-Ride: Well, you’re not anymore, but when we started this four-and-a-half years ago, you were. You were just six-and-a-half.

Younger offspring: Can I say something?

Dr. Free-Ride: Sure.

Younger offspring: Hi, Little Isis! Hi Minnow! Hi PharmKid! Hi PalKid!

Dr. Free-Ride: OK, your shout-outs* are noted. Anyway, elder offspring, you used to be little when we started this. You were in kindergarten —

Elder offspring: First grade.

Dr. Free-Ride: Still, in January of 2006, arguably, you were littler than you are now.

Elder offspring: “Smaller” is the correct grammar.

Dr. Free-Ride: Fine, smaller. But, don’t you think that parents sometimes might have questions about whether they can really talk to their young kids about science? Don’t you think sometimes parents might be anxious and think, “Oooh, I might get this wrong. Oooh, I should probably just wait until my kid is in school and the science teachers in school can teach them all they need to know”?

Younger offspring: No, I don’t think people should do that. I think kids should start learning about science when they’re young and before they go on to science classes, like in third grade.

Dr. Free-Ride: Why do you think kids should learn while they’re young?

Younger offspring: Well, while they’re really young, and they learn more than just third grade science, then they’ll get smarter, and if you learn something when you’re older, it’s hard, ’cause you don’t have much time to get better at it.

Dr. Free-Ride: OK, I hadn’t really thought of it that way. What I was thinking — and maybe it was just because you two were my kids — my sense was that little kids seem to want to learn about everything in their world, about how everything works, and about how to figure out stuff that they don’t know yet.

Younger offspring: Well, we learned how to talk. And that’s because we’ve been listening to you, right?

Dr. Free-Ride: That’s part of it. I think there’s probably more to it than that. But elder offspring, you don’t think the Friday Sprog Blog is at all interesting or useful to people who are trying to figure out how to interact with their kids’ questions about the world and how it works?

Elder offspring: Well, we all know we can let the adults make their own decisions because, as we all know, adults are perfect and they do everything correctly and they are the supreme idols for everybody.

Dr. Free-Ride: You know what, even I can tell that that’s your sarcastic voice.

Younger offspring: Yes, mother, I’ll follow your command!

Dr. Free-Ride: I think something you guys might not realize so much is that, a lot of times adults, and especially parents, feel really nervous — feel like they’re supposed to know stuff that they don’t actually know.

Younger offspring: Is that you and [Dr. Free-Ride’s better half]?

Dr. Free-Ride: I think that’s everyone. And I think sometimes especially parents trying to figure out how to deal with really young kids, and trying to help those kids figure out the world that they’re in, those parents sometimes feel nervous about having to make it up as they go. And I guess one of the things that happened with the Friday Sprog Blog that I didn’t expect would happen is it seemed like it ended up being a little bit of a “meta” conversation about, here’s how to talk to your kids without necessarily teaching them — but here’s how to keep the conversation going about how to figure out your world. And you guys are still figuring out your world, right? Even though you know it a lot better than you did in January of 2006?

Elder offspring: I know that when I’m an adult I will know everything, and there will be no need to study now when I’m young and foolish.

Dr. Free-Ride: Again with the sarcastic voice!

Younger offspring: Hee!

Dr. Free-Ride: So, we’re going to keep up the Friday Sprog Blogging on Scientopia?

Elder offspring: Yes.

Younger offspring: Yes! But is there any other place on Scientopia for kids?

Dr. Free-Ride: Well, there’s a whole blog called Child’s Play devoted to how kids’ brains develop.

Elder offspring: As kids get to puberty, their brains grow huge, soaking up knowledge.

Dr. Free-Ride: You know what else they’re soaking up besides knowledge at puberty, kiddo? They’re soaking up the hormones that make the brain a little bit unpredictable for a few years. That’s something that we have to look forward to, and I guess the Friday Sprog Blogs might start getting into the adolescent at puberty brain chemistry wacky stage soon.
______
*Or should that be shouts-out?

When applicants for medical residencies plagiarize.


Long-time readers of this blog will know that plagiarism is a topic that comes up with some regularity, sometimes fueled by “kids today!” stories from the mainstream media, and sometimes due to actual research on plagiarism in different educational and professional spheres.

Today, let’s have a look at a report of one such investigation, “Plagiarism in Residency Application Essays,” published July 20, 2010 in Annals of Internal Medicine. The investigators looked at the personal statements applicants wrote (or, in some cases, “wrote”) as part of their application to residency programs at Brigham and Women’s Hospital. As they describe their study:

The primary goals of this investigation were to estimate the prevalence of plagiarism in applicants’ personal statements at our institution and to determine the association of plagiarism with demographic, educational, and experience related characteristics of the applicants. (112)

The people applying to residency programs have already successfully completed medical school. The residency is an additional part of their training to help them prepare to practice a particular medical specialty. And, the personal statement is a standard part of what’s involved in applying for a residency:

All applicants to U.S. residency programs must complete an original essay known as the “personal statement.” The format is free-form, the content is not specified, and expectations may vary by specialty. Common themes include the motivation for seeking training in a chosen specialty, the factors that affect suitability for a field or program, a critical incident that affected the applicant’s career choice, and circumstances that distinguish the applicant from others. (112)

There are some fairly commonsense reasons to expect that these personal statements ought to be original work, written by the applicant rather than copied from some other source. After all, the personal essay represents the applicant to the residency program, not as a transcript or a set of test scores but as a person. The essay gives insight into why the applicant is interested in a particular medical specialty, what training experiences and life experiences might bear on his or her motivation or likelihood of success, what kind of personal qualities he or she will bring to the table.

Also, since plagiarism is explicitly forbidden, these essays may give insight into the applicant’s personal and academic integrity, or at least into his or her grasp of rudimentary rules of scholarship:

The ERAS [Electronic Residency Application Service] also warns applicants that “any substantiated findings of plagiarism may result in reporting of such findings to the programs to which [they] apply now and in the future”. Applicants must certify that work is accurate and original before an ERAS application is complete. (112)

In the study, the investigators performed an analysis of the personal statements in residency program applications to Brigham and Women’s Hospital over an interval of about 18 months. They analyzed 4975 essays using software that compared them with a database that included previously submitted essays, published works, and Internet pages.

For the purposes of the study, the researchers defined evidence of plagiarism as a match of more than 10% of an essay to an existing work. Since the software was flagging matching strings of words between the essays and the sources in the database, this methodology may well have missed instances of plagiarism where the plagiarist changed a word here or there.
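
The paper doesn’t spell out the matching algorithm, but the basic idea (flag an essay if more than 10% of it matches text already in the database) is easy to sketch. Here is a minimal toy version in Python; it is my own construction, not the software the researchers used, and the six-word window is an arbitrary choice:

def word_ngrams(text, n=6):
    # All n-word sequences in a text, lowercased, as a set.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_fraction(essay, corpus_texts, n=6):
    # Fraction of the essay's n-grams that also appear somewhere in the corpus.
    essay_grams = word_ngrams(essay, n)
    if not essay_grams:
        return 0.0
    corpus_grams = set()
    for text in corpus_texts:
        corpus_grams |= word_ngrams(text, n)
    return sum(1 for gram in essay_grams if gram in corpus_grams) / len(essay_grams)

def flag_for_plagiarism(essay, corpus_texts, threshold=0.10):
    # The study's operational definition: evidence of plagiarism = more than a 10% match.
    return match_fraction(essay, corpus_texts) > threshold

The sketch also makes the limitation the authors acknowledge concrete: change a word here or there and the exact word-sequence matches break up.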

It’s also worth noting that the authors point, in the Discussion section of the paper, to the following definition of plagiarism:

Plagiarism may be defined as “the action or practice of taking someone else’s work, idea, etc., and passing it off as one’s own; literary theft”. (114)

This definition seems (at least to my eye) to make intent an element of the crime. As we’ve discussed before, this requirement is by no means a standard part of the definition of plagiarism.

What did this research find? In the 4975 essays analyzed, they detected evidence of plagiarism (i.e., a match of more than 10%) in 5.2% of the essays, for an incidence of a little more than one plagiarized essay in 20. Rather than relying solely on the software analysis, the researchers examined the essays the software flagged for plagiarism to rule out false positives. (They found none.)

I’m not sure whether this frequency of plagiarism is unusually high (or unusually low). However, for a personal statement, I reckon this is higher than it should be. Again, what better source could there be for your personal statement than yourself? Still, we might want some data on the frequency of plagiarism in personal statements for other sorts of things to get a better sense of whether the results of this study indicate a special problem with people applying for medical residencies, or whether they reflect a basic human frailty of which people applying for medical residencies also partake.

The authors also report demographic trends that emerged in their results. They found a higher incidence of plagiarism among the applicants who were:

  • international (which included non-U.S. citizens and those who had attended medical school outside the U.S.)
  • older
  • fluent in languages other than English
  • applying for a residency with previous residency training under their belts

They found a lower incidence of plagiarism among the applicants who:

  • were members of Alpha Omega Alpha (a medical honor society)
  • had research experience
  • had volunteer experience
  • had higher scores on the U.S. Medical Licensing Exam Step 1

The authors offer no hypotheses about causal mechanisms that might account for these correlations, and it seems likely that more research is required to tease out the factors that might contribute to these demographic differences, not to mention strategies that might address them. (I’m guessing that the applicants with research experience and/or volunteer experience had an easier time finding stuff to write about in their personal essays.)

One might reasonably ask whether plagiarism in these personal essays is a problem that ought to worry those training the next generation of physicians. The authors of this study argue that it is. They write:

First, residency selection committees would probably find misrepresentation on the application to be a strong negative indicator of future performance as a resident. The Accreditation Council for Graduate Medical Education has deemed professionalism 1 of the 6 core competencies to be taught and assessed in undergraduate and graduate medical education. We believe that program directors would find a breach of professionalism in an application to be an unacceptable baseline from which to begin residency. Second, lapses in professionalism in medical school and residency training can be predictive of future disciplinary action by state medical boards. Third, increasing public scrutiny of physicians’ ethical behavior is likely to put pressure on training programs to enforce strict rules of conduct, beginning with the application process. (114-115)

The presumption is that honesty is a quality that physicians (and those training to be physicians) ought to display — that there is something wrong with lying not only to the patients you are treating but also to other members of your professional community. Indeed, the “professionalism” to which the authors refer is important in large part because it allows members of the larger public to recognize the professional community of physicians as possessing the necessary skills, judgment, and trustworthiness. Without this recognition, why should your average patient trust an M.D. any more than a snake-oil salesman?

In this study, as in all studies with human subjects, the researchers were required to look out for the interests of their human subjects — here, the applicants to the residency programs who wrote the personal essays that were analyzed. Protecting their interests included maintaining the anonymity of the authors of the essays in the context of the study. This, in turn, means that it’s possible that the plagiarism identified in the study may not have been identified by the residency selection committees who were also reading these essays.

Finally, near the end of the paper, the authors offer recommendations for how to address the general problem of plagiarism in applications for residency programs:

Ideally, the submission of applicant essays for comparison in a centralized database would occur at the level of ERAS, which would make this process unavoidable for applicants. This method also would eliminate the difficulties inherent in having multiple institutions using plagiarism detection software programs simultaneously, because submitted essays become part of the database for future submissions. Furthermore, manual inspection of the similarity report itself rather than simply reporting the score would allow individual program directors to make independent judgments about the seriousness of any putative offense. Finally, the mere knowledge that essays are being screened by plagiarism-detection software may substantially deter would-be plagiarizers. (119)

These recommendations are clearly leaning toward detecting plagiarism that has been committed, rather than being weighted towards prevention efforts. As they note, and as other researchers have found, an expectation that there will be a plagiarism screening may discourage applicants from committing plagiarism, but it’s possible that prevention efforts that depend on fear of detection may just end up separating the risk averse applicants from the gamblers.

Segal S, Gelfand BJ, Hurwitz S, Berkowitz L, Ashley SW, Nadel ES, & Katz JT (2010). Plagiarism in residency application essays. Annals of Internal Medicine, 153(2), 112-120. PMID: 20643991

Premeds, chemistry professors, pedagogy, and economics.

In comments on my earlier post in which I mused on the wisdom of having chemistry and physics courses serve to weed out an excess of premed students, Peter R. wrote:

1) There would be far fewer chemistry professors (albeit happier) if pre-med students did not take chemistry. Chemistry majors are always, and have always been, the minority of students in general and organic chemistry.

2) The idea that chemistry is a “weed-out” course is misleading, because it is not the chemistry instructor’s job to choose who goes to medical school. Our job is to determine how well our students learn chemistry. It was not the chemistry faculty that made chemistry a requirement, although they certainly benefit from it. The students “weed” themselves out.

These are observations worth discussing, not least because I think discussing them will help us become more aware of some of our assumptions about how colleges and universities ought to work.

Let’s start with the second observation first — that chemistry professors are really only charged with evaluating student performance in the context of the course requirements for the particular chemistry course they’re teaching.

I agree that this is what the job description is. You teach the class, you assess the students (with problem sets, exams, lab reports, and the like), and you assign the appropriate grade. As I’ve discussed before, there are differing philosophies on what it means to assign the appropriate grade — whether the grade is supposed to reflect something like the student’s distance from the Platonic form of “getting” the material, or whether instead it should reflect how many standard deviations the student has scored from the mean for the class, whether that mean is relatively high or relatively low on an absolute scale. But your garden variety chemistry professor shouldn’t also be tasked with determining which students are likely to succeed in medical school or to make good physicians* because your garden variety chemistry professor has very little basis for making that determination, having never been a physician or even a medical student.
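
To make that second grading philosophy concrete, here is a minimal sketch of what grading on a strict curve amounts to: letter grades assigned by how many standard deviations a score sits above or below the class mean. The cutoffs below are my own arbitrary choices for illustration, not anyone’s actual policy:

from statistics import mean, stdev

def curve_grades(scores, cutoffs=((1.0, "A"), (0.0, "B"), (-1.0, "C"), (-2.0, "D"))):
    # Assign letter grades by z-score: distance from the class mean in standard deviations.
    mu, sigma = mean(scores), stdev(scores)
    grades = []
    for score in scores:
        z = (score - mu) / sigma
        for cutoff, letter in cutoffs:
            if z >= cutoff:
                grades.append(letter)
                break
        else:
            grades.append("F")
    return grades

print(curve_grades([92, 85, 77, 70, 64, 55]))  # ['A', 'B', 'B', 'C', 'C', 'D']

Notice that on this scheme a student’s grade depends on where the rest of the class lands rather than on any absolute standard of “getting” the material.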

However, there are a couple of things that complicate this picture.

One is that I cannot help but feel that some chemistry professors end up adopting the grading-on-a-strict-bell-curve model because of the relatively large number of premeds compared to chemistry majors enrolled in the classes they teach. The assumption is that the chemistry majors will make up most of the As and Bs on that curve, while the teeming masses of premeds will make up most of the Cs, Ds, and Fs. (Premeds who end up making As are sometimes actively recruited to consider majoring in chemistry and perhaps even pursuing graduate studies in chemistry rather than medicine.)

This in itself wouldn’t necessarily be worrisome — maybe it would just be a reasonable prediction about the range of competency and motivation in the student population. But sometimes the prediction that premeds won’t learn organic chemistry (for example) as well as the chemistry majors seems to manifest itself in a pedagogy that puts less onus on the professor to teach the material and more onus on the students to learn it their own selves.

At which point, the professor in question is pretty much only determining how well the students learn chemistry, but not doing the teaching that you might have assumed was part of the job.

On the other hand, however, I think it’s an open question how medical schools would respond if chemistry professors suddenly got very serious about teaching all of their students — premeds included — in such a way that the vast majority of them learned the course material, and learned it very well. The anecdotal reports I heard (while I was teaching in an MCAT preparation course to help pay the bills during graduate school) suggested that a school where more premed students were getting As and Bs in chemistry was judged “easier” by medical school admission committees, while one where fewer premed students got As and Bs in chemistry was judged “more challenging”. If that’s true, that would seem to penalize students with professors who take pedagogy more seriously than the bell curve.

And that makes it seem an awful lot like medical school admissions committees really are pushing the weeding out onto chemistry professors.

Myself, I think that the ability to master the basics of general chemistry, or organic chemistry, or physical chemistry, is not the sort of thing that is (or ought to be) perfectly congruent with one’s major.** If taught well, the underlying principles of chemistry ought to be intelligible to almost any intelligent person (or at least, to more people than not). Assuming up front that a whole class of students one is teaching are constitutionally unable to learn the material is giving up at the very start. And regardless of the instrumental use that medical schools might get out of this stance, I think it rather undermines one’s teaching duty to one’s home department.

Now, onto the first observation, that there would be fewer chemistry professors if chemistry classes (whether “weeders” or not) were not required for admission to medical school.***

The situation is such that chemistry departments often exist to offer “service courses” to support pre-professional programs. In many universities (including my own), philosophy departments also justify their existence by their service courses (in our case, the large number of courses we offer that fulfill various general education requirements). It’s nice to be able to point to a curriculum that needs to be taught, not just by the lights of your own discipline (which, obviously, thinks that core material within that discipline is terribly important), but also by the lights of other disciplines — especially if those disciplines have multitudes of customers, er, students. This kind of demand means that, when you get the staffing to teach the coursework that is being demanded, you also get colleagues who are doing interesting research, who can add breadth to the courses you offer to your majors, and with whom it is productive (and fun) for you to interact.

But, especially in science departments, and especially at research-focused universities, this increased population of professors also leads to an increased demand for research funding, equipment, and lab space, and an increased demand for graduate students and technicians to keep the professors’ research projects moving forward. (Those graduate students are also in demand to do the grading in all those well-populated premed courses.)

Down the road, of course, this will mean more people with Ph.D.s competing for those professorial posts**** (which only exist in the numbers they do on account of the demand generated by premeds required to take the courses those departments’ professors teach).

This is not a huge incentive for chemistry professors (or chemistry graduate students) to question the common wisdom that general chemistry and organic chemistry (and maybe even biochemistry and physical chemistry) are absolutely essential preparation for medical school.

Perversely, the supply and demand equation also seems to act against reexamining the quality of the teaching in those required premed chemistry courses. After all, if you turn out premeds who are too smart, what are the chances that the senior faculty will die off at a reasonable rate and open up some jobs for the Ph.D. chemists they’ve trained?

_____
*Despite this, I will confess that the slogan “Save a life: fail a premed!” gained a certain traction with the chemistry TAs in my graduate program.

**If I didn’t already think that majors and the subjects that one is good at were separable, my friend the fine arts major who took math courses for fun would have pushed me in that direction.

***The claim that these chemistry professors would be happier depends, I think, on the current state of the transaction between premeds and chemistry professors, in which the students only care instrumentally for what the professors are offering and the professors have already decided that most of those premeds won’t be able to learn the material, or that they are diluting the contact between chemistry professors and chemistry majors, or what have you. I’m not saying that the claim is false, but like most counterfactual claims, how we evaluate it depends a lot on our hunches about what other moving parts in the situation might have relevant effects.

****And before that, for postdoctoral appointments.

Sexiest scientific couple?

The other day, a friend and I were having a chat about the curious inclination people (or at least bloggers) have towards compiling lists of sexy scientists, or sexy atheists, or sexy what have you. (I’m guessing it’s related to the same impulse to rate the “hotness” of one’s professors with the handy chili pepper icon on RateMyProfessor.com.)

That these lists are often assembled by men and populated by women obviously means there’s a certain narrowness to the definition of “sexy” in play. When the list is meant to identify the sexy (female) members of a profession that is largely male dominated, the focus on traits that are not necessarily viewed as enhancing one’s professional worth — indeed, traits that can end up being used by one’s colleagues as an excuse to discount one’s professional talents — can make them much more annoying than amusing.

Still, in the course of this chat, my friend jokingly suggested, “We should put together a calendar of the sexiest scientific couples!”

“There’s only one I could nominate,” I replied.
