Ethics and the First of April.

On the Twitters, journalist Lee Billings posted this:

In case anyone was wondering, this is an April-Fools-free zone. Misleading readers is a disreputable practice, even under auspices of fun.

I think this is a position worth pondering, especially today.

Regular readers will have noticed that I indulge in the occasional April Fool’s post.

In fact, I posted another today.

And, today’s April Fool’s post was notable (for me, anyway) in its departure from “surprising news about me and/or my blog” terrain. Instead, I was offering “commentary” on a news story that I made up — fake news that was outrageous but had just enough plausibility that the reader might entertain the possibility that it was true. (Lately, the “real” news strikes me as outrageous a lot of the time, so my suspension-of-disbelief muscles are more toned than they used to be.)

So Lee Billings is, basically, right: I was striving to mislead you, my readers, at least momentarily, for laughs.

Was it unethical for me to have done so? Have I fallen short of my duties to you by engaging in this tomfoolery?

Maybe this comes down to what we understand those duties to be.

I try always to make my own thinking on an issue clear — to explain my stand and to give you reasons for that stand. I also try to set out my uncertainties — the things I don’t know or the places I feel myself torn between different stances.

I actually did this in today’s April Fool’s post, even though I was giving my thoughts on the implications of a proposal that no one has made (yet).

When I’m responding to a news story, I accept that I have an obligation not to misrepresent the claims the story makes. This is not to say that I treat the source as authoritative — indeed, in a number of cases I have expressed my own views of the “spin” of the reporting, and of the details that are not discussed in a news story. And, I include a link to the source so readers can read it themselves, evaluate it themselves, and draw their own conclusions about whether I’ve represented the source fairly.

Today’s post had me responding to a news story that didn’t exist. Clearly, that’s a misrepresentation. Moreover, it means that the link I included to the news story didn’t actually go to the news story — more misrepresentation. However, the diligent reader who actually clicked on that link would be alerted to the fact that there was no such news story before getting into my presentation of the purported proposal or analysis of it.

Maybe this means that readers who were successfully misled by the post actually fell short of their duties to click those links and read that source material with a critical eye.

It’s possible, though, that I’m wrong about this — that you all want me to break things down so clearly and accurately that you never have to click a hyperlink, that you’d like me to dispense with ironic phrasing (yeah, right!), and so forth. My sense is that readers of this blog have been willing to shoulder their share of the cognitive burden, but if I’m mistaken about that, please use the comments to set me straight.

The other ethical worry one might have (and some have expressed) about today’s post is that my fake proposal might be taken up and advocated as a real proposal — which, in this case, I agree would be bad. If that were to happen, would I be responsible?

I guess I might be. But then so might authors of dystopian fiction whose ideas are embraced (and implemented) by people who have a different view of how the world should be. Personally, I think exploring the pitfalls of bad ideas before someone thinks to implement them could help us to actually find better ideas to implement. However, I suppose where bad ideas that get implemented come from is an empirical question.

Does anyone have a good way to get the empirical data that would answer it?

Question for the hivemind: workplace policies and MYOB.

The students in my “Ethics in Science” course have, as usual, reminded me why I love teaching. (Impressively, they manage to do this while generating ever larger quantities of the stuff I don’t love about teaching, written work that needs to be graded.) But, recently, they’ve given me some indications that my take on the world and theirs may differ in interesting ways.

For example, last week they discussed a case study in which a graduate student is trying to figure out what to do about his difficulties with his research in a lab where the most successful student is also romantically involved with the boss.

The discussion produced about the range of opinions you’d expect on the acceptability of this kind of relationship and its likely effects on the collegiality of the training environment.

But there was a certain flavor of response that really confused me. It boiled down to something like this: The boss and the fellow grad student are responsible adults who can date anyone they want. Get over it, get back to your research, and for goodness’ sake don’t go blabbing about their relationship, because if the department chair finds out about it, they could both get in big trouble, maybe even losing their jobs.

Am I wrong that there seems to be a contradiction here?

If the professor and his graduate student can get in official trouble, at the hands of the department chair, for their romantic involvement, doesn’t that suggest that the relationship is, in the official work context, not OK?

Or, looking at it from the other direction, if such a romance is something that they and any of their lab members who happen to have discovered it need to keep on the down-low, doesn’t this suggest that there is some problematic issue with the relationship? Otherwise, why is the secrecy necessary?

I’m thinking the crux of this response — they can date if they want to, but no one with authority over them must know about it — may be a presumption that workplace policies are unreasonably intrusive, especially when it comes to people’s personal lives. Still, it strikes me that at least some workplace policies might exist for good reasons — and that in some instances the personal lives of coworkers (and bosses) could have real impacts on the work environment.

Is “mind your own business” a reasonable policy here, official work policies be damned?

Social Studies: The Pressure to Procreate.

The following guest post was submitted by a reader who is struggling to balance social and familial expectations as she tries to pursue a career and delay having children. She submitted this post seeking feedback from readers who may have experienced this situation. She has requested to remain anonymous to maintain family peace, which is in a fragile state at the moment.

I adore children. I have a very sweet goddaughter who will be a year old next month, and I love her dearly. I also have an older goddaughter about to enter those dreaded teen years, and it’s exciting to watch her navigate this portion of her life. My husband’s best friend, whom we both consider a sister, just had a bouncing baby boy and I’m looking forward to hearing him call me “Aunty.” And there are two beautifully pregnant women in the family currently—both cousins, one with her first child, the other with her second. So I am surrounded by babies. That said, I personally do not have any children of my own. This has largely been the result of careful planning on the part of my husband and me. We have our own time line, but for many of our relatives the delay represents a huge social breach, and they are starting to bear down somewhat harshly.

I am a 28-year-old West Indian woman who married her childhood sweetheart, voluntarily, at the age of 18. He is Bengali. As a West Indian marrying into a Bengali family, you would think the transition would be easy to manage—we are from similar backgrounds after all. But it’s been surprisingly difficult. I’m not sure how much of it is a cultural difference though and how much is a generational difference. It seems to be a fair mix, though there are a fair number of young women who seem to be going the traditional route (i.e., getting married, having teh babiez, staying home, etc.). Now, you may also think to yourself, well, if you were childhood sweethearts, didn’t you know what you were getting into? Well, no. When I say childhood sweethearts, I mean real childhood sweethearts. He had a crush on me in the sixth grade! He brought me apple juice. We went to different high schools and reconnected in college, when we decided we wanted to get married. And we eloped, partly because we didn’t want a huge fuss made, and partly because we knew neither set of parents would agree to letting a pair of 18-year-olds get married.

Flash forward ten years to a recent baby shower, where the aunts were clucking as per normal when they spotted me. “When are you having babies?” I was asked. “Why don’t you want children?” “Don’t you like children?” “Your mother-in-law wants a grandbaby!” I managed to deflect all of this with good cheer as I normally do (e.g., “[The MIL] has [the family dog] to spoil!”) and for the most part my responses were met with jovial laughter. I’m a pro at this discussion, I thought. And I should be—I’m used to it.

And then one of them dropped a bomb on me: “What? Can’t you have children? You’re going to need a test tube baby!” she taunted. This declaration/announcement was made at the top of her lungs in front of a room of family and strangers, and I admit it stopped me in my tracks. It stopped most of the room too as a moment of somewhat uneasy silence unfolded. I wasn’t sure how to respond. I know I was embarrassed and angry all at once. For the record, I have nothing against IVF. I think that if it can help a couple have a baby when they’re having trouble conceiving, then they should go for it. Kate Clancy, who went through this process, was actually featured on CNN a few weeks ago. Her story is amazing. However, from this aunt’s tone, you could tell that you would be less of a woman if you needed a “test tube” baby. But that’s not the point. What I was reacting to was the assumption that there was something wrong with me because I hadn’t produced a brood of children yet at the ancient age of 28.

This is just the latest jab in the mounting pressure from all sides that feel I should have borne a child by now. My waistline is closely scrutinized, and the slightest bump is reason to be questioned. And since I’m not pregnant, I have no reason to carry any extra weight, so any extra bulges are evidence that I am just fat, and just don’t care. It’s become exhausting. This shouldn’t bother me, and it hasn’t for a long time, but what is starting to bother me is the derision that accompanies their statements. “We know you’re focused on your studies,” they say as a lead-in to the conversation. Studies?? What studies? I’ve been out of school for two years. I’ve been working—trying to establish a career. Do any of you actually know me? Actually know what I do?

I’m a successful blogger and published writer. I have an advanced degree. I’ve won numerous awards for academic accomplishments, been in countless science competitions, and I’m a successful professional. I help build leading websites and web tools. But none of that matters. Children to this group are a sort of cultural currency. I’ve been measured in public based on the bag I carry and the clothes I wear, and I am measured in private by the family by my apparent (lack of) fertility. And until I produce a child, I know I won’t measure up to their expectations—hell, even when I produce the child I won’t measure up. Partly because I am an outsider to their cultural background (and what will I know about raising children properly?) and partly because I plan to continue working instead of staying home and raising him or her, which is also somewhat unacceptable. (The hubby was once told that marrying a smart woman is fine, but it means the house will never be clean, that there will never be food on the table, and the children will run wild.) I feel these are personal decisions. Am I crazy?

The constant questioning adds another layer of annoyance. Will it detract from the joy when we do announce we’re expecting? Will there be a sense that we got pregnant because we were told to do so? Instead of “That’s wonderful!” will we get “It’s about time!”? Will they take credit for the fact that we’ve conceived? Again, I’m trying to see this from their perspective. This is a culture where women traditionally maintain the hearth of the home by remaining in it. I realize that I am somewhat of a puzzle to them and this may be their way of fitting me into their norms and expectations. But in trying to fit me in—if that’s what they’re doing—they’ve managed to minimize everything else that I’ve done. And I just don’t think that’s cool, man.

The hubby does not buy into the traditional view. He’s proud of me and my accomplishments and he deflects the baby question as often as I do. He does not think this should bother me, because at this point we both know that the family will not rest until we “prove” ourselves with a child. But I am exhausted from fielding comments and questions about my fertility. It’s not anyone’s business, but since it seems to be everyone’s business, I’m doing an impromptu cultural/gender study: ladies are you experiencing the same thing? Is this a cultural issue? Or a gender issue? Have you been through the same? How did you survive and when did it stop?

DonorsChoose Science Bloggers for Students Drive 2010.

Note to longtime readers: This post borrows heavily from posts I have written for past DonorsChoose drives. If you get a feeling of deja vu reading it, you’ve come by it honestly.

In the science-y sectors of the blogosphere, folks frequently bemoan the sorry state of the public’s scientific literacy and engagement. People fret about whether our children are learning what they should about science, math, and critical reasoning. Netizens speculate on the destination of the handbasket in which we seem to be riding.

In light of the big problems that seem insurmountable, we should welcome the opportunity to do something small that can have an immediate impact.

This year, from October 10th through November 9th, a number of science bloggers, whether networked, loosely affiliated, or proudly independent, will be teaming up with DonorsChoose in a philanthropic throwdown for public schools.

DonorsChoose is a site where public school teachers from around the U.S. submit requests for specific needs in their classrooms — from books to science kits, overhead projectors to notebook paper, computer software to field trips — that they can’t meet with the funds they get from their schools (or from donations from their students’ families). Donors then choose which projects they’d like to fund and kick in the money, whether it’s a little or a lot, to help a proposal become a reality.

Over the last few years, bloggers have rallied their readers to contribute what they can to help fund classroom proposals through DonorsChoose, especially proposals for projects around math and science, raising hundreds of thousands of dollars, funding hundreds of classroom projects, and impacting thousands of students.

Which is great. But there are a whole lot of classrooms out there that still need help.

As economic experts scan the horizon for hopeful signs and note the harbingers of economic recovery, we should not forget that school budgets are still hurting (and are worse, in many cases, than they were last school year, since one-time lumps of stimulus money are gone now). Indeed, public school teachers have been scraping for resources since long before Wall Street’s financial crisis started. Theirs is a less dramatic crisis than a bank failure, but it’s here and it’s real and we can’t afford to wait around for lawmakers on the federal or state level to fix it.

The kids in these classrooms haven’t been making foolish investments. They’ve just been coming to school, expecting to be taught what they need to learn, hoping that learning will be fun. They’re our future scientists, doctors, teachers, decision-makers, care-providers, and neighbors. To create the scientifically literate world we want to live in, let’s help give these kids the education they deserve.

One classroom project at a time, we can make things better for these kids. When we join forces with each other, even small contributions can make a big difference.

The challenge this year runs October 10 through November 9. We’re overlapping with Earth Science Week (October 10-16, 2010) and National Chemistry Week (October 17-23, 2010), a nice chance for earth science and chemistry fans to add a little philanthropy to their celebrations. There are a bunch of Scientopia bloggers mounting challenges this year (check out some of their challenge pages on our leaderboard), as well as bloggers from other networks (which you can see represented on the challenge’s motherboard). And, since today is the official kick-off, there is plenty of time for other bloggers and their readers to enter the fray!

How It Works:
Follow the links above to your chosen blogger’s challenge on the DonorsChoose website.

Pick a project from the slate the blogger has selected. Or more than one project, if you just can’t choose. (Or, if you really can’t choose, just go with the “Give to the most urgent project” option at the top of the page.)

Donate.

(If you’re a loyal reader of multiple participating blogs and you don’t want to play favorites, you can, of course, donate to multiple challenges! But you’re also allowed to play favorites.)

DonorsChoose will send you a confirmation email. Hold onto it; some bloggers (including me) will be offering donors nifty prizes. Details about the prizes and how to get them will be posted here soon!

Sit back and watch the challenges inch towards their goals, and check the leaderboards to see how many students will be impacted by your generosity.

Even if you can’t make a donation, you can still help!
Spread the word about these challenges using web 2.0 social media modalities. Link to your favorite blogger’s challenge page from your MySpace page, or put up a link on Facebook, or FriendFeed, or LiveJournal (or Friendster, or Xanga, or …). Tweet about it on Twitter. Sharing your enthusiasm for this cause may inspire some of your contacts who do have a little money to get involved and give.

Here’s the permalink to my giving page.

I’ll be sharing links to other giving pages, plus details about some fabulous “thank you” prizes, soon. Thanks in advance for your generosity.

Practical chemical engineering.

It’s day two of my training course, and as I contemplate my mug of decaf, I am suddenly flashing back to a question that was rumored to be part of the chemical engineering qualifying exam in my chemistry graduate program. As it’s an intriguing problem, I thought I’d share it here:

In the dead of winter, a professor sends his grad student out into the cold to fetch him a hot beverage from the cafe. “Coffee with two creams, and make sure it’s HOT when it gets to me!” the professor barks.

Shivering from fear as much as cold, the grad student procures a 12-ounce styrofoam cup of hot coffee and two little containers (maybe 20 mL each) of half and half at the cafe. To maximize the temperature of the coffee when it is delivered to the prof, should he add the half and half to the coffee before he walks it through the cold or after?

Feel free to work together on this problem, and please show your work in the comments.
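
For anyone who wants to check an intuition numerically before posting, here is a minimal sketch of one way to set the problem up, using Newton’s law of cooling. All the numbers and modeling choices in it are my own assumptions rather than anything from the original exam question, so treat it as a starting point, not an answer key.

```python
# A minimal numerical sketch of Newton's law of cooling, dT/dt = -k (T - T_ambient),
# integrated with a crude Euler step. Every number and modeling choice here is my
# own guess, not part of the original problem: coffee at 90 C, half and half at 5 C
# (and assumed to hold that temperature in the student's pocket), outside air at
# -5 C, a ten-minute walk, and a cooling constant that scales inversely with the
# volume in the cup (same cup, roughly the same exposed surface either way).

COFFEE_C, CREAM_C, OUTSIDE_C = 90.0, 5.0, -5.0   # temperatures in Celsius
COFFEE_ML, CREAM_ML = 355.0, 40.0                # 12 oz coffee, two 20 mL creamers
K_BASE, WALK_MIN = 0.04, 10.0                    # cooling constant (1/min), walk time


def mix(temp_a, vol_a, temp_b, vol_b):
    """Temperature after mixing two water-like liquids with equal specific heats."""
    return (temp_a * vol_a + temp_b * vol_b) / (vol_a + vol_b)


def cool(temp, volume_ml, minutes, dt=0.01):
    """Euler-integrate Newton's law of cooling for a cup holding volume_ml."""
    k = K_BASE * (COFFEE_ML / volume_ml)  # more liquid, same surface: slower cooling
    for _ in range(round(minutes / dt)):
        temp += -k * (temp - OUTSIDE_C) * dt
    return temp


# Strategy 1: stir the half and half in at the cafe, then walk.
cream_first = cool(mix(COFFEE_C, COFFEE_ML, CREAM_C, CREAM_ML),
                   COFFEE_ML + CREAM_ML, WALK_MIN)

# Strategy 2: walk with black coffee, stir the half and half in on arrival.
cream_last = mix(cool(COFFEE_C, COFFEE_ML, WALK_MIN),
                 COFFEE_ML, CREAM_C, CREAM_ML)

print(f"cream at the cafe:   {cream_first:.1f} C on delivery")
print(f"cream at the office: {cream_last:.1f} C on delivery")
```

With these made-up numbers, adding the half and half at the cafe arrives a couple of degrees warmer; drop the volume scaling of the cooling constant (make k the same in both runs) and the ordering flips by a fraction of a degree, which is a hint about where the interesting physics lives.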

What kind of problem is it when data do not support findings?

And, whose problem is it?

Yesterday, The Boston Globe published an article about Harvard University psychologist Marc Hauser, a researcher embarking on a leave from his appointment in the wake of a retraction and a finding of scientific misconduct in his lab. From the article:

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year. …

Much remains unclear, including why the investigation took so long, the specifics of the misconduct, and whether Hauser’s leave is a punishment for his actions.

The retraction, submitted by Hauser and two co-authors, is to be published in a future issue of Cognition, according to the editor. It says that, “An internal examination at Harvard University . . . found that the data do not support the reported findings. We therefore are retracting this article.’’

The paper tested cotton-top tamarin monkeys’ ability to learn generalized patterns, an ability that human infants had been found to have, and that may be critical for learning language. The paper found that the monkeys were able to learn patterns, suggesting that this was not the critical cognitive building block that explains humans’ ability to learn language. In doing such experiments, researchers videotape the animals to analyze each trial and provide a record of their raw data. …

The editor of Cognition, Gerry Altmann, said in an interview that he had not been told what specific errors had been made in the paper, which is unusual. “Generally when a manuscript is withdrawn, in my experience at any rate, we know a little more background than is actually published in the retraction,’’ he said. “The data not supporting the findings is ambiguous.’’

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study,’’ Marcus wrote in an e-mail. “I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

The investigation also raised questions about two other papers co-authored by Hauser. The journal Proceedings of the Royal Society B published a correction last month to a 2007 study. The correction, published after the British journal was notified of the Harvard investigation, said video records and field notes of one of the co-authors were incomplete. Hauser and a colleague redid the three main experiments and the new findings were the same as in the original paper. …

“This retraction creates a quandary for those of us in the field about whether other results are to be trusted as well, especially since there are other papers currently being reconsidered by other journals as well,’’ Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, said in an e-mail. “If scientists can’t trust published papers, the whole process breaks down.’’ …

In 1995, he [Hauser] was the lead author of a paper in the Proceedings of the National Academy of Sciences that looked at whether cotton-top tamarins are able to recognize themselves in a mirror. Self-recognition was something that set humans and other primates, such as chimpanzees and orangutans, apart from other animals, and no one had shown that monkeys had this ability.

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

A quick rundown of what we get from this article:

  • Someone raised a concern about scientific misconduct that led to the Harvard inquiry, which in turn led to the discovery of “evidence of misconduct” in Hauser’s lab.
  • We don’t, however, have an identification of what kind of misconduct is suggested by the evidence (fabrication? falsification? plagiarism? other serious deviations from accepted practices?) or of who exactly committed it (Hauser or one of the other people in his lab).
  • At least one paper has been retracted because “the data do not support the reported findings”.
  • However, we don’t know the precise issue with the data here — e.g., whether the reported findings were bolstered by reported data that turned out to be fabricated or falsified (and are thus not being included anymore in “the data”).
  • Apparently, the editor of the journal that published the retracted paper doesn’t know the precise issue with the data either, and found this unusual enough, as retractions go, to merit comment.
  • Other papers from the Hauser group may be under investigation for similar reasons at this point, and other researchers in the field seem to be nervous about those papers and their reliability in light of the ongoing inquiry and the retraction of the paper in Cognition.

There’s already been lots of good commentary on what might be going on with the Hauser case. (I say “might” because there are many facts still not in evidence to those of us not actually on the Harvard inquiry panel. As such, I think it’s necessary to refrain from drawing conclusions not supported by the facts that are in evidence.)

John Hawks situates the Hauser case in terms of the problem of subjective data.

Melody has a nice discussion of the political context of getting research submitted to journals, approved by peer reviewers, and anointed as knowledge.

David Dobbs wonders whether the effects of the Hauser case (and of the publicity it’s getting) will mean backing off from overly strong conclusions drawn from subjective data, or backing off too far from a “hot” scientific field that may still have a bead on some important phenomena in our world.

Drugmonkey critiques the Boston Globe reporting and reminds us that failure to replicate a finding is not evidence of scientific misconduct or fraud. That’s a hugely important point, and one that bears repeating. Repeatedly.

This is the kind of territory where we start to notice common misunderstandings about how science works. It’s usually not the case that we can cut nature at the joints along nicely dotted lines that indicate just where those cuts should be. Collecting reliable data and objectively interpreting that data is hard work. Sometimes as we go, we learn more about better conditions for collecting reliable data, or better procedures for interpreting the data without letting our cognitive biases do the driving. And sometimes, a data set we took to be reliable and representative of the phenomenon we’re trying to understand just isn’t.

That’s part of why scientific conclusions are always tentative. Scientists expect to update their current conclusions in the light of new results down the road — and in the light of our awareness that some of our old results just weren’t as solid or reproducible as we took them to be. It’s good to be sure they’re reproducible enough before you announce a finding to your scientific peers, but to be absolutely certain of total reproducibility, you have to solve the problem of induction, which isn’t terribly practical.

Honest scientific work can lead to incorrect conclusions, either because that honest work yielded wonky data from which to draw conclusions, or because good data can still be consistent with incorrect conclusions.

And, there’s a similar kind of disconnect we should watch out for. For the “corrected” 2007 paper in Proceedings of the Royal Society B, the Boston Globe article reports that videotapes and field notes (the sources of the data to support the reported conclusions) were “incomplete”. But, Hauser and a colleague redid the experiments and found data that supported the conclusions reported in this paper. One might think that as long as reported results are reproducible, they’re necessarily sufficiently ethical and scientifically sound and all that good stuff. That’s not how scientific knowledge-building works. The rules of the game are that you lay your data-cards on the table and base your findings on those data. Chancing upon an answer that turns out to be right but isn’t supported by the data you actually have doesn’t count, nor does having a really strong hunch that turns out to be right. In the scientific realm, empirical data is our basis for knowing what we know about the phenomena. Thus, doing the experiments over in the face of insufficient data is not “playing it safe” so much as “doing the job you were supposed to have done in the first place”.

Now, given the relative paucity of facts in this particular case, I find myself interested in a more general question: What are the ethical duties of a PI who discovers that he has published a paper whose findings are not, in fact, supported by the data?

It seems reasonable that at least one of his or her duties involves correcting the scientific literature.

This could involve retracting the paper, in essence saying, “Actually, we can’t conclude this based on the data we have. Our bad!”

It could also involve correcting the paper, saying, “We couldn’t conclude this based on the data we have; instead, we should conclude this other thing,” or, “We couldn’t conclude this based on the data we originally reported, but we’ve gone and done more experiments (or have repeated the experiments we described), obtained these data, and are now confident that, on the basis of these data, the conclusion is well-supported.”

If faulty data were reported, I would think that the retraction or correction should probably explain how the data were faulty — what’s wrong with them? If the problem had its source in an honest mistake, it might also be valuable to identify that honest mistake so other researchers could avoid it themselves. (Surely this would be a kindness; is it also a duty?)

Beyond correcting the scientific literature, does the PI in this situation have other relevant duties?

Would these involve ratcheting up the scrutiny of data within the lab group in advance of future papers submitted for publication? Taking the skepticism of other researchers in the field more seriously and working that much harder to build a compelling case for conclusions from the data? (Or, perhaps, working hard to identify the ways that the data might argue against the expected conclusion?) Making serious efforts to eliminate as much subjectivity from the data as possible?

Assuming the PI hasn’t fabricated or falsified the data (and that if someone in the lab group has, that person has been benched, at least for the foreseeable future), what kind of steps ought that PI to take to make things right — not just for the particular problematic paper(s), but for his or her whole research group moving forward and interacting with other researchers in the field? How can they earn back trust?

Building a critical reasoning course: getting started with the external constraints.

My Fall semester is rapidly approaching and I am still in the throes of preparing to teach a course I have never taught before. The course is called “Logic and Critical Reasoning.” Here’s the catalog description of the course:

Basic concepts of logic; goals and standards of both deductive and inductive reasoning; techniques of argument analysis and assessment; evaluation of evidence; language and definition; fallacies.

The course involves some amount of symbolic logic (and truth-tables and that good stuff) but also a lot of attention to argumentation “in the wild”, in the written and spoken word. My department usually teaches multiple sections of the course each semester, but it’s not the case that we all march in lockstep with identical textbooks, syllabi, and assignments.

The downside of academic freedom, when applied to teaching a course like this, is that you have to figure out your own plan.

Nonetheless, since critical reasoning is the kind of thing I think we need more of in the world, I’m excited about having the opportunity to teach the course. And, at Tom Levenson‘s suggestion, I’m going to blog the process of planning the course. Perhaps you all will have some suggestions for me as I work through it.

Part of why my department offers multiple sections of “Logic and Critical Reasoning” is that it fulfills a lower-division general education (G.E.) requirement. In other words, there’s substantial student demand for courses that fulfill this requirement.

For this course to fulfill the G.E. requirement, of course, it has to meet certain pedagogical goals or “learning objectives”. So, where I need to start in planning this course is with the written-and-approved-by-committee learning objectives and content requirements:

Course Goals and Student Learning Objectives
“Logic and Critical Reasoning” is designed to meet the G.E. learning objectives for Area A3.

A.
Critical thinking courses help students learn to recognize, analyze, evaluate, and engage in effective reasoning.

B.
Students will demonstrate, orally and in writing, proficiency in the course goals. Development of the following competencies will result in dispositions or habits of intellectual autonomy, appreciation of different worldviews, courage and perseverance in inquiry, and commitment to employ analytical reasoning. Students should be able to:

  1. distinguish between reasoning (e.g., explanation, argument) and other types of discourse (e.g., description, assertion);
  2. identify, analyze, and evaluate different types of reasoning;
  3. find and state crucial unstated assumptions in reasoning;
  4. evaluate factual claims or statements used in reasoning, and evaluate the sources of evidence for such claims;
  5. demonstrate an understanding of what constitutes plagiarism;
  6. evaluate information and its sources critically and incorporate selected information into his or her knowledge base and value system;
  7. locate, retrieve, organize, analyze, synthesize, and communicate information of relevance to the subject matter of the course in an effective and efficient manner; and
  8. reflect on past successes, failures, and alternative strategies.

C.

  • Students will analyze, evaluate, and construct their own arguments or position papers about issues of diversity such as gender, class, ethnicity, and sexual orientation.
  • Reasoning about other issues appropriate to the subject matter of the course shall also be presented, analyzed, evaluated, and constructed.
  • All critical thinking classes should teach formal and informal methods for determining the validity of deductive reasoning and the strength of inductive reasoning, including a consideration of common fallacies in inductive and deductive reasoning. … “Formal methods for determining the validity of deductive arguments” refers to techniques that focus on patterns of reasoning rather than content. While all deductive arguments claim to be valid, not all of them are valid. Students should know what formal methods are available for determining which are which. Such methods include, but are not limited to, the use of Venn’s diagrams for determining validity of categorical reasoning, the methods of truth tables, truth trees, and formal deduction for reasoning which depends on truth functional structure, and analogous methods for evaluating reasoning which may be valid due to quantificational form. These methods are explained in standard logic texts. We would also like to make clear that the request for evidence that formal methods are being taught is not a request that any particular technique be taught, but that some method of assessing formal validity be included in course content.
  • Courses shall require the use of qualitative reasoning skills in oral and written assignments. Substantial writing assignments are to be integrated with critical thinking instruction. Writing will lead to the production of argumentative essays, with a minimum of 3000 words required. Students shall receive frequent evaluations from the instructor. Evaluative comments must be substantive, addressing the quality and form of writing.

This way of describing the course, I reckon, is not the best way to convince my students that it’s a course they’re going to want to be taking. My big task, therefore, is to plan course material and assignments that accomplish these goals while also striking the students as interesting, relevant, and plausibly do-able. In addition, I want to plan assignments that give the students enough practice and feedback but that don’t overwhelm me with grading. (The budget is still in very bad shape, so I have no expectation that there will be money to hire a grader.)

I have some ideas percolating here, which I will blog about soon. One of them is to use the blogosphere as a source of arguments (and things-that-look-like-arguments-but-aren’t) for analysis. I’m thinking, though, that I’ll need to set some good ground rules in advance.

Do these learning objectives and content requirements seem to you to call out for particular types of homework assignments or mini-lectures? If you had to skin this particular pedagogical cat, where would you start?
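
One aside, since the content requirements above single out truth tables as a formal method for determining validity: here is a minimal sketch, my own illustration and not anything from the course or its texts, of what that check amounts to mechanically. It brute-forces every assignment of truth values and hunts for a row where all the premises are true and the conclusion is false; the helper names and the two example argument forms are just choices I made for illustration.

```python
# A minimal sketch (my own illustration, not course material): test whether a
# propositional argument form is valid by brute-forcing its truth table and
# looking for a row where every premise is true and the conclusion is false.
from itertools import product


def valid(premises, conclusion, variables):
    """True if no assignment makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(premise(row) for premise in premises) and not conclusion(row):
            return False  # found a counterexample row, so the form is invalid
    return True


def implies(p, q):
    """The material conditional: false only when p is true and q is false."""
    return (not p) or q


# Modus ponens (P -> Q, P, therefore Q) is valid.
print(valid([lambda r: implies(r["P"], r["Q"]), lambda r: r["P"]],
            lambda r: r["Q"], ["P", "Q"]))   # True

# Affirming the consequent (P -> Q, Q, therefore P) is not.
print(valid([lambda r: implies(r["P"], r["Q"]), lambda r: r["Q"]],
            lambda r: r["P"], ["P", "Q"]))   # False
```

The paper-and-pencil version students would actually do is the same search laid out as a table; truth trees and formal deduction, which the requirement also mentions, deliver the same verdict without enumerating every row.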

Paid sick leave and ethics.

I saw a story in the San Jose Mercury News that I thought raised an interesting question about sick leave, one worth discussing here.
As it turns out, all the details of the specific case reported in the article sort of obscure the general question that it initially raised for me. But since I’m still interested in discussing the more general problem, here’s a poll to tweak your intuitions.

In a cash-strapped community college system, an administrator collecting paid sick leave is …

2010 blog-reader census.

DrugMonkey’s Google calendar must have told him that it’s time for the meme in which bloggers ask their readers what they’re doing here, a meme whose originator is the esteemed Ed Yong.
Having played along myself in 2008 and 2009, I’m on-board to mount the 2010 version of this blog-reader census. Please respond to at least some of these questions in the comments so we can avoid the expense of sending people with clipboards to your front door:

Final grades and missing student work: what to do?

Even though I got my grades filed last Friday (hours before the midnight deadline), this week I kept encountering colleagues for whom the grading drama Would. Not. End. As you might imagine, this led to some discussions about what one should do when the grade-filing deadline approaches and you are still waiting for students to cough up the work that needs grading.
I’d like to tell you that this is a rare occurrence. Sadly, it is not. Before we get into speculation about why students may be failing to deliver the deliverables, a quick poll on your preferred professorial response:

Final grades are nearly due when you discover that a student who’s done well on most of the assignments hasn’t handed in one of the major ones. What do you do?
