Ebola, abundant caution, and sharing a world.

Today a judge in Maine ruled that quarantining nurse Kaci Hickox is not necessary to protect the public from Ebola. Hickox, who had been in Sierra Leone for a month helping to treat people infected with Ebola, had earlier been subject to a mandatory quarantine in New Jersey upon her return to the U.S., despite being free of Ebola symptoms (and so, given what scientists know about Ebola, unable to transmit the virus). She was released from that quarantine after a CDC evaluation, though if she had stayed in New Jersey, the state health department promised to keep her in quarantine for a full 21 days. Maine state officials originally followed New Jersey’s lead in deciding that following CDC guidelines for medical workers who have been in contact with Ebola patients required a quarantine.

The order from Judge Charles C. LaVerdiere “requires Ms. Hickox to submit to daily monitoring for symptoms, to coordinate her travel with state health officials, and to notify them immediately if symptoms appear. Ms. Hickox has agreed to follow the requirements.”

It is perhaps understandable that state officials, among others, have been responding to the Ebola virus in the U.S. with policy recommendations, and actions, driven by “an abundance of caution,” but it’s worth asking whether this is actually an overabundance.

Indeed, the reaction to a handful of Ebola cases in the U.S. is so far shaping up to be an overreaction. As Maryn McKenna details in a staggering round-up, people have been asked or forced to stay home from their jobs for 21 days (the longest Ebola incubation period) for visiting countries in Africa with no Ebola cases. Someone was placed on leave by an employer for visiting Dallas (in whose city limits there were two Ebola cases). A Haitian woman who vomited on a Boston subway platform was presumed to be Liberian, and the station was shut down. Press coverage of Ebola in the U.S. has fed the public’s panic.

How we deal with risk is a pretty personal thing. It has a lot to do with what outcomes we feel it most important to avoid (even if the probability of those outcomes is very low) and which outcomes we think we could handle. This means our thinking about risk will be connected to our individual preferences, our experiences, and what we think we know.

Sharing a world with other people, though, requires finding some common ground on what level of risk is acceptable.

Our choices about how much risk we’re willing to take on frequently have an effect on the level of risk to which those around us are subject. This comes up in discussions of vaccination, of texting-while-driving, of policy making in response to climate change. Finding the common ground — even noticing that our risk-taking decisions impact anyone but us — can be really difficult.

However, it’s bound to be even more difficult if we’re guessing at risks without taking account of what we know. Without some agreement about the facts, we’re likely to get into irresolvable conflicts. (If you want to bone up on what scientists know about Ebola, by the way, you really ought to be reading what Tara C. Smith has been writing about it.)

Our scientific information is not perfect, and it is the case that very unlikely events sometimes happen. However, striving to reduce our risk to zero might not leave us as safe as we imagine it would. If we fear any contact with anyone who has come into contact with an Ebola patient, what would this require? Permanently barring their re-entry to the U.S. from areas of outbreak? Killing possibly-infected health care workers already in the U.S. and burning their remains?

Personally, I’d prefer less dystopia in my world, not more.

And even given the actual reactions to people like Kaci Hickox from states like New Jersey and Maine, the “abundance of caution” approach has foreseeable effects that will not help protect people in the U.S. from Ebola. Mandatory quarantines that take no account of symptoms of those quarantined (nor of the conditions under which someone is infectious) are a disincentive for people to be honest about their exposure, or to come forward when symptoms present. Moreover, they provide a disincentive for health care workers to help people in areas of Ebola outbreak — where helping patients and containing the spread of the virus is, arguably, a reasonable strategy to protect other countries (like the U.S.) that do not have Ebola epidemics.

Indeed, the “abundance of caution” approach might make us less safe by ramping up our stress beyond what is warranted or healthy.

If this were a spooky story, Ebola might be the virus that got in only to reveal to us, by the story’s conclusion, that it was really our own terrified reaction to the threat that would end up harming us the most. That’s not a story we need to play out in real life.

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own, ones they cannot detect without help from others who don’t share those particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts ignored a significant body of relevant scientific literature (helpfully cited by commenters on that very post) in the service of defending Larry Summers and his remarks on possible innate biological factors that make men better at math and science than women. The irony lies in the fact that Larry Summers displayed an apparently ironclad commitment to ignoring any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.

In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.

Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible for taking account of the data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).

When news of Summers’ speech, and the reactions to it, was fresh, one detail that stuck with me was that a conference organizer pointed out to Summers, after his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis. Summers’ reply amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But he had a responsibility to explain why he rejected them. As part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than its authors (so you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.

By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.

The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the knowledge produced by scientists far more intellectually honest than Summers.

Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

Do permanent records of scientific misconduct findings interfere with rehabilitation?

We’ve been discussing how the scientific community deals with cheaters in its midst and the question of whether scientists view rehabilitation as a live option. Connected to the question of rehabilitation is the question of whether an official finding of scientific misconduct leaves a permanent mark that makes it practically impossible for someone to function within the scientific community — not because the person who committed the misconduct is unable to straighten up and fly right, but because others in the scientific community will no longer accept that person in the scientific knowledge-building endeavor, no matter what their behavior.

A version of this worry is at the center of an editorial by Richard Gallagher that appeared in The Scientist five years ago. In it, Gallagher argued that the Office of Research Integrity should not include findings of scientific misconduct in publications that are archived online, and that traces of such findings that persist after the period of debarment from federal funding has ended are unjust. Gallagher wrote:

For the sake of fairness, these sentences must be implemented precisely as intended. This means that at the end of the exclusion period, researchers should be able to participate again as full members of the scientific community. But they can’t.

Misconduct findings against a researcher appear on the Web–indeed, in multiple places on the Web. And the omnipresence of the Web search means that reprimands are being dragged up again and again and again. However minor the misdemeanor, the researcher’s reputation is permanently tarnished, and his or her career is invariably ruined, just as surely as if the punishment were a lifetime ban.

Both the NIH Guide and The Federal Register publish findings of scientific misconduct, and are archived online. As long as this continues, the problem will persist. The director of the division of investigative oversight at ORI has stated his regret at the “collateral damage” caused by the policy (see page 32). But this is not collateral damage; it is a serious miscarriage of justice against researchers and a stain on the integrity of the system, and therefore of science.

It reminds me of the system present in US prisons, in which even after “serving their time,” prisoners will still have trouble finding work because of their criminal records. But is it fair to compare felons to scientists who have, for instance, fudged their affiliations on a grant application when they were young and naïve?

It’s worth noting that the ORI website seems currently to present information only for misconduct cases in which scientists haven’t yet “served out their sentences”, featuring the statement:

This page contains cases in which administrative actions were imposed due to findings of research misconduct. The list only includes those who CURRENTLY have an imposed administrative actions against them. It does NOT include the names of individuals whose administrative actions periods have expired.

In the interaction between scientists who have been found to have committed scientific misconduct and the larger scientific community, we encounter the tension between the rights of the individual scientist and the rights of the scientific community. This extends to the question of the magnitude of a particular instance of misconduct, or of whether it was premeditated or merely sloppy, or of whether the offender was young and naïve or old enough to know better. An oversight or mistake in judgment that may strike the individual scientist making it as no big deal (at least at the time) can have significant consequences for the scientific community in terms of time wasted (e.g., trying to reproduce reported results) and damaged trust.

The damaged trust is not a minor thing. Given that the scientific knowledge-building enterprise relies on conditions where scientists can trust their fellow scientists to make honest reports (whether in the literature, in grant proposals, or in less formal scientific communications), discovering a fellow scientist whose relationship with the truth is more casual is a very big deal. Flagging liars is like tagging a faulty measuring device. It doesn’t mean you throw them out, but you do need to go to some lengths to reestablish their reliability.

To the extent that an individual scientist is committed to the shared project of building a reliable body of scientific knowledge, he or she ought to understand that after a breach, one is not entitled to a full restoration of the community’s trust. Rather, that trust must be earned back. One step in earning back trust is to acknowledge the harm the community suffered (or at least risked) from the dishonesty. Admitting that you blew it, that you are sorry, and that others have a right to be upset about it, are all necessary preliminaries to making a credible claim that you won’t make the same mistake again.

On the other hand, protesting that your screw-ups really weren’t important, or that your enemies have blown them out of proportion, might be an indication that you still don’t really get why your scientific colleagues are unhappy about your behavior. In such a circumstance, although you may have regained your eligibility to receive federal grant money, you may still have some work left to do to demonstrate that you are a trustworthy member of the scientific community.

It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.

Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)

Besides, there’s a practical difficulty in burying the pixel trail of a scientist’s misconduct. It’s almost always the case that other members of the scientific community are involved in alleging, detecting, investigating, or adjudicating the misconduct. They know something is up. Keeping the official findings secret leaves these concerned members of the scientific community hanging, unsure whether the ORI has done anything about the allegations (which can breed suspicion that scientists are getting away with misconduct left and right). It can also make the rumor mill seem preferable to a total lack of information about which scientific colleagues are prone to dishonesty toward other scientists.

Given the amount of information available online, it’s unlikely that scientists who have been caught in misconduct can fly completely under the radar. Even before the internet, there was no guarantee such a secret would stay secret. Searchable online information imposes a certain level of transparency. But if this transparency follows upon actions that deceived one’s scientific community, it might be the start of effective remediation. Admitting that you have broken trust may be the first real step in earning that trust back.

_____________
This post is an updated version of an ancestor post on my other blog.

Faith in rehabilitation (but not in official channels): how unethical behavior in science goes unreported.

Can a scientist who has behaved unethically be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?

In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with the offenders — especially when the offender turns out to be a close colleague or a trainee.

In the case of a hard line — one ethical strike and you’re out — what kind of decision does this force on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose behavior you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter on to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or, do you try to address the transgression yourself, as a mentor, addressing the actual circumstances of the ethical blunder, the other options your trainee should have recognized as better ones to pursue, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of escalating misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware, and perhaps even intervening in ways they hoped would be effective.

It’s the building over time of ethical violations that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest patterns of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper-trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

As it happens, the administrative sanctions imposed by the Office of Research Integrity hardly ever amount to permanent removal. Findings of scientific misconduct are much more likely to be punished with exclusion from federal funding for three years, or five years, or ten years. Still, in an extremely competitive environment, with multitudes of scientists competing for scarce grant dollars and permanent jobs, even a three-year debarment may be enough to seriously derail a scientific career. The mentor making the call about whether to report a trainee’s unethical behavior may judge the likely fallout as enough to end the trainee’s career.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may go forward reading their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this may exacerbate the problem, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

_____________
This post is an updated version of an ancestor post on my other blog.

Resistance to ethics instruction: considering the hypothesis that moral character is fixed.

This week I’ve been blogging about the resistance to required ethics coursework one sometimes sees in STEM* disciplines. Because one reason for this resistance is the hunch that you can’t teach a person to be ethical once they’re past a certain (pre-college) age, my previous post noted that there’s a sizable body of research that supports ethics instruction as an intervention to help people behave more ethically.

But, as I mentioned in that post, the intuition that one’s moral character is fixed by one’s twenties can be so strong that folks don’t always believe what the empirical research says about the question.

So, as a thought experiment, let’s entertain the hypothesis that, by your twenties, your moral character is fixed — that you’re either ethical or evil by then and there’s nothing further ethics instruction can do about it. If this were the case, how would we expect scientists to respond to other scientists or scientific trainees who behave unethically?

Presumably, scientists would want the unethical members of the tribe of science identified and removed, permanently. Under the fixed-character hypothesis, the removal would have to be permanent, because there would be every reason to expect the person who behaved unethically to behave unethically again.

If we took this seriously, that would mean every college student who ever cheated on a quiz or made up data for a lab report should be barred from entry to the scientific community, and that every grown-up scientist caught committing scientific misconduct — or any ethical lapse, even those falling well short of fabrication, falsification, or plagiarism — would be excommunicated from the tribe of science forever.

That just doesn’t happen. Even Office of Research Integrity findings of scientific misconduct don’t typically lead to lifetime debarment from federal research funding. Instead, they usually lead to administrative actions imposed for a finite duration, on the order of years, not decades.

And, I don’t think the failure to impose a policy of “one strike, you’re out” for those who behave unethically is because members of the tribe of science are being held back by some naïvely optimistic outside force (like the government, or the taxpaying public, or ethics professors). Nor is it because scientists believe it’s OK to lie, cheat, and steal in one’s scientific practice; there is general agreement that scientific misconduct damages the shared body of knowledge scientists are working to build.

When dealing with members of their community who have behaved unethically, scientists usually behave as if there is a meaningful difference between a first offense and a pattern of repeated offenses. This wouldn’t make sense if scientists were truly committed to the fixed-character hypothesis.

On the other hand, it fits pretty well with the hypothesis that people may be able to learn from their mistakes — to be rehabilitated rather than simply removed from the community.

There are surely some hard cases that the tribe of science views as utterly irredeemable, but graduate students or early career scientists whose unethical behavior is caught early are treated by many as probably redeemable.

How to successfully rehabilitate a scientist who has behaved unethically is a tricky question, and not one scientists seem inclined to speak about much. Actions by universities, funding agencies, or governmental entities like the Office of Research Integrity are part of the punishment landscape, but punishment is not the same thing as rehabilitation. Meanwhile, it’s unclear whether individual actions to address wrongdoing are effective at heading off future unethical behavior.

If it takes a village to raise a scientist, it may take concerted efforts at the level of scientific communities to rehabilitate scientists who have strayed from the path of ethical practice. We’ll discuss some of the challenges with that in the next post.

______
*STEM stands for science, technology, engineering, and mathematics.

Resistance to ethics instruction: the intuition that ethics cannot be taught.

In my last post, I suggested that required ethics coursework (especially for students in STEM* disciplines) is met with a specific sort of resistance. I also surmised that part of this resistance is the idea that ethics can’t be taught in any useful way, “the idea that being ethical is somehow innate, a mere matter of not being evil.”

In a comment on that post, ThomasB nicely illustrates that particular strain of resistance:

Certainly scientists, like everyone else in our society, must behave ethically. But what makes this a college-level class? From the description, it covers the basic do not lie-cheat-steal along with some anti-bullying and possibly a reminder to cite one’s references. All of which should have been instilled long before college.

So what is there to teach at this point? The only thing I can think of specific to science is the “publish or perish” pressure to keep the research dollars flowing in. Or possibly the psychological studies showing that highly intelligent and creative people are more inclined to be dishonest than ordinary people. Possibly because they are better at rationalizing doing what they want to do. Which is why I used the word “instilled” earlier: it seems to me that ethics comes more from the emotional centers of the brain than the conscious analytical part. As soon as we start consciously thinking about ethics, they seem to go out the window. Such as the study from one of the Ivy League schools where the students did worse at the ethics test at the end of the class than at the beginning.

So I guess the bottom line is whether the science shows that ethics classes at this point in a person’s life actually show an improvement in the person’s behavior. As Far as I know, there has been no such study done.

(Bold emphasis added.)

I think it’s reasonable to ask, before requiring an intervention (like ethics coursework), what we know about whether this sort of intervention is likely to work. I think it’s less reasonable to assume it won’t work without consulting the research on the matter.

As it happens, there has been a great deal of research on whether ethics instruction is an intervention that helps people behave more ethically — and the bulk of it shows that well-designed ethics instruction is an effective intervention.

Here’s what Bebeau et al. (1995) have to say about the question:

When people are given an opportunity to reflect on decisions and choices, they can and do change their minds about what they ought to do and how they wish to conduct their personal and professional lives. This is not to say that any instruction will be effective, or that all manner of ethical behavior can be developed with well-developed ethics instruction. But it is to say — and there is considerable evidence to show it — that ethics instruction can influence the thinking processes that relate to behavior. …

We do not claim that radical changes are likely to take place in the classroom or that sociopaths can be transformed into saints via case discussion. But we do claim that significant improvements can be made in reasoning about complex problems and that the effort is worthwhile. We are not alone in this belief: the National Institutes of Health, the National Science Foundation, the American Association for the Advancement of Science, and the Council of Biology Editors, among others, have called for increased attention to training in the responsible conduct of scientific research. Further, our belief is buttressed by empirical evidence from moral psychology. In Garrod (1993), James R. Rest summarizes the “several thousand” published studies on moral judgment and draws the following conclusions:

  • development of competence in ethical problem-solving continues well into adulthood (people show dramatic changes in their twenties, as in earlier years);
  • such changes reflect profound reconceptualization of moral issues;
  • formal education promotes ethical reasoning;
  • deliberate attempts to develop moral reasoning … can be demonstrated to be effective; and
  • studies link moral reasoning to moral behavior

So, there’s a body of research that supports ethics instruction as an intervention to help people behave more ethically.

Indeed, part of how ethics instruction helps is by getting students to engage analytically, not just emotionally. I would argue that making ethical decisions involves moving beyond gut feelings and instincts. It means understanding how your decisions impact others, and considering the ways your interests and theirs intersect. It means thinking through possible impacts of the various choices available to you. It means understanding the obligations set up by our relations to others in personal and professional contexts.

And a methodology for approaching ethical decision-making can be taught. Practice in making ethical decisions makes it easier to make better decisions. And making these decisions in conversation with other people who may have different perspectives (rather than just following a gut feeling) forces us to work out our reasons for preferring one course of action to the alternatives. These reasons are not just something we can offer to others to defend what we did, but also things we can consider when deciding what to do in the first place.

As always, I reckon that there are some people who will remain unmoved by the research that shows the efficacy of ethics instruction, preferring to cling to their strong intuition that college-aged humans are past the point where an intervention like an ethics class could make any impact on their ethical behavior. But if that’s an intuition that ought to guide us — if, by your twenties, you’re either a good egg or irredeemably corrupt — it’s not clear that our individual or institutional responses to unethical behavior by scientists make any sense.

That’s the subject I’ll take up in my next post.

______
*STEM stands for science, technology, engineering, and mathematics.

______
Bebeau, M. J., Pimple, K. D., Muskavitch, K. M., Borden, S. L., & Smith, D. H. (1995). Moral reasoning in scientific research. Cases for teaching and assessment. Bloomington, IN: Poynter Center for the Study of Ethics and Assessment.

Garrod, A. (Ed.). (1993). Approaches to moral development: New research and emerging themes. Teachers College Press.

Resistance to ethics is different from resistance to other required courses.

For academic types like myself, the end of the semester can be a weird juxtaposition of projects that are ending and new projects that are on the horizon, a juxtaposition that can be an opportunity for reflection.

I’ve just seen another offering of my “Ethics in Science” course to a (mostly successful) conclusion. Despite the fact that the class was huge (more than 100 students) for a course that is heavy on discussion, its students were significantly more active and engaged than those in the much smaller class I taught right after it. The students thought hard and well, and regularly floored me with their razor-sharp insights. All the evidence suggests that these students were pretty into it.

Meanwhile, I’m getting set for a new project that will involve developing ethics units for required courses offered in another college at my university — and one of the things I’ve been told is that the students required to take these courses (as well as some non-zero number of the professors in their disciplines) are very resistant to the inclusion of ethics coursework in courses otherwise focused on their major subjects.

I find this resistance interesting, especially given that the majority of the students in my “Ethics in Science” class were taking it because it was required for their majors.

I recognize that part of what’s going on may be a blanket resistance to required courses. Requirements can feel like an attack on one’s autonomy and individuality — rather than being able to choose what you will study, you’re told what you must study to major in a particular subject or to earn a degree from a particular university. A course that a student might have been open to enjoying were it freely chosen can become a loathed burden merely by virtue of being required. I’ve seen the effect often enough that it no longer surprises me.

However, requirements aren’t usually imposed solely to constrain students’ autonomy. There’s almost always a reason that the course, or subject matter, or problem-solving area that’s required is being required. The students may not know that reason (or judge it to be a compelling reason if they do know it), but that doesn’t mean that there’s not a reason.

In some ways, ethics is really not much different here from other major requirements or subject matter that students bemoan, including calculus, thermodynamics, writing in the major, and significant figures. On the other hand, the moaning about some of those other requirements tends to take the form of “When am I ever going to use that?”

I don’t believe I’ve ever heard a science or engineering student say, “When am I ever going to use ethics?”

In other words, they generally accept that they should be ethical, but they also sometimes voice resistance to the idea that a course (or workshop, or online training module) about how to be ethical will be anything but a massive waste of their time.

My sense is that at least part of what’s going on here is that scientists and engineers and their ilk feel like ethics are being imposed on them from without, by university administrators or funding agencies or accrediting organizations. Worse, the people exhorting scientists, engineers, et alia to take ethics seriously often seem to take a finger-wagging approach. And this, I suspect, makes it harder to get what those business types call “buy-in” from the scientists.

The typical story I’ve heard about ethics sessions in industry (and some university settings) goes something like this:

You get a big packet with the regulations you have to follow — to get your protocols approved by the IRB and/or the IACUC, to disclose potential conflicts of interest, to protect the company’s or university’s patent rights, to fill out the appropriate paperwork for hazardous waste disposal, etc., etc. You are admonished against committing the “big three” of falsification, fabrication, and plagiarism. Sometimes, you are also admonished against sexually harassing those with whom you are working. The whole thing has the feel of being driven by the legal department’s concerns: for goodness sake, don’t do anything that will embarrass the organization or get us into hot water with regulators or funders!

Listening to the litany of things you ought not to do, it’s really easy to think: Very bad people do things like this. But I’m not a very bad person. So I can tune this out, and I can kind of ignore ethics.

The decision to tune out ethics is enabled by the fact that the people wagging the fingers at the scientists are generally outsiders (from the legal department, or the philosophy department, or wherever). These outsiders are coming in telling us how to do our jobs! And, the upshot of what they’re telling us seems to be “Don’t be evil,” and we’re not evil! Besides, these outsiders clearly don’t care about (let alone understand) the science so much as avoiding scandals or legal problems. And they don’t really trust us not to be evil.

So just nod earnestly and let’s get this over with.

One hurdle here is the need to get past the idea that being ethical is somehow innate, a mere matter of not being evil, rather than a problem-solving practice that gets better with concrete strategies and repeated use. Another hurdle is the feeling that ethics instruction is the result of meddling by outsiders.

If ethics is seen as something imposed upon scientists by a group from the outside — one that neither understands science, nor values it, nor trusts that scientists are generally not evil — then scientists will resist ethics. To get “buy-in” from the scientists, they need to see how ethics are intimately connected to the job they’re trying to get done. In other words, scientists need to understand how ethical conduct is essential to the project of doing science. Once scientists make that connection, they will be ethical — not because someone else is telling them to be ethical, but because being ethical is required to make progress on the job of building scientific knowledge.
_____________
This post is an updated version of an ancestor post on my other blog, and was prompted by the Virtually Speaking Science discussion of philosophy in and of science scheduled for Wednesday, May 28, 2014 (starting 8 PM EDT/8 PM PDT). Watch the hashtags #VSpeak and #AskVS for more details.

Incoherent ethical claims that give philosophers a bad rap

Every now and then, in the course of a broader discussion, some philosopher will make a claim that is rightly disputed by non-philosophers. Generally, this is no big deal — philosophers have just as much capacity to be wrong as other humans. But sometimes, the philosopher’s claim, delivered with an air of authority, is not only a problem in itself but also manages to convey a wrong impression about the relation between the philosophers and non-philosophers sharing a world.

I’m going to examine the general form of one such ethical claim. If you’re interested in the specific claim, you’re invited to follow the links above. We will not be discussing the specific claim here, nor the larger debate of which it is a part.

Claim: To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences.

Let’s break that down.

“Doing X has significant ethical consequences” suggests a consequentialist view of ethics, in which doing the right thing is a matter of making sure the net good consequences (for everyone affected, whether you describe them in terms of “happiness” or something else) outweigh the net bad consequences.

To say that doing X has significant ethical consequences is then to assert that (at least in the circumstances) doing X will make a significant contribution to the happiness or unhappiness being weighed.

In the original claim, the suggestion is that the contribution of doing X to the balance of good and bad consequences is negative (or perhaps that it is negative in many circumstances), and that on this account it ought to be a “difficult and emotional step”. But does this requirement make sense?

In the circumstances in which doing X shifts the balance of good and bad consequences to a net negative, the consequentialist will say you shouldn’t do X — and this will be true regardless of your emotions. Feeling negative emotions as you are deciding to do X will add more negative consequences, but they are not necessary: a calculation of the consequences of doing X versus not doing X will still rule out doing X as an ethical option even if you have no emotions associated with it at all.

On the other hand, in the circumstances in which doing X shifts the balance of good and bad consequences to a net positive, the consequentialist will say you should do X — again, regardless of your emotions. Here, feeling negative emotions as you are deciding to do X will add more negative consequences. If these negative emotions are strong enough, they run the risk of reducing the net positive consequences — which makes the claim that one should feel negative emotions (pretty clearly implied in the assertion that the decision to do X should be difficult) a weird claim, since these negative emotions would serve only to reduce the net good consequences of doing something that produces net good consequences in the circumstances.

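To make the bookkeeping concrete, here is a toy sketch in Python of the kind of tally described above. The function name and the utility numbers are invented purely for illustration (they are not part of the original argument); the only point is that the agent’s feelings get counted as consequences alongside everything else.

```python
# A toy consequentialist tally. All numbers are invented for illustration;
# "utility" is just a net goodness score, and harms count as negatives.

def net_consequences(consequences_of_x, emotional_cost=0):
    """Sum the consequences of doing X, counting the agent's feelings
    as consequences like any other."""
    return consequences_of_x + emotional_cost

# Case 1: doing X is net negative before emotions are even counted.
print(net_consequences(-5))                     # -5: don't do X
print(net_consequences(-5, emotional_cost=-2))  # -7: still don't do X; distress changes nothing
# Case 2: doing X is net positive; mandated distress only erodes the surplus.
print(net_consequences(+5))                     # +5: do X
print(net_consequences(+5, emotional_cost=-2))  # +3: still do X, but with less net good
```
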
By the way, this also suggests, perhaps perversely, a way that strong emotions could become a problem in circumstances in which doing X would otherwise clearly bring more negative consequences than positive ones: if the person contemplating doing X were to get a lot of happiness from doing X.

Now, maybe the idea is supposed to be that negative feelings associated with the prospect of doing X are supposed to be a brake if doing X frequently leads to more bad consequences than good ones. But I think we have to recognize feelings as consequences — as something that we need to take into account in the consequentialist calculus with which we evaluate whether doing X here is ethical or not. And that makes the claim that the feelings ought always to be negative, regardless of other features of the situation that make doing X the right thing, puzzling.

You could avoid worries about weighing feelings as consequences by shifting from a consequentialist ethical framework to something else, but I don’t think that’s going to be much help here.

Kantian ethics, for example, won’t pin the ethics of doing X to the net consequences, but instead it will come down to something like whether it is your duty to do X (where your duty is to respect the rational capacity in yourself and in others, to treat people as ends in themselves rather than as mere means). Your feelings are no part of what a Kantian would consider in judging whether your action is ethical or not. Indeed, Kantians stress that ethical acts are motivated by recognizing your duty precisely because feelings can be a distraction from behaving as we should.

Virtue ethicists, on the other hand, do talk about the agent’s feelings as ethically relevant. Virtuous people take pleasure in doing the right things and feel pain at the prospect of doing the wrong thing. However, if doing X is right under the circumstances, the virtuous people will feel good about doing X, not conflicted about it — so the claim that doing X should always be difficult and emotional doesn’t make much sense here. Moreover, virtue ethicists describe the process of becoming virtuous as one where behaving in virtuous ways usually precedes developing emotional dispositions to feel pleasure from acting virtuously.

Long story short, it’s hard to make sense of the claim “To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences” — unless really what is being claimed is that doing X is always unethical and you should always feel bad for doing X. If that’s the claim, though, emotions are pretty secondary.

But beyond the incoherence of the claim, here’s what really bugs me about it: It seems to assert that ethicists (and philosophers more generally) are in the business of telling people how to feel. That, my friends, is nonsense. Indeed, I’m on record prioritizing changes in unethical behavior over any interference with what’s in people’s hearts. How we behave, after all, has much more impact on our success in sharing a world with each other than how we feel.

This is not to say that I don’t recognize a likely connection between what’s in people’s hearts and how they behave. For example, I’m willing to bet that improvements in our capacity for empathy would likely lead to more ethical behavior.

But it’s hard to see it as empathetic to tell people that they should generally feel bad for making a choice which, under the circumstances, is an ethical one. If anything, requiring such negative emotions is a failure of empathy, and punitive to boot.

Clearly, there exist ethicists and philosophers who operate this way, but many of us try to do better. Indeed, it’s reasonable for you all to expect and demand that we do better.

How to be ethical while getting the public involved in your science

At ScienceOnline Together later this week, Holly Menninger will be moderating a session on “Ethics, Genomics, and Public Involvement in Science”.

Because the ethical (and epistemic) dimensions of “citizen science” have been on my mind for a while now, in this post I share some very broad, pre-conference thoughts on the subject.

Ethics is a question of how we share a world with each other. Some of this is straightforward and short-term, but sometimes engaging each other ethically means taking account of long-range consequences, including possible consequences that may be difficult to foresee unless we really work to think through the possibilities ahead of time — and unless this thinking through of possibilities is informed by knowledge of some of the technologies involved and of history of what kinds of unforeseen outcomes have led to ethical problems before.

Ethics is more than merely meeting your current legal and regulatory requirements. Anyone taking that kind of minimalist approach to ethics is gunning to be a case study in an applied ethics class (probably within mere weeks of becoming a headline in a major news outlet).

With that said, if you’re running a project you’d describe as “citizen science” or as cultivating public involvement in science, here are some big questions I think you should be asking from the start:

1. What’s in it for the scientists?

Why are you involving members of the public in your project?

Are they in the field collecting observations that you wouldn’t have otherwise, or on their smart phones categorizing the mountains of data you’ve already collected? In these cases, the non-experts are providing labor you need for vital non-automatable tasks.

Are they sending in their biological samples (saliva, cheek swab, belly button swab, etc.)? In these cases, the non-experts are serving as human subjects, expanding the pool of samples in your study.

In both of these cases, scientists have ethical obligations to the non-scientists they are involving in their projects, although the ethical obligations are likely to be importantly different. In any case where a project involves humans as sources of biological samples, researchers ought to be consulting an Institutional Review Board, at least informally, before the project is initiated (which includes the start of anything that looks like advertising for volunteers who will provide their samples).

If volunteers are providing survey responses or interviews instead of vials of spit, there’s a chance they’re still acting as human subjects. Consult an IRB in the planning stages to be sure. (If your project is properly exempt from IRB oversight, there’s no better way to show it than an exemption letter from an IRB.)

If volunteers are providing biological samples from their pets or reports of observations of animals in the field (especially in fragile habitats), researchers ought to be consulting an Institutional Animal Care and Use Committee, at least informally, before the project is initiated. Again, it’s possible that what you’ll discover in this consultation is that the proposed research is exempt from IACUC oversight, but you want a letter from an IACUC to that effect.

Note that IRBs and IACUCs don’t exist primarily to make researchers’ lives hard! Rather, they exist to help researchers identify their ethical obligations to the humans and animals who serve as subjects of their studies, and to find ways of conducting that research that honor those obligations. A big reason to involve committees in thinking through the ethical dimensions of the research is that it’s hard for researchers to be objective in thinking through these questions about their own projects.

If you’re involving non-experts in your project in some other way, what are they contributing to the project? Are you involving them so you can check off the “broader impacts” box on your grant application, or is there some concrete way that involving members of the public is contributing to your knowledge-building? If the latter, think hard about what kinds of obligations might flow from that contribution.

2. What’s in it for the non-scientists/non-experts/members of the public involved in the project?

Why would members of the public want to participate in your project? What could they expect to get from such participation?

Maybe they enjoy being outdoors counting birds (and would be doing so even if they weren’t participating in the project), or looking at pictures of galaxies from space telescopes. Maybe they are curious about what’s in their genome or what’s in their belly-button. Maybe they want to help scientists build new knowledge badly enough to pitch in on some of the grunt-work required for that knowledge-building. Maybe they want to understand how that grunt-work fits into the knowledge-building scientists do.

It’s important to understand what the folks whose help you’re enlisting think they’re signing on for. Otherwise, they may be expecting something from the experience that you can’t give them. The best way to find out what potential participants are looking for from the experience is to ask them.

Don’t offer potential diagnostic benefits from participation in a project when that kind of information is a long, long way off. Don’t promise that tracking the health of streams by screening for the presence of different kinds of bugs will be tons of fun without being clear about the conditions your volunteers will be working in to perform those screenings.

Don’t promise participants that they will be getting a feel for what it’s like to “do science” if, in fact, they are really just providing a sample rather than being part of the analysis or interpretation of that sample.

Don’t promise them that they will be involved in hypothesis-formation or conclusion-drawing if really you are treating them as fancy measuring devices.

3. What’s the relationship between the scientists and the non-scientists in this project? What consequences will this have for relationships between scientists and the public more generally?

There’s a big difference between involving members of the public in your project because it will be enriching for them personally and involving them because it’s the only conceivable way to build a particular piece of knowledge you’re trying to build.

Being clear about the relationship upfront — here’s why we need you, here’s what you can expect in return (both the potential benefits of participation and the potential risks) — is the best way to make sure everyone’s interests are well-served by the partnership and that no one is being deceived.

Things can get complicated, though, when you pull the focus back from how participants are involved in building the knowledge and consider how that knowledge might be used.

Will the new knowledge primarily benefit the scientists leading the project, adding publications to their CVs and helping them make the case for funding for further projects? Could the new knowledge contribute to our understanding (of ecosystems, or human health, for example) in ways that will drive useful interventions? Will those interventions be driven by policy-makers or commercial interests? Will the scientists be a part of this discussion of how the knowledge gets used? Will the members of the public (either those who participated in the project or members of the public more generally) be a part of this discussion — and will their views be taken seriously?

To the extent that participating in a citizen science project, whatever shape that participation may take, can influence non-scientists’ views on science and the scientific community as a whole, the interactions between scientists and volunteers in and around these projects are hugely important. They are an opportunity for people with different interests, different levels of expertise, and different values to find common ground while working together to achieve a shared goal — to communicate honestly, deal with each other fairly, and take each other seriously.

More such ethical engagement between scientists and publics would be a good thing.

But the flip-side is that engagements between scientists and publics that aren’t as honest or respectful as they should be may have serious negative impacts beyond the particular participants in a given citizen science project. They may make healthy engagement, trust, and accountability harder for scientists and publics across the board.

In other words, working hard to do it right is pretty important.

I may have more to say about this after the conference. In the meantime, you can add your questions or comments to the session discussion forum.

The line between persuasion and manipulation.

As this year’s ScienceOnline Together conference approaches, I’ve been thinking about the ethical dimensions of using empirical findings from psychological research to inform effective science communication (or really any communication). Melanie Tannenbaum will be co-facilitating a session about using such research findings to guide communication strategies, and this year’s session is nicely connected to a session Melanie led with Cara Santa Maria at last year’s conference called “Persuading the Unpersuadable: Communicating Science to Deniers, Cynics, and Trolls.”

In that session last year, the strategy of using empirical results from psychology to help achieve success in a communicative goal was fancifully described as deploying “Jedi mind tricks”. Achieving success in communication was cast in terms of getting your audience to accept your claims (or at least getting them not to reject your claims out of hand because they don’t trust you, or don’t trust the way you’re engaging with them, or whatever). But if you have the cognitive launch codes, as it were, you can short-circuit distrust, cultivate trust, and help your audience end up where you want them to end up when you’re done communicating what you’re trying to communicate.

Jason Goldman pointed out to me that these “tricks” aren’t really that tricky — it’s not like you flash the Queen of Diamonds and suddenly the person you’re talking to votes for your ballot initiative or buys your product. As Jason put it to me via email, “From a practical perspective, we know that presenting reasons is usually ineffective, and so we wrap our reasons in narrative – because we know, from psychology research, that storytelling is an effective device for communication and behavior change.”

Still, using a “trick” to get your audience to end up where you want them to end up — even if that “trick” is simply empirical knowledge that you have and your audience doesn’t — sounds less like persuasion than manipulation. People aren’t generally happy about the prospect of being manipulated. Intuitively, manipulating someone else gets us into ethically dicey territory.

As a philosopher, I’m in a discipline whose ideal is that you persuade by presenting reasons for your interlocutor to examine, arguments whose logical structure can be assessed, premises whose truth (or at least likelihood) can be evaluated. I daresay scientists have something like the same ideal in mind when they present their findings or evaluate the scientific claims of others. In both cases, there’s the idea that we should be making a concerted effort not to let tempting cognitive shortcuts get in the way of reasoning well. We want to know about the tempting shortcuts (some of which are often catalogued as “informal fallacies”) so we can avoid falling into them. Generally, it’s considered sloppy argumentation (or worse) to try to tempt our audience with those shortcuts.

How much space is there between the tempting cognitive shortcuts we try to avoid in our own reasoning and the “Jedi mind tricks” offered to us to help us communicate, or persuade, or manipulate more effectively? If we’re taking advantage of cognitive shortcuts (or switches, or whatever the more accurate metaphor would be) to increase the chances that people will accept our factual claims, our recommendations, our credibility, etc., can we tell when we’ve crossed the line between persuasion and manipulation? Can we tell when it’s the cognitive switch that’s doing the work rather than the sharing of reasons?

It strikes me as even more ethically problematic if we’re using these Jedi mind tricks while concealing the fact that we’re using them from the audience we’re using them on. There’s a clear element of deception in doing that.

Now, possibly the Jedi mind tricks work equally well if we disclose to our audience that we’re using them and how they work. In that case, we might be able to use them to persuade without being deceptive — and it would be clear to our audience that we were availing ourselves of these tricks, and that our goal was to get them to end up in a particular place. It would be kind of weird, though, perhaps akin to going to see a magician knowing full well that she would be performing illusions and that your being fooled by those illusions is a likely outcome. (Wouldn’t this make us more distrustful in our communicative interactions, though? If you know about the switches and it’s still the case that they can be used against you, isn’t that the kind of thing that might make you want to block lots of communication before it can even happen?)

As a side note, I acknowledge that there might be some compelling extreme cases in which the goal of getting the audience to end up in a particular place — e.g., revealing to you the location of the ticking bomb — is so urgent that we’re prepared to swallow our qualms about manipulating the audience to get the job done. I don’t think that the normal stakes of our communications are like this, though. But there may be some cases where how high the stakes really are is one of the places we disagree. Jason suggests vaccine acceptance or refusal might be important enough that the Jedi mind tricks shouldn’t set off any ethical alarms. I’ll note that vaccine advocates using a just-the-empirical-facts approach to communication are often accused or suspected of having some undisclosed financial conflict of interest that is motivating them to try to get everyone vaccinated — that is, they’re not using the Jedi mind tricks social psychologists think could help them persuade their target audience, and yet that audience thinks they’re up to something sneaky. That’s a pretty weird situation.

Does our cognitive make-up as humans make it possible to get closer to exchanging and evaluating reasons rather than just pushing each other’s cognitive buttons? If so, can we achieve better communication without the Jedi mind tricks?

Maybe it would require some work to change the features of our communicative environment (or of the environment in which we learn how to reason about the world and how to communicate and otherwise interact with others) to help our minds more reliably work this way. Is there any empirical data on that? (If not, is this a research question psychologists are asking?)

Some of these questions tread dangerously close to the question of whether we humans can actually have free will — and that’s a big bucket of metaphysical worms that I’m not sure I want to dig into right now. I just want to know how to engage my fellow human beings as ethically as possible when we communicate.

These are some of the questions swirling around my head. Maybe next week at ScienceOnline some of them will be answered — although there’s a good chance some more questions will be added to the pile!