Giving thanks.

This being the season, I’d like to take the opportunity to pause and give thanks.

I’m thankful for parents who encouraged my curiosity and never labeled science as something it was inappropriate for me to explore or pursue.

I’m thankful for teachers who didn’t present science as if it were confined within the box of textbooks and homework assignments and tests, but instead offered it as a window through which I could understand ordinary features of my world in a whole new way. A particular teacher who did this was my high school chemistry teacher, Mel Thompson, who bore a striking resemblance to Dr. Bunsen Honeydew and would, on occasion, blow soap bubbles with a gas jet as we took quizzes, setting them alight with a Bunsen burner before they reached the ceiling. Mr. Thompson always conveyed his strong conviction that I could learn anything, and on that basis he was prepared to teach me anything about chemistry that I wanted to learn.

I’m thankful for the awesome array of women who taught me science as an undergraduate and a graduate student, both for their pedagogy and for the examples they provided of different ways to be a woman in science.

I’m especially thankful for my mother, who was my first and best role model with respect to the challenges of graduate school and becoming a scientist.

I’m thankful for the mentors who have found me and believed in me when I needed help believing in myself.

I’m thankful for the opportunity graduate school gave me to make the transition from learning knowledge other people had built to learning how to build brand new scientific knowledge myself.

I’m thankful that the people who trained me to become a scientist didn’t treat it as a betrayal when I realized that what I really wanted to do was become a philosopher. I’m also thankful for the many, many scientists who have welcomed my philosophical engagement with their scientific work, and who have valued my contributions to the training of their science students.

I’m thankful for my children, through whose eyes I got the chance to relive the wonder of discovering the world and its workings all over again. I’m also thankful to them for getting me to grapple with some of my own unhelpful biases about science, and for helping me to get over them.

I’m thankful for the opportunity to make a living pursuing the questions that keep me up at night. I’m thankful that pursuing some of these questions can contribute to scientific practice that builds reliable knowledge while being more humane to its practitioners, to better public understanding of science (and of scientists), and perhaps even to scientists and nonscientists doing a better job of sharing a world with each other.

And, dear readers, I am thankful for you.

Mentoring new scientists in the space between how things are and how things ought to be.

Scientists mentoring trainees often work very hard to help their trainees grasp what they need to know not only to build new knowledge, but also to succeed in the context of a career landscape where score is kept and scarce resources are distributed on the basis of scorekeeping. Many focus their protégés’ attention on the project of understanding the current landscape, noticing where score is being kept, and working the system to their best advantage.

But is teaching protégés how to succeed as a scientist in the current structural social arrangements enough?

It might be enough if you’re committed to the idea that the system as it is right now is perfectly optimized for scientific knowledge-building, and for scientific knowledge-builders (and if you view all the science PhDs who can’t find permanent jobs in the research careers they’d like to have as acceptable losses). But I’d suggest that mentors can do better by their protégés.

For one thing, even if current conditions were optimal, they might well change due to influences from outside the community of knowledge-builders, as when funding levels shift at universities or at funding agencies. Expecting that the landscape will be stable over the course of a career is risky.

For another thing, it seems risky to take as given that this is the best of all possible worlds, or of all possible bundles of practices around research, communication of results, funding of research, and working conditions for scientists. Research on scientists suggests that they themselves recognize the ways in which the current system and its scorekeeping provide perverse incentives that may undercut the project of building reliable knowledge about the world. As well, the competition for scarce resources can result in a “science red in tooth and claw” dynamic that, at best, leads to the rational calculation that knowledge-builders ought to work more hours and partake of fewer off-the-clock “distractions” (like family, or even nice weather) in order not to fall behind.

Just because the scientific career landscape manifests in the particular way it does right now doesn’t mean that it must always be this way. As the body of reliable knowledge about the world is perpetually under construction, we should be able to recognize the systems and social arrangements in which scientists work as subject to modification, not carved into granite.

Restricting your focus as a mentor to imparting strategies for success given how things are may also convey to your protégés that this is the way things will always be — or that this is the way things should always be. I hope we can do better than that.

It can be a challenge to mentor with an eye to a set of conditions that don’t currently exist. Doing so involves imagining other ways of doing things. Doing it as more than a thought experiment also involves coordinating efforts with others — not just with trainees, but with established members of the professional community who have a bit more weight to throw around — to see what changes can be made and how, given the conditions you’re starting from. It may also require facing pushback from colleagues who are fine with the status quo (since it has worked well for them).

Indeed, mentoring with an eye to creating better conditions for knowledge-building and for knowledge-builders may mean agitating for changes that will primarily benefit future generations of your professional community, not your own.

But mentoring someone, welcoming them into your professional community and equipping them to be a full member of it, is not primarily about you. It is something that you do for the benefit of your protégé, and for the benefit of the professional community they are joining. Equipping your protégé for how things are is a good first step. Even better is encouraging them to imagine, to bring about, and to thrive in conditions that are better for your shared pursuit.

Faith in rehabilitation (but not in official channels): how unethical behavior in science goes unreported.

Can a scientist who has behaved unethically be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?

In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with offenders — especially when the offender turns out to be a close colleague or a trainee.

In the case of a hard line — one ethical strike and you’re out — what kind of decision does this force on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose behavior you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter over to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or do you try to address the transgression yourself, as a mentor, taking up the actual circumstances of the ethical blunder, the other options your trainee should have recognized as better ones to pursue, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of escalating misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware, and perhaps even intervening in ways they hoped would be effective.

It’s the building over time of ethical violations that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest patterns of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

As it happens, administrative sanctions imposed by the Office of Research Integrity hardly ever amount to permanent removal. Findings of scientific misconduct are much more likely to be punished with exclusion from federal funding for three years, or five years, or ten years. Still, in an extremely competitive environment, with multitudes of scientists competing for scarce grant dollars and permanent jobs, even a three-year debarment may be enough to seriously derail a scientific career. The mentor making the call about whether to report a trainee’s unethical behavior may judge the likely fallout as enough to end the trainee’s career.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may go forward reading their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this may exacerbate the problem, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

This post is an updated version of an ancestor post on my other blog.

Careers (not just jobs) for Ph.D.s outside the academy.

A week ago I was in Boston for the 2013 annual meeting of the History of Science Society. Immediately after the session in which I was a speaker, I attended a session (Sa31 in this program) called “Happiness beyond the Professoriate — Advising and Embracing Careers Outside the Academy.” The discussion there was specifically pitched at people working in the history of science (whether earning their Ph.D.s or advising those who are), but much of it struck me as broadly applicable to people in other fields — not just fields like philosophy, but also science, technology, engineering, and mathematics (STEM) fields.

The discourse in the session was framed in terms of recognizing, and communicating, that getting a job just like your advisor’s (i.e., as a faculty member at a research university with a Ph.D. program in your field — or, loosening it slightly, as permanent faculty at a college or university, even one not primarily focused on research or on training new members of the profession at the Ph.D. level) shouldn’t be a necessary condition for maintaining your professional identity and place in the professional community. Make no mistake, people in one’s discipline (including those training new members of the profession at the Ph.D. level) frequently do discount people as no longer really members of the profession for failing to succeed in the One True Career Path, but the panel asserted that they shouldn’t.

And, they provided plenty of compelling reasons why the “One True Career Path” approach is problematic. Chief among these, at least in fields like history, is that this approach feeds the creation and growth of armies of adjunct faculty, hoping that someday they will become regular faculty, and in the meantime working for very low wages relative to the amount of work they do (and relative to their training and expertise), experiencing serious job insecurity (sometimes not finding out whether they’ll have classes to teach until the academic term is actually underway), and enduring all manner of employer shenanigans (like having their teaching loads reduced to 50% of full time so the universities employing them are not required by law to provide health care coverage). Worse, insistence on One True Career Path fails to acknowledge that happiness is important.

Panelist Jim Grossman noted that the very language of “alternative careers” reinforces this problematic view by building in the assumption that there is a default career path. Speaking of “alternatives” instead might challenge the assumption that all options other than the default are lesser options.

Grossman identified other bits of vocabulary that ought to be excised from these discussions. He argued against speaking of “the job market” when one really means “the academic job market”. Otherwise, the suggestion is that you can’t really consider those other jobs without exiting the profession. Talking about “job placement,” he said, might have made sense back in the day when the chair of a hiring department called the chair of another department to say, “Send us your best man!” rather than conducting an actual job search. Those days are long gone.

And Grossman had lots to say about why we should stop talking about “overproduction of Ph.D.s.”

Ph.D.s, he noted, are earned by people, not produced like widgets on a factory line. Describing the number of new Ph.D.-holders each year as overproduction is claiming that there are too many — but again, this is too many relative to a specific kind of career trajectory assumed implicitly to be the only one worth pursuing. There are many sectors in the career landscape that could benefit from the talents of these Ph.D.-holders, so why are we not describing the current situation as one of “underconsumption of Ph.D.s”? Finally, the “overproduction of Ph.D.s” locution doesn’t seem helpful in a context where there seems to be no good way to stop departments from “producing” as many Ph.D.s as they want to. If market forces were enough to address this imbalance, we wouldn’t have armies of adjuncts.

Someone in the discussion pointed out that STEM fields have for some time had similar issues of Ph.D. supply and demand, suggesting that they might be ahead of the curve in developing useful responses which other disciplines could borrow. However, the situation in STEM fields differs in that industrial career paths have been treated as legitimate (and as not removing you from the profession). And, more generally, society seems to take the skills and qualities of mind developed during a STEM Ph.D. as useful and broadly applicable, while those developed during a history or philosophy Ph.D. are assumed to be hopelessly esoteric. That said, it was noted that while STEM fields don’t generate the same armies of adjuncts as humanities fields, they do have what might be described as the “endless postdoc” problem.

Given that structural stagnation of the academic job market is real (and has been the reality for something like 40 years in the history of science), panelist Lynn Nyhart observed that it would be foolish for Ph.D. students not to consider — and prepare for — other kinds of jobs. As well, Nyhart argued that as long as faculty take on graduate students, they have a responsibility to help them find jobs.

Despite professing to be essentially clueless about career paths other than academia, advisors do have resources they can draw upon in helping their graduate students. Among these are the network of Ph.D. alumni from their graduate program, as well as the network of classmates from their own Ph.D. training. Chances are that a number of people in these networks are doing a wide range of different things with their Ph.D.s — and that they could provide valuable information and contacts. (Also, keeping in contact with these folks recognizes that they are still valued members of your professional community, rather than treating them as dead to you if they did not pursue the One True Career Path.)

Nyhart also recommended one site in particular, especially its PhD Career Finder tab, as a valuable resource for exploring the different kinds of work for which Ph.D.s in various fields can serve as preparation. Some of the good stuff on the site is premium content, but if your university subscribes, your access to that premium content may already be paid for.

Nyhart noted that preparing Ph.D. students for a wide range of careers doesn’t require lowering discipline-specific standards, nor changing the curriculum — although, as Grossman pointed out, it might mean thinking more creatively about what skills, qualities of mind, and experiences existing courses impart. After all, skills that are good training for a career in academia — being a good teacher, an effective committee member, an excellent researcher, a persuasive writer, a productive collaborator — are skills that are portable to other kinds of careers.

David Attis, who has a Ph.D. in history of science and has been working in the private sector for about a decade, mentioned some practical skills worth cultivating for Ph.D.s pursuing private sector careers. These include having a tight two-minute explanation of your thesis geared to a non-specialist audience, being able to demonstrate your facility in approaching and solving non-academic problems, and being able to work on the timescale of business, not thesis writing (i.e., five hours to write a two-page memo is far too slow). Attis said that private sector employers are looking for people who can work well on teams and who can be flexible in contexts beyond teaching and research.

I found the discussion in this session incredibly useful, and I hope some of the important issues raised there will find their way to the graduate advisors and Ph.D. students who weren’t in the room for it, no matter what their academic discipline.

Scientific training and the Kobayashi Maru: inside the frauds of Diederik Stapel (part 3).

This post continues my discussion of issues raised in the article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. Part 1 looked at how expecting to find a particular kind of order in the universe may leave a scientific community more vulnerable to a fraudster claiming to have found results that display just that kind of order. Part 2 looked at some of the ways Stapel’s conduct did harm to the students he was supposed to be training to be scientists. Here, I want to point out another way that Stapel failed his students — ironically, by shielding them from failure.

Bhattacharjee writes:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

In the next post, we’ll look at how this other professor’s curiosity about Stapel’s too-good-to-be-true results led to the unraveling of Stapel’s fraud. But I think it’s worth pausing here to say a bit more on how very odd a training environment Stapel’s research group provided for his students.

None of his studies failed. Since, as we saw in the last post, Stapel was also conducting (or, more accurately, claiming to conduct) his students’ studies, that means none of his students’ studies failed.

This is pretty much the opposite of every graduate student experience in an empirical field that I have heard described. Most studies fail. Getting to a 50% success rate with your empirical studies is a significant achievement.

Graduate students who are also Trekkies usually come to recognize that the travails of empirical studies are like a version of the Kobayashi Maru.

Introduced in Star Trek II: The Wrath of Khan, the Kobayashi Maru is a training simulation in which Star Fleet cadets are presented with a civilian ship in distress. Saving the civilians requires the cadet to violate treaty by entering the Neutral Zone (and in the simulation, this choice results in a Klingon attack and the boarding of the cadet’s ship). Honoring the treaty, on the other hand, means abandoning the civilians and their disabled ship in the Neutral Zone. The Kobayashi Maru is designed as a “no-win” scenario. The intent of the test is to discover how trainees face such a situation. Wikipedia notes that, owing to James T. Kirk’s performance on the test, some Trekkies also view the Kobayashi Maru as a problem whose solution depends on redefining the problem.

Scientific knowledge-building turns out to be packed with plans that cannot succeed at yielding the particular pieces of knowledge the scientists hope to discover. This is because scientists are formulating plans on the basis of what is already known to try to reveal what isn’t yet known — so knowing where to look, or what tools to use to do the looking, or what other features of the world are there to confound your ability to get clear information with those tools, is pretty hard.

Failed attempts happen. If they’re the sort of thing that will crush your spirit and leave you unable to shake it off and try it again, or to come up with a new strategy to try, then the life of a scientist will be a pretty hard life for you.

Grown-up scientists have studies fail all the time. Graduate students training to be scientists do, too. But graduate students also have mentors who are supposed to help them bounce back from failure — to figure out the most likely sources of failure, whether it’s worth trying the study again, whether a new approach would be better, whether some crucial piece of knowledge has been learned despite the failure of what was planned. Mentors give scientific trainees a set of strategies for responding to particular failures, and they also give reassurance that even good scientists fail.

Scientific knowledge is built by actual humans who don’t have perfect foresight about the features of the world as yet undiscovered, humans who don’t have perfectly precise instruments (or hands and eyes using those instruments), humans who sometimes mess up in executing their protocols. Yet the knowledge is built, and it frequently works pretty well.

In the context of scientific training, it strikes me as malpractice to send new scientists out into the world with the expectation that all of their studies should work, and without any experience grappling with studies that don’t work. Shielding his students from their Kobayashi Maru is just one more way Diederik Stapel cheated them out of a good scientific training.

Failing the scientists-in-training: inside the frauds of Diederik Stapel (part 2).

In this post, I’m continuing my discussion of the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. The last post considered how being disposed to expect order in the universe might have made other scientists in Stapel’s community less critical of his (fabricated) results than they could have been. Here, I want to shift my focus to some of the harm Stapel did beyond introducing lies to the scientific literature — specifically, the harm he did to the students he was supposed to be training to become good scientists.

I suppose it’s logically possible for a scientist to commit misconduct in a limited domain — say, to make up the results of his own research projects but to make every effort to train his students to be honest scientists. This doesn’t strike me as a likely scenario, though. Publishing fraudulent results as if they were factual is lying to one’s fellow scientists — including the generation of scientists one is training. Moreover, most research groups pursue interlocking questions, meaning that the questions the grad students are working to answer generally build on pieces of knowledge the boss has built — or, in Stapel’s case, “built”. This means that at minimum, a fabricating PI is probably wasting his trainees’ time by letting them base their own research efforts on claims that there’s no good scientific reason to trust.

And as Bhattacharjee describes the situation for Stapel’s trainees, things for them were even worse:

He [Stapel] published more than two dozen studies while at Groningen, many of them written with his doctoral students. They don’t appear to have questioned why their supervisor was running many of the experiments for them. Nor did his colleagues inquire about this unusual practice.

(Bold emphasis added.)

I’d have thought that one of the things a scientist-in-training hopes to learn in the course of her graduate studies is not just how to design a good experiment, but how to implement it. Making your experimental design work in the real world is often much harder than it seems like it will be, but you learn from these difficulties — about the parameters you ignored in the design that turn out to be important, about the limitations of your measurement strategies, about ways the system you’re studying frustrates the expectations you had about it before you were actually interacting with it.

I’ll even go out on a limb and say that some experience doing experiments can make a significant difference in a scientist’s skill in conceiving of experimental approaches to problems.

That Stapel cut his students out of doing the experiments was downright weird.

Now, scientific trainees probably don’t have the most realistic picture of precisely what competencies they need to master to become successful grown-up scientists in a field. They trust that the grown-up scientists training them know what these competencies are, and that these grown-up scientists will make sure that they encounter them in their training. Stapel’s trainees likely trusted him to guide them. Maybe they thought that he would have them conducting experiments if that were a skill that would require a significant amount of time or effort to master. Maybe they assumed that implementing the experiments they had designed was just so straightforward that Stapel thought they were better served working to learn other competencies instead.

(For that to be the case, though, Stapel would have to be the world’s most reassuring graduate advisor. I know my impostor complex was strong enough that I wouldn’t have believed I could do an experiment my boss or my fellow grad students viewed as totally easy until I had actually done it successfully three times. If I had to bet money, it would be that some of Stapel’s trainees wanted to learn how to do the experiments, but they were too scared to ask.)

There’s no reason, however, that Stapel’s colleagues should have thought it was OK that his trainees were not learning how to do experiments by taking charge of doing their own. If they did know and they did nothing, they were complicit in a failure to provide adequate scientific training to trainees in their program. If they didn’t know, that’s an argument that departments ought to take more responsibility for their trainees and to exercise more oversight rather than leaving each trainee to the mercies of his or her advisor.

And, as becomes clear from the New York Times Magazine article, doing experiments wasn’t the only piece of standard scientific training of which Stapel’s trainees were deprived. Bhattacharjee describes the revelation when a colleague collaborated with Stapel on a piece of research:

Stapel and [Ad] Vingerhoets [a colleague of his at Tilburg] worked together with a research assistant to prepare the coloring pages and the questionnaires. Stapel told Vingerhoets that he would collect the data from a school where he had contacts. A few weeks later, he called Vingerhoets to his office and showed him the results, scribbled on a sheet of paper. Vingerhoets was delighted to see a significant difference between the two conditions, indicating that children exposed to a teary-eyed picture were much more willing to share candy. It was sure to result in a high-profile publication. “I said, ‘This is so fantastic, so incredible,’ ” Vingerhoets told me.

He began writing the paper, but then he wondered if the data had shown any difference between girls and boys. “What about gender differences?” he asked Stapel, requesting to see the data. Stapel told him the data hadn’t been entered into a computer yet.

Vingerhoets was stumped. Stapel had shown him means and standard deviations and even a statistical index attesting to the reliability of the questionnaire, which would have seemed to require a computer to produce. Vingerhoets wondered if Stapel, as dean, was somehow testing him. Suspecting fraud, he consulted a retired professor to figure out what to do. “Do you really believe that someone with [Stapel’s] status faked data?” the professor asked him.

“At that moment,” Vingerhoets told me, “I decided that I would not report it to the rector.”

Stapel’s modus operandi was to make up his results out of whole cloth — to produce “findings” that looked statistically plausible without the muss and fuss of conducting actual experiments or collecting actual data. Indeed, since the thing he was creating that needed to look plausible enough to be accepted by his fellow scientists was the analyzed data, he didn’t bother making up raw data from which such an analysis could be generated.
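The article doesn’t describe any numerical forensics, but it’s worth noting that summary statistics fabricated without underlying raw data can fail simple consistency checks. As an illustration (my example, not anything the investigators are reported to have done), here is a granularity check in the spirit of what was later formalized as the GRIM test: when raw scores are integers, a reported mean for a sample of size n must be reachable as some integer total divided by n.

```python
# A simple "granularity" consistency check (in the spirit of the GRIM test):
# if raw scores are integers (e.g., Likert items), a reported mean for a
# sample of size n must equal some integer total divided by n.
def mean_is_possible(reported_mean, n, decimals=2):
    """Check whether reported_mean could arise from n integer-valued scores."""
    total = reported_mean * n
    # The nearest mean actually achievable from an integer total:
    nearest = round(total) / n
    return round(nearest, decimals) == round(reported_mean, decimals)

print(mean_is_possible(3.47, 15))  # True: 52 / 15 rounds to 3.47
print(mean_is_possible(3.48, 15))  # False: no sum of 15 integers works
```

A reported mean of 3.48 from 15 integer-valued responses, for instance, is arithmetically impossible, since 52/15 rounds to 3.47 and 53/15 to 3.53. Fabricated numbers that were never computed from real scores can trip over exactly this sort of constraint.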

Connecting the dots here, this surely means that Stapel’s trainees must not have gotten any experience dealing with raw data or learning how to apply methods of analysis to actual data sets. This left another gaping hole in the scientific training they deserved.

It would seem that those being trained by other scientists in Stapel’s program were getting some experience in conducting experiments, collecting data, and analyzing their data — since that experimentation, data collection, and data analysis became fodder for discussion in the ethics training that Stapel led. From the article:

And yet as part of a graduate seminar he taught on research ethics, Stapel would ask his students to dig back into their own research and look for things that might have been unethical. “They got back with terrible lapses,” he told me. “No informed consent, no debriefing of subjects, then of course in data analysis, looking only at some data and not all the data.” He didn’t see the same problems in his own work, he said, because there were no real data to contend with.

I would love to know the process by which Stapel’s program decided that he was the best one to teach the graduate seminar on research ethics. I wonder if this particular teaching assignment was one of those burdens that his colleagues tried to dodge, or if research ethics was viewed as a teaching assignment requiring no special expertise. I wonder how it’s sitting with them that they let a now-famous cheater teach their grad students how to be ethical scientists.

The whole “those who can’t do, teach” adage rings hollow here.

Whither mentoring?

Drugmonkey takes issue with the assertion that mentoring is dead*:

Seriously? People are complaining that mentoring in academic science sucks now compared with some (unspecified) halcyon past?


What should we say about the current state of mentoring in science, as compared to scientific mentoring in days of yore? Here are some possibilities:

Maybe there has been a decline in mentoring.

This might be because mentoring is not incentivized in the same way, or to the same degree, as publishing, grant-getting, etc. (Note, though, that some programs require evidence of successful mentoring for faculty promotion. Note also that some funding mechanisms require that the early-career scientist being funded have a mentor.)

Or it might be because no one trained the people who are expected to mentor (such as PIs) in how to mentor. (In this case, though, we might take this as a clue that the mentoring these PIs received in days of yore was not so perfect after all.)

Or, it might be that mentoring seems to PIs like a risky move given that it would require too much empathetic attachment with the trainees who are also one’s primary source of cheap labor, and whose prospects for getting a job like the PI’s are perhaps nowhere near as good as the PI (or the folks running the program) have led the trainees to believe.

Or, possibly PIs are not mentoring so well because the people they are being asked to mentor are increasingly diverse and less obviously like the PIs.

Maybe mentoring is no worse than it has ever been.

Perhaps it has always been a poorly defined part of the advisor’s job duties, and one for which hardly anyone receives formal training. Moreover, the fact that it may depend on inclination and personal compatibility might make it chancier than things like joining a lab or writing a dissertation.

Maybe mentoring has actually gotten better than it used to be.

It’s even possible that increased diversity in training populations might tend to improve mentoring by forcing PIs to be more conscious of their interactions (since they recognize that the people they are mentoring are not just like them). Similarly, awareness that trainees are facing a significantly different employment landscape than the one the mentor faced might help the mentor think harder about what kind of advice could actually be useful.

Here, I think that we might also want to recognize the possibility that what has changed is not the level of mentoring being delivered, but rather the expectations the trainees have for what kind of mentoring they should receive.

Pulling back from the question of whether mentoring has gotten better, worse, or stayed the same, there are two big issues that prevent us from being able to answer that question. One is whether we can get our hands on sensible empirical data to make anything like an apples-to-apples comparison of mentoring in different times (or, for that matter, in different places). The other is whether we’re all even talking about the same thing when we’re holding forth about mentoring and its putative decline.

Let’s take the second issue first. What do we have in mind when we say that trainees should have mentors? What exactly is it that they are supposed to get out of mentoring?

Vivian Weil [1], among others, points us to the literary origin of the term mentor, and the meanings this origin suggests, in the relationship between the characters Mentor and Telemachus in Homer’s epic poem, the Odyssey. Telemachus was the son of Odysseus; his father was off fighting the Trojan war, and his mother was busy fending off suitors (which involved a lot of weaving and unweaving), so the kid needed a parental surrogate to help him find his way through a confusing and sometimes dangerous world. Mentor took up that role.**

At the heart of mentoring, Weil argues, is the same kind of commitment to protect the interests of someone just entering the world of your discipline, and to help the mentee to develop skills sufficient to take care of himself or herself in this world:

All the activities of mentoring, but especially the nurturing activities, require interacting with those mentored, and so to be a mentor is to be involved in a relationship. The relationships are informal, fully voluntary for both members, but at least initially and for some time thereafter, characterized by a great disparity of experience and wisdom. … In situations where neophytes or apprentices are learning to “play the game”, mentors act on behalf of the interests of these less experienced, more vulnerable parties. (Weil, 473)

In the world of academic science, the guidance a mentor might offer would then be focused on the particular challenges the mentee is likely to face in graduate school, the period in which one is expected to make the transition from being a learner of scientific knowledge to being a maker of new knowledge:

On the traditional model, the mentoring relationship is usually thought of as gradual, evolving, long-term, and involving personal closeness. Conveying technical understanding and skills and encouraging investigative efforts, the mentor helps the mentee move through the graduate program, providing feedback needed for reaching milestones in a timely fashion. Mentors interpret the culture of the discipline for their mentees, and help them identify good practices amid the complexities of the research environment. (Weil, 474)

A mentor, in other words, is a competent grown-up member of the community in which the mentee is striving to become a grown-up. The mentor understands how things work, including what kinds of social interactions are central to conducting research, critically evaluating knowledge claims, and coordinating the efforts of members of the scientific community more generally.

Weil emphasizes that the role of mentor, understood in this way, is not perfectly congruent with the role of the advisor:

While mentors advise, and some of their other activities overlap with or supplement those of an advisor, mentors should not be confused with advisors. Advising is a structured role in graduate education. Advisors are expected to perform more formal and technical functions, such as providing information about the program and degree requirements and periodic monitoring of advisees’ progress. The advisor may also have another structured role, that of research (dissertation) director, for advisors are often principal investigators or laboratory directors for projects on which advisees are working. In the role of research director, they “may help students formulate research projects and instruct them in technical aspects of their work such as design, methodology, and the use of instrumentation.” Students sometimes refer to the research or laboratory director as “boss”, conveying an employer/employee relationship rather than a mentor/mentee relationship. It is easy to see that good advising can become mentoring and, not surprisingly, advisors sometimes become mentors. Nevertheless, it is important to distinguish the institutionalized role of advisor from the informal activities of a mentor. (Weil, 474)

Mentoring can happen in an advising relationship, but the evaluation an advisor needs to do of the advisee may be in tension with the kind of support and encouragement a mentor should give. The advisor might have to sideline an advisee in the interests of the larger research project; the mentor would try to prioritize the mentee’s interests.

Add to this that the mentoring relationship is voluntary to a greater degree than the advising relationship (you have to be someone’s advisee to get through a graduate program), and that the mentoring interaction is personal rather than strictly professional.

Among other things, this suggests that good advising is not necessarily going to achieve the desired goal of providing good mentoring. It also suggests that it’s a good idea to seek out multiple mentors, so that in situations where an advisor’s conflicting duties prevent mentoring, another mentor without those conflicts can pick up the slack.

So far, we have a description of the spirit of the relationship between mentor and mentee, and a rough idea of how that relationship might advance the welfare of the mentee, but it’s not clear that this is precise enough that we could use it to assess mentoring “in the wild”.

And surely, if we want to do more than just argue based on subjective anecdata about how mentoring for today’s scientific trainees compares to the good old days, we need to find some way to be more precise about the mentoring we have in mind, and to measure whether it’s happening. (Absent a time machine, or some stack of data collected on mentoring in the halcyon past, we probably have to acknowledge that we just don’t know how past mentoring would have measured up.)

A faculty team from the School of Nursing at Johns Hopkins University, led by Ronald A. Berk [2], grappled with the issue of how to measure whether effective mentoring was going on. Here, the mentoring relationships in question were between more junior and more senior faculty members (rather than between graduate students and faculty members), and the impetus for developing a reliable way to measure mentoring effectiveness was the fact that evidence of successful mentoring activities was a criterion for faculty promotion.

Finding no consistent definition of mentoring in the literature on medical faculty mentoring programs, Berk et al. put forward this one:

A mentoring relationship is one that may vary along a continuum from informal/short-term to formal/long-term in which faculty with useful experience, knowledge, skills, and/or wisdom offers advice, information, guidance, support, or opportunity to another faculty member or student for that individual’s professional development. (Note: This is a voluntary relationship initiated by the mentee.) (Berk et al., 67)

Then, they spelled out central responsibilities within this relationship:

[F]aculty must commit to certain concrete responsibilities for which he or she will be held accountable by the mentees. Those concrete responsibilities are:

  • Commits to mentoring
  • Provides resources, experts, and source materials in the field
  • Offers guidance and direction regarding professional issues
  • Encourages mentee’s ideas and work
  • Provides constructive and useful critiques of the mentee’s work
  • Challenges the mentee to expand his or her abilities
  • Provides timely, clear, and comprehensive feedback to mentee’s questions
  • Respects mentee’s uniqueness and his or her contributions
  • Appropriately acknowledges contributions of mentee
  • Shares success and benefits of the products and activities with mentee

(Berk et al., 67)

These were then used to construct a “Mentorship Effectiveness Scale” that mentees could use to share their perceptions of how well their mentors did on each of these responsibilities.

Here, one might worry that there is a divergence between how effective a mentee perceives the mentor to be in each of these areas and how effective the mentor actually is. Still, tracking the perceptions of the mentees with the instrument developed by Berk et al. provides some kind of empirical data. In discussions about whether mentoring is getting better or worse, such data might be useful.
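For what it’s worth, the data such an instrument generates is straightforward to summarize. Here is a minimal sketch, assuming a hypothetical 1-to-5 agreement scale (the actual instrument’s scale and item wording are given in Berk et al.):

```python
# Illustrative only: aggregating mentee ratings of a mentor on each
# responsibility, assuming a hypothetical 1-5 agreement scale.
from statistics import mean

def summarize(ratings):
    """ratings maps each responsibility to a list of mentee scores (1-5);
    returns the mean rating per responsibility, rounded to 2 places."""
    return {item: round(mean(scores), 2) for item, scores in ratings.items()}

ratings = {
    "Commits to mentoring": [5, 4, 5],
    "Encourages mentee's ideas and work": [3, 4, 4],
    "Provides timely, clear, and comprehensive feedback": [4, 4, 5],
}
print(summarize(ratings))
```

Per-item averages like these are, of course, exactly the sort of measurable proxies for a richer relationship that I worry about below.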

And, if this data isn’t enough, it should be possible to work out strategies to get the data you want: Survey PIs to see what kind of mentoring they want to provide and how this compares to what kind of mentoring they feel able to provide. (If there are gaps here, follow-up questions might explore the perceived impediments to delivering certain elements of mentoring.) Survey the people running graduate programs to see what kind of mentoring they think they are (or should be) providing and what kind of mechanisms they have in place to ensure that if it doesn’t happen informally between the student and the PI, it’s happening somewhere.

To the extent that successful mentoring is already linked to tangible career rewards in some places, being able to make a reasonable assessment of it seems appropriate.

It’s possible that making it a standard thing to evaluate mentoring and to tie it to tangible career rewards (or penalties, if one does an irredeemably bad job of it) might help focus attention on mentoring as an important thing for grown-up members of the scientific community to do. This might also lead to more effort to help people learn how to mentor effectively and to offer support and remediation for people whose mentoring skills are not up to snuff.

But, I have a worry (not a huge one, but not nanoscale either). Evaluation of effective mentoring seems to rely on breaking out particular things the mentor does for the mentee, or particular kinds of interactions that take place between the two. In other words, the assessment tracks measurable proxies for a more complicated relationship.

That’s fine, but there’s a risk that a standardized assessment might end up reducing the “mentorship” that mentors offer, and that mentees seek, to these proxies. Were this to happen, we might lose sight of the broader, richer, harder-to-evaluate thing that mentoring can be — an entanglement of interests, a transmission of wisdom, and of difficult questions, and of hopes, and of fears, in what boils down to a personal relationship based on a certain kind of care.

The thing we want the mentorship relationship to be is not something that you could force two people to be in — any more than we could force two people to be in love. We feel the outcomes are important, but we cannot compel them.

And obviously, the assessable outcomes that serve as proxies for successful mentoring are better than nothing. Still, it’s not unreasonable for us to hope for more as mentees, nor to try to offer more as mentors.

After all, having someone on the inside of the world of which you are trying to become a part, someone who knows the way and can lead you through, and someone who believes in you and your potential even a little more than you believe in yourself, can make all the difference.

*Drugmonkey must know that my “Ethics in Science” class will be discussing mentoring this coming week, or else he’s just looking for ways to distract me from grading.

**As it happened, Mentor was actually Athena, the goddess of wisdom and war, in disguise. Make of that what you will.

[1] Weil, V. (2001) Mentoring: Some Ethical Considerations. Science and Engineering Ethics. 7 (4): 471-482.

[2] Berk, R. A., Berg, J., Mortimer, R., Walton-Moss, B., and Yeo, T. P. (2005) Measuring the Effectiveness of Faculty Mentoring Relationships. Academic Medicine. 80: 66-71.

Everyday mentors: a tribute to Dr. James E. Lu Valle.

People talk a lot about the importance of mentors, and scientific trainees are regularly encouraged to find strong mentors to help them find their way as they work to become grown-up scientists. Sometimes, though, mentoring doesn’t happen in explicit coaching sessions but in casual conversations. And sometimes, when you’re not looking for them, mentors find you.

Back in the spring and autumn of 1992, I was a chemistry graduate student starting to believe that I might actually get enough of my experiments to work to get my Ph.D. As such, I did what senior graduate students in my department were supposed to do: I began preparing myself to interview with employers who came to my campus (an assortment of industry companies and national labs), and I made regular visits to my department’s large job announcement binder (familiarly referred to as “The Book of Job”).

What optimism successes in the lab giveth, the daunting terrain laid out in “The Book of Job” taketh away.

It wasn’t just the announcements of postdoctoral positions (positions, I had been told, which provided the standard path by which to develop research experience in an area distinct from the one that was the focus of the doctoral research) that listed as prerequisites three or more years of research experience in that very area. The very exercise of trying to imagine myself meeting the needs of an academic department looking for a certain kind of researcher was … really hard. It sounded like they were all looking for researchers significantly more powerful than I felt myself to be at that point, and I wasn’t sure if it was realistic to expect that I could develop those powers.

I was having a crisis of faith, but I was trying to keep it under wraps because I was pretty sure that having that crisis was a sign that my skills and potential as a chemist were lacking.

It was during my regularly scheduled freak-out over the binder in the department lobby that I really got to know Dr. Lu Valle. While I was in the department, his official position was as a “visiting scholar”, but since he had been the director of undergraduate labs in the department for years before he retired, he wasn’t really visiting, he was at home. And Dr. Lu Valle took it upon himself to make me feel at home, too — not just in the department, but in chemistry.

It started with light conversation. Dr. Lu Valle would ask what new listings had turned up in the binder since the last time he had seen me. Then he’d ask about what kind of listings I was hoping would turn up there. Soon, we were talking about what kind of things I hoped for in a chemical career, and about what scared me in my imagination of a chemical career.

That he bothered to draw me out and let me talk about my fears made those fears a lot more manageable.

But Dr. Lu Valle went even further than just getting me to voice my fears. He reassured me that it was normal for good chemists to have these fears, and that everyone had to get across the chasm between knowing you could be a good student and believing you could be a successful grown-up scientist. And he took it as an absolute given that I could get across this chasm.

Now, I should note for the record that my advisor did much to encourage me (along with pressing me to think harder, to make sure my data was as good as it could be, to anticipate flaws in my interpretations, and so forth). But the advisor-advisee relationship can be fraught. When you’ve been busting your hump in the lab, showing weakness of any sort in your interactions with your PI can feel, viscerally, like a bad idea. I think that for a good stretch of time in my graduate lab, I put a spin on many of my interactions with my PI that was significantly more optimistic than I felt inside. (Then, I worked like mad so that my optimistic projections of what I would be able to accomplish had a reasonable chance of coming true.)

Being able to voice some of my worries to a senior chemist who didn’t need me to make headway on one of his research projects — and for whom reassuring me wasn’t part of the official job description — really helped. Dr. Lu Valle didn’t need to mentor me. He didn’t need to interact with me at all. But he did.

Somewhere in the course of our discussions, as we were talking about the frustrations of getting experiments to work, Dr. Lu Valle mentioned that his advisor had made him completely disassemble, then completely reassemble, complex apparatus — not just to get an experiment under control, but to persuade him that taking the whole thing apart and putting it all back together (even repeatedly) was within his powers.

That was the conversation in which I learned that Dr. Lu Valle’s advisor had been Linus Pauling.

Now, maybe it amped up the pep-talks a little that a senior scientist who seemed to have complete faith that I was going to do fine had been trained by a guy who won two Nobel Prizes. But mostly, I think it reassured me that Dr. Lu Valle remembered what it was like to be a graduate student and to have to get over the chasm of not knowing if you can do it to believing that you can.

After the season of job interviews passed, I drifted away from “The Book of Job” and back to my lab to get some more experiments done and to get writing. Then, in January of 1993, while he was on vacation in New Zealand, Dr. Lu Valle died.

It was at his memorial service (which happened to be on my twenty-fifth birthday) that I learned the remarkable details of Dr. Lu Valle’s life that didn’t come up in our conversations in the department lobby. A press release from the Stanford University News Office describes some of the high points:

James E. Lu Valle, a visiting scholar at Stanford and retired director of undergraduate laboratories in the Chemistry Department, died Jan. 30 in Te Anau, New Zealand, while on vacation. He was 80.

During a long and varied career, Lu Valle’s research covered electron diffraction, photochemistry, magnetic susceptibility, reaction kinetics and mechanisms, photographic theory, magnetic resonance, solid-state physics, neurochemistry and the chemistry of memory and learning.

Lu Valle was well known in track circles as the 400-meter bronze medal winner of the 1936 Olympics in Berlin. …

Lu Valle ran in the Olympics the same year he graduated Phi Beta Kappa with a bachelor’s degree in chemistry from the University of California-Los Angeles. He then returned for his master’s degree in chemistry and physics, during which time he helped found the graduate student association and served as its first president. In 1983, UCLA named its new Graduate Student Union in his honor.

Lu Valle’s career in chemistry started at age 8, when he found a chemistry set under the Christmas tree. He tried every experiment possible, and eventually filled the house with smoke. At his mother’s insistence, the rest of his childhood experiments took place on the porch.

In 1940, Lu Valle earned a doctorate in chemistry and math under the tutelage of Linus Pauling at the California Institute of Technology. He then taught at Fisk University in Tennessee, after which he spent 10 years at Eastman Kodak working on color photography.

He was the first African American to be employed in the Eastman Kodak laboratories. While there, Lu Valle went on loan to the National Defense Research Committee to conduct research at the University of Chicago and the California Institute of Technology on devices for monitoring carbon dioxide in planes.

He later served as director of research at Fairchild Camera and Instrument and became director of physical and chemical research at Smith-Corona Merchant Labs in Palo Alto in 1969.

During that time, he made extensive use of the Chemistry Department library, in the process getting to know faculty members. When SCM closed its Palo Alto operations, the Chemistry Department asked him to head the freshman labs.

“He was eminently qualified, a first-class chemist,” Professor Douglas Skoog recounted in 1984, “and we were glad to have him. In fact, he was overqualified for the job.”

As head of the labs for seven years, his task was to assign teaching assistants and make sure that the right equipment was always ready.

In practice, he became a friend and counselor to the chemistry majors and pre-med students passing through the department. In an average year, 900 students would start freshman chemistry.

Lu Valle is survived by his wife of 47 years, Jean Lu Valle, of Palo Alto, and three children. Son John Vernon Lu Valle is an engineer with Allied Signal under contract to the Jet Propulsion Laboratory in Pasadena, and Michael James Lu Valle is associated with Bell Laboratories in New Jersey. Daughter Phyllis Ann Lu Valle-Burke is a molecular biologist at Harvard Medical School. A sister, Mayme McWhorter of Los Angeles, also survives.

Dr. Lu Valle never talked to me about what it was like to be an African American athlete competing in Hitler’s Olympics. He didn’t share with me his experience of being the first African American scientist working at Eastman Kodak labs. We didn’t discuss the details of the research that he did across so many different scientific areas.

If I had known these facets of his past while he was alive, I would have liked to ask him about them.

But Dr. Lu Valle was, I think, more concerned with what I needed as someone trying to imagine myself taking on the role of a grown-up chemist. His success as the director of undergraduate labs had a lot to do with his ability and willingness to tune into what students needed, and then to provide it. With all of those accomplishments under his belt — accomplishments which potentially might have made a student like me think, “Well of course an exceptional person with so much talent and drive succeeded at science, but I’m not that exceptional!” — he wasn’t afraid to dig back to his experience of what it was like to be a graduate student, to remember the uncertainty, frustration, and fear that are a part of that experience, and to say, “I got through it, and I have every reason to believe that you will, too.”

I don’t know whether personal experience is what developed Dr. Lu Valle’s awareness of how important this kind of mentoring can be, but it wouldn’t surprise me a bit. As an African American graduate student at Caltech in the 1930s, I’m sure he had lots of people expecting him to fail. Having people in his life who expected that of course he would succeed — whether his parents, his advisor, or someone else with standing as a grown-up scientist — may have helped him propel himself through the inescapable moments of self-doubt to the distinguished trajectory his professional life took.

It may not be accidental, though, that in a very white, very male chemistry department, Dr. Lu Valle was the one who put himself in my path when I was doubting myself most and reassured me that I would do just fine. Maybe he knew what it was like to have someone provide that kind of support when you need it.

I count myself as lucky that, in his retirement, Dr. Lu Valle still felt that the chemistry department was a home to him. Because of him, that department and the larger community of chemists felt like more of a home to me.