Reflections on being part of a science blogging network.

This is another post following up on a session at ScienceOnline Together 2014, this one called Blog Networks: Benefits, Role of, Next Steps, and moderated by Scientific American Blogs Editor Curtis Brainard. You should also read David Zaslavsky’s summary of the session and what people were tweeting on the session hashtag, #scioBlogNet.

My own thoughts are shaped by writing an independent science blog that less than a year later became part of one of the first “pro” science blogging networks when it launched in January 2006, moving my blog from that network to a brand new science blogging community in August 2010, and keeping that blog going while starting Doing Good Science here on the Scientific American Blog Network when it launched in July 2011. This is to say, I’ve been blogging in the context of science blogging networks for a long time, and have seen the view from a few different vantage points.

That said, my view is also very particular and likely peculiar — for example, I’m a professional philosopher (albeit one with a misspent scientific youth) blogging about science while trying to hold down a day-job as a professor in a public university during a time of state budget terror and to maintain a reasonable semblance of family life. My blogging is certainly more than a hobby — in many ways it provides vital connective tissue that helps knit together my weirdly interdisciplinary professional self into a coherent whole (and has thus been evaluated as a professional activity for the day-job) — but, despite the fact that I’m a “pro” who gets paid to blog here, it’s not something I could live on.

In my experience, a science blogging network can be a great place to get visibility and to build an audience. This can be especially useful early in one’s blogging career, since it’s a big, crowded blogosphere out there. Networks can also be handy for readers, since they deliver more variety and more of a regular flow of posts than most individual bloggers can do (especially when we’re under the weather and/or catching up on grading backlogs). It’s worth noting, though, that very large blog networks can provide a regular flow of content that frequently resembles a firehose. Some blog networks provide curation in the form of featured content or topical feeds. Many provide something like quality control, although sometimes it’s exercised primarily in the determination of who will blog in the network.

Blog networks can also have a distinctive look and feel, embodied in shared design elements, or in an atmosphere set within the commenting community, for example. Bloggers within blog networks may have an easier time finding opportunities for productive cross-pollination or coordination of efforts with their network neighbors, whether to raise political awareness or philanthropic dollars or simply to contribute many distinctive perspectives to the discussion of a particular topic. Bloggers sharing networks can also become friends (although sometimes, being humans, they develop antagonisms instead).

On a science blogging network, bloggers seem also to regularly encounter the question of what counts as a proper “science blog” — about whose content is science-y enough, and what exactly that should mean. This kind of policing of boundaries happens even here.

While the confluence of different people blogging on similar terrain can open up lots of opportunities for collaboration, there are moments when the business of running a blog network (at least when that blog network is a commercial enterprise) can be in tension with what the bloggers value about blogging in the network. Sometimes the people running the network aren’t the same as the people writing the blogs, and they end up having very different visions, interests, pressing needs, and understandings of their relationships to each other.

Sometimes bloggers and networks grow apart and can’t give each other what they need for the relationship to continue to be worthwhile going forward.

And, while blogging networks can be handy, there are other ways that online communicators and consumers of information can find each other and coordinate their efforts online. Twitter has seen the rise of tremendously productive conversations around hashtags like #scistuchat and #BlackandSTEM, and undoubtedly similarly productive conversations among science-y folk regularly coalesce on Facebook and Tumblr and in Google Hangouts. Some of these online interactions lead to face-to-face collaborations like the DIY Science Zone at GeekGirlCon, and conference proposals made to traditional professional societies sometimes get their start in online conversations.

Networks can be nice. They can even help people transition from blogging into careers in science writing and outreach. But even before blog networks, awesome people managed to find each other and to come up with awesome projects to do together. Networks can lower the activation energy for this, but there are other ways to catalyze these collaborations, too.

Brief thoughts on uncertainty.

For context, these thoughts follow upon a very good session at ScienceOnline Together 2014 on “How to communicate uncertainty with the brevity that online communication requires.” Two of the participants in the session used Storify to collect tweets of the discussion (here and here).

About a month later, this does less to answer the question of the session title than to give you a peek into my thoughts about science and uncertainty. This may be what you’ve come to expect of me.

Humans are uncomfortable with uncertainty, at least in those moments when we notice it and have to make decisions with more than entertainment value riding on them. We’d rather have certainty, since that makes it easier to enact plans that won’t be thwarted.

Science is (probably) a response to our desire for more certainty. Finding natural explanations for natural phenomena, stable patterns in our experience, gives us a handle on our world and what we can expect from it that’s less capricious than “the gods are in a mood today.”

But the scientific method isn’t magic. It’s a tool that cranks out explanations of what’s happened, predictions of what’s coming up, based on observations made by humans with our fallible human senses.

The fallibility of those human senses (plus things like the trickiness of being certain you’re awake and not dreaming) was (probably) what drove philosopher RenĂ© Descartes in his famous Meditations, the work that yielded the conclusion “I think, therefore I am” and that featured not one but two proofs of the existence of a God who is not a deceiver. Descartes was not pursuing a theological project here. Rather, he was trying to explain how empirical science — science relying on all kinds of observations made by fallible humans with their fallible senses — could possibly build reliable knowledge. Trying to put empirical science on firm foundations, he engaged in his “method of doubt” to locate some solid place to stand, some thing that could not be doubted. That something was “I think, therefore I am” — in other words, if I’m here doubting that my experience is reliable, that I’m awake instead of dreaming, that I’m a human being rather than a brain in a vat, I can at least be sure that there exists a thinking thing that’s doing the doubting.

From this fact that could not be doubted, Descartes tried to climb back out of that pit of doubt and to work out the extent to which we could trust our senses (and the ways in which our senses were likely to mislead us). This involved those two proofs of the existence of a God who is not a deceiver, plus a whole complicated story of minds and brains communicating with each other (via the wiggling of our pineal glands) — which is to say, it was not entirely persuasive. Still, it was all in the service of getting us more certainty from our empirical science.

Certainty and its limits are at the heart of another piece of philosophy, “the problem of induction,” this one most closely associated with David Hume. The problem here rests on our basic inability to be certain that what we have so far observed of our world will be a reliable guide to what we haven’t observed yet, that the future will be like the past. Observing a hundred, or a thousand, or a million ravens that are black is not enough for us to conclude with absolute certainty that the ravens we haven’t yet observed must also be black. Just because the sun rose today, and yesterday, and every day through recorded human history to date does not guarantee that it will rise tomorrow.

But while Hume pointed out the limits of what we could conclude with certainty from our observations at any given moment — limits which impelled Karl Popper to assert that the scientific attitude was one of trying to prove hypotheses false rather than seeking support for them — he also acknowledged our almost irresistible inclination to believe that the future will be like the past, that the patterns of our experience so far will be repeated in the parts of the world still waiting for us to experience them. Logic can’t guarantee these patterns will persist, but our expectations (especially in cases where we have oodles of very consistent observations) feel like certainty.

Scientists are trained to recognize the limits of their certainty when they draw conclusions, offer explanations, make predictions. They are officially on the hook to acknowledge their knowledge claims as tentative, likely to be updated in the light of further information.

This care in acknowledging the limits of what careful observation and logical inference guarantee us can make it appear to people who don’t obsess over uncertainties in everyday life that scientists don’t know what’s going on. But the existence of some amount of uncertainty does not mean we have no idea what’s going on, no clue what’s likely to happen next.

What non-scientists who dismiss scientific knowledge claims on the basis of acknowledged uncertainty forget is that making decisions in the face of uncertainty is the human condition. We do it all the time. If we didn’t, we’d make no decisions at all (or else we’d be living a sustained lie about how clearly we see into our future).

Strangely, though, we seem to have a hard time reconciling our everyday pragmatism about everyday uncertainty with our suspicion about the uncertainties scientists flag in the knowledge they share with us. Maybe we’re making the jump from viewing scientific knowledge as reliable to demanding that it be perfect. Or maybe we’re just not very reflective about how easily we navigate uncertainty in our everyday decision-making.

I see this firsthand when my “Ethics in Science” students grapple with ethics case studies. At first they are freaked out by the missing details, the less-than-perfect information about what will happen if the protagonist does X or if she does Y instead. How can we make good decisions about what the protagonist should do if we can’t be certain about those potential outcomes?

My answer to them: The same way we do in real life, whose future we can’t see with any more certainty.

When there’s more riding on our decisions, we’re more likely to notice the gaps in the information that informs those decisions, the uncertainty inherent in the outcomes that will follow on what we decide. But we never have perfect information, and neither do scientists. That doesn’t mean our decision-making is hopeless, just that we need to get comfortable making do with the certainty we have.

Engagement with science needs more than heroes

Narratives about the heroic scientist are not what got me interested in science.

It was (and still is) hard for me to connect with a larger-than-life figure when my own aspirations have always been pretty life-sized.

Also, there’s the fact that the scientific heroes whose stories have been told have mostly been heroes, not heroines, just one more issue making it harder for me to relate to their experiences. And when the stories of pioneering women of science are told, these stories frequently emphasize how these heroines made it against big odds, how exceptional they are. Having to be exceptional even to succeed in scientific work is not a prospect I find inviting.

While tales of great scientific pioneers never did much for me, I am enraptured with science. The hook that drew me in is the process of knowledge-building, the ways in which framing questions and engaging in logical thinking and methodical observation of a piece of the world can help us learn quite unexpected things about that world’s workings. I am intrigued by the power of this process, by the ways that it frequently rewards insight and patience.

What I didn’t really grasp when I was younger but appreciate now is the inescapably collaborative nature of the process of building scientific knowledge. The plan of attack, the observations, the troubleshooting, the evaluation of what the results do and do not show — that all comes down to teamwork of one sort or another, the product of many hands, many eyes, many brains, many voices.

We take our perfectly human capacities as individuals and bring them into concert to create a depth of understanding of our world that no heroic scientist — no Newton, no Darwin, no Einstein — could achieve on his own.

The power of science lies not in individual genius but in a method of coordinating our efforts. This is what makes me interested in what science can do — what makes it possible for me to see myself doing science. And I’m willing to bet I’m not the only one.

The heroes of science are doubtless plenty inspiring to a good segment of the population, and given the popularity of heroic narratives, I doubt they’ll disappear. But in our efforts to get people engaged with science, we shouldn’t forget the people who connect less with great men (and women) and more with the extraordinarily powerful process of science conducted by recognizably ordinary human beings. We should remember to tell the stories about the process, not just the heroes.

Incoherent ethical claims that give philosophers a bad rap

Every now and then, in the course of a broader discussion, some philosopher will make a claim that is rightly disputed by non-philosophers. Generally, this is no big deal — philosophers have just as much capacity to be wrong as other humans. But sometimes, the philosopher’s claim, delivered with an air of authority, is not only a problem in itself but also manages to convey a wrong impression about the relation between the philosophers and non-philosophers sharing a world.

I’m going to examine the general form of one such ethical claim. If you’re interested in the specific claim, you’re invited to follow the links above. We will not be discussing the specific claim here, nor the larger debate of which it is a part.

Claim: To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences.

Let’s break that down.

“Doing X has significant ethical consequences” suggests a consequentialist view of ethics, in which doing the right thing is a matter of making sure the net good consequences (for everyone affected, whether you describe them in terms of “happiness” or something else) outweigh the net bad consequences.

To say that doing X has significant ethical consequences is then to assert that (at least in the circumstances) doing X will make a significant contribution to the happiness or unhappiness being weighed.

In the original claim, the suggestion is that the contribution of doing X to the balance of good and bad consequences is negative (or perhaps that it is negative in many circumstances), and that on this account it ought to be a “difficult and emotional step”. But does this requirement make sense?

In the circumstances in which doing X shifts the balance of good and bad consequences to a net negative, the consequentialist will say you shouldn’t do X — and this will be true regardless of your emotions. Feeling negative emotions as you are deciding to do X will add more negative consequences, but they are not necessary: a calculation of the consequences of doing X versus not doing X will still rule out doing X as an ethical option even if you have no emotions associated with it at all.

On the other hand, in the circumstances in which doing X shifts the balance of good and bad consequences to a net positive, the consequentialist will say you should do X — again, regardless of your emotions. Here, feeling negative emotions as you are deciding to do X will add more negative consequences. If these negative emotions are strong enough, they run the risk of reducing the net positive consequences — which makes the claim that one should feel negative emotions (pretty clearly implied in the assertion that the decision to do X should be difficult) a weird claim, since these negative emotions would serve only to reduce the net good consequences of doing something that produces net good consequences in the circumstances.
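The arithmetic behind this point can be made explicit with a toy sketch (the numbers, and the idea of scoring consequences on a single scale, are invented purely for illustration — no consequentialist theory actually hands us such numbers):

```python
# Toy model of the consequentialist point above: if feelings count as
# consequences, required negative feelings about deciding to do X just
# subtract from the total, whatever the other consequences are.
# All numbers are invented for illustration.

def net_consequences(other_consequences, feelings_about_deciding):
    """Sum good (+) and bad (-) consequences, counting the decider's
    feelings as consequences like any other."""
    return sum(other_consequences) + feelings_about_deciding

# Case 1: doing X is net positive on its other consequences (+5).
# Mandating anguish (-2) only shrinks the net good of the right act.
without_anguish = net_consequences([+5], 0)   # 5
with_anguish = net_consequences([+5], -2)     # 3
assert with_anguish < without_anguish

# Case 2: doing X is net negative (-5) even with no feelings at all,
# so the calculus already rules it out; the anguish does no ethical work.
assert net_consequences([-5], 0) < 0
```

The sketch is crude, but it captures why requiring negative emotions in all circumstances is puzzling on a consequentialist view: in the cases where doing X is right, the required bad feelings only make the outcome worse.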

By the way, this also suggests, perhaps perversely, a way that strong emotions could become a problem in circumstances in which doing X would otherwise clearly bring more negative consequences than positive ones: if the person contemplating doing X were to get a lot of happiness from doing X.

Now, maybe the idea is supposed to be that negative feelings associated with the prospect of doing X are supposed to be a brake if doing X frequently leads to more bad consequences than good ones. But I think we have to recognize feelings as consequences — as something that we need to take into account in the consequentialist calculus with which we evaluate whether doing X here is ethical or not. And that makes the claim that the feelings ought always to be negative, regardless of other features of the situation that make doing X the right thing, puzzling.

You could avoid worries about weighing feelings as consequences by shifting from a consequentialist ethical framework to something else, but I don’t think that’s going to be much help here.

Kantian ethics, for example, won’t pin the ethics of doing X to the net consequences, but instead it will come down to something like whether it is your duty to do X (where your duty is to respect the rational capacity in yourself and in others, to treat people as ends in themselves rather than as mere means). Your feelings are no part of what a Kantian would consider in judging whether your action is ethical or not. Indeed, Kantians stress that ethical acts are motivated by recognizing your duty precisely because feelings can be a distraction from behaving as we should.

Virtue ethicists, on the other hand, do talk about the agent’s feelings as ethically relevant. Virtuous people take pleasure in doing the right things and feel pain at the prospect of doing the wrong thing. However, if doing X is right under the circumstances, the virtuous people will feel good about doing X, not conflicted about it — so the claim that doing X should always be difficult and emotional doesn’t make much sense here. Moreover, virtue ethicists describe the process of becoming virtuous as one where behaving in virtuous ways usually precedes developing emotional dispositions to feel pleasure from acting virtuously.

Long story short, it’s hard to make sense of the claim “To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences” — unless what is really being claimed is that doing X is always unethical and you should always feel bad for doing X. If that’s the claim, though, emotions are pretty secondary.

But beyond the incoherence of the claim, here’s what really bugs me about it: It seems to assert that ethicists (and philosophers more generally) are in the business of telling people how to feel. That, my friends, is nonsense. Indeed, I’m on record prioritizing changes in unethical behavior over any interference with what’s in people’s hearts. How we behave, after all, has much more impact on our success in sharing a world with each other than how we feel.

This is not to say that I don’t recognize a likely connection between what’s in people’s hearts and how they behave. For example, I’m willing to bet that improvements in our capacity for empathy would likely lead to more ethical behavior.

But it’s hard to see it as empathetic to tell people they should generally feel bad for making a choice that is, under the circumstances, an ethical one. If anything, requiring such negative emotions is a failure of empathy, and punitive to boot.

Clearly, there exist ethicists and philosophers who operate this way, but many of us try to do better. Indeed, it’s reasonable for you all to expect and demand that we do better.