Scientific knowledge and “what everyone knows”.

Those of you who read the excellent blog White Coat Underground have probably had occasion to read PalMD’s explanation of the Quack Miranda Warning, the disclaimer found on various websites and advertisements that reads, “These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure or prevent any disease.” When found on a website that seems actually to be offering diagnosis, treatment, cure, or prevention, PalMD notes, this language seems like a warning that the big change that will be effected is that your wallet will be lightened.

In response to this, Lawrence comments:

This statement may be on every quack website but is on every legitamate website and label as well. Take vitamin C for example. Everyone knows that it can help treat & cure diseases. Vitamin C has been used for centuries to cure disease by eating various foods that are high in it. Even doctors tell you it is good to take when you are sick because it helps your body fight off the disease. So the fact that this statement is required to be on even the most obviously beneficial vitamins pretty much means that the FDA requires a companies to lie to the public and that they have failed in their one duty to encouraging truth in health. Once I realized this, it totally discredits everything the FDA says.

Sure if something is not approved by a big organization whose existance is supposed to safeguard health it makes it easier for the little con artest to step in at every opportunity, but that doesn’t mean that the big con artests arn’t doing the same thing

PalMD’s reply is succinct:

“Everyone knows…”

A phrase deadly to science.

I’m going to add my (less succinct) two cents.

There are plenty of things that people take to be something everyone knows. (The “everyone” is tricky, because there are enough people on the planet that it’s usually, perhaps always, possible to find someone who doesn’t know X.) And I’m happy to grant that, for some values of X, there are indeed many people who believe X.

But belief is not the same as knowledge.

What “everyone knows” about celebrities should help us notice the difference. Richard Gere? Jamie Lee Curtis? Even if everyone has heard the same rumors, the extent of what we actually know is that there are rumors. Our propensity to believe rumors is why the team at Snopes will never want for material.

This is not to say that we have to do all of our epistemic labor ourselves. Indeed, we frequently rely on the testimony of others to help us know more than we could all by ourselves. But this division of labor introduces risks if we accept as authoritative the testimony of someone who is mistaken, or who is trying to sell us snake oil. Plus, when we accept the testimony of someone who knows X on the basis of someone else’s testimony, our connection to the actual coming-to-know of X (through a mode other than someone else’s say-so) becomes more attenuated.

At least within the realm of science, the non-testimony route to knowledge involves gathering empirical evidence under conditions that are either controlled or at least well characterized. Ideally, the effects observed are both repeatable in relevantly similar conditions and observable by others. Science, in its methodology, strives to ground knowledge claims in observational evidence that anyone could come to know (assuming a standard set of properly functioning sense organs). Part of how we know that we know X is that the evidence in support of X can be inspected by others. At this basic level, we don’t have to take anyone else’s word for X; the testimony of our senses (and the fact that others who point their sense organs at the same bits of the world see the same things) gives us the support for our beliefs that we need.

Claims without something like empirical support might inspire belief, but they don’t pass scientific muster. To the extent that an agency like the FDA is committed to evaluating claims in a scientific framework, this means that they want to evaluate the details of the experiments used to generate the empirical data that are being counted as support for those claims. In other contexts, folks may be expecting, or settling for, other standards of evidence. In scientific contexts, including biomedical ones, scientific rules of evidence are what you get.

Why, then, one might ask, might a physician suggest vitamin C to a patient with a cold if there isn’t sufficient scientific evidence to say we know vitamin C cures colds?

There are a few possibilities here. One is that the physician judges (on the basis of a reasonable body of empirical evidence) that taking vitamin C is unlikely to do harm to the patient with a cold. If the physician’s clinical experience is that cold patients will feel better with some intervention than with no intervention, recommending vitamin C may seem like the most benign therapeutic option.

It’s also possible that some of these physicians accept the testimony of someone else who tells them there is good reason to believe that vitamin C cures colds. Being human, physicians sometimes get burned by testimony that turns out to be unreliable.

It’s even possible that some physicians are not so clear on scientific rules of evidence, and that they make recommendations on the basis of beliefs that haven’t been rigorously tested. The more high-profile of these physicians are the kinds of folks about whom PalMD frequently blogs.

Fake journals versus bad journals.

By email, following on the heels of my post about the Merck-commissioned, Elsevier-published fake journal Australasian Journal of Bone and Joint Medicine, a reader asked whether the Journal of American Physicians and Surgeons (JPandS) also counts as a fake journal.
I have the distinct impression that folks around these parts do not hold JPandS in high esteem. However, it seems like there’s an important distinction between a fake journal and a bad one.

A warning for the herpeto-unctuous.

It seems that some people respond to public concern about swine flu and its spread by trying to sell you stuff. This stuff is not limited to face masks and duct tape; it includes products advertised to prevent, diagnose, or treat swine flu whose claims of safety and efficacy have no basis in evidence.
In other words, snake oil.

Facts and their interpretation.

Over at DrugMonkey, PhysioProf has written a post on the relative merits of “correct” and “interesting”, at least as far as science is concerned. Quoth PhysioProf:

It is essential that one’s experiments be “correct” in the sense that performing the same experiment in the same way leads to the same result no matter when the experiment is performed or who performs it. In other words, the data need to be valid.
But it is not at all important that one’s interpretation of the data–from the standpoint of posing a hypothesis that is consistent with the data–turns out to be correct or not. All that matters is that the hypothesis that is posed be “interesting”, in the sense of pointing the way to further illuminating experiments.
I spend a lot of time with my trainees on this distinction, because some of them tend to be so afraid of being “wrong” in their interpretations that they effectively refuse to interpret their data at all, and their hypotheses are nothing more than restatements of the data themselves. This makes it easy to be “correct”, but impossible to think creatively about where to go next.
Some tend in the opposite direction, going on flights of fancy that are so unmoored from the data as to result in hypotheses that are also useless in leading to further experiments with a reasonable likelihood of yielding interpretable results.

I think this is a really good description of a central feature of scientific activity.

Science and belief.

Given that in my last post I identified myself as playing for Team Science, this seems to be as good a time as any to note that not everyone on the team agrees about every little thing. Indeed, there are some big disagreements — but I don’t think these undermine our shared commitment to scientific methodology as a really good way of understanding our world.
I’m jumping into the fray of one of the big disagreements with this repost of an essay I wrote for the dear departed WAAGNFNP blog.
There’s a rumor afoot that serious scientists must abandon what, in the common parlance, is referred to as “faith”, that “rational” habits of mind and “magical thinking” cannot coexist in the same skull without leading to a violent collision.
We are not talking about worries that one cannot sensibly reconcile one’s activities in a science which relies on isotopic dating of fossils with one’s belief, based on a literal reading of one’s sacred texts, that the world and everything on it is orders of magnitude younger than isotopic dating would lead us to conclude. We are talking about the view that any intellectually honest scientist who is not an atheist is living a lie.
I have no interest in convincing anyone to abandon his or her atheism. However, I would like to make the case that there is not a forced choice between being an intellectually honest scientist and being a person of faith.

Audience participation: help me flag good posts for non-scientists trying to understand science.

A regular reader of the blog emailed me the following:

Have you ever considered setting up a section for laymen in your blog where posts related to the philosophy of science, how research is conducted, how scientists think etc. are archived? An example of what I think might be a good article to include would be your post on Marcus Ross.
Part of why I like reading your blog is because you analyze these fundamental issues in science, and I believe that this will help any laymen who stumble upon your blog for the first time quite a bit. It certainly helped me! I had to trawl through tons of posts to get to posts related to these fundamental issues though (not that the other posts are not interesting!).

Does writing off philosophy of science cost the scientists anything?

In my last post, I allowed as how the questions which occupy philosophers of science might be of limited interest or practical use to the working scientist.* At least one commenter was of the opinion that this is a good reason to dismantle the whole discipline:

[T]he question becomes: what are the philosophers good for? And if they don’t practice science, why should we care what they think?

And, I pretty much said in the post that scientists don’t need to care about what the philosophers of science think.

Then why should anyone else?

Scientists don’t need to care what historians, economists, politicians, psychologists, and so on think. Does this mean no one else should care?

If those fields of study had no implications for people taking part in the endeavors being studied, then no, I don’t think anyone should care about them. Not the people endeavoring, nor anyone else. The process of study wouldn’t lead to practical applications or even a better understanding of what was being studied – it would be completely worthless.

Let me take a quick pass at the “why care?” question.

A branch of learning that ‘need not be learned’?

Prompted by my discussion of Medawar and recalling that once in the past I called him a gadfly (although obviously I meant it in the good way), Bill Hooker drops another Medawar quotation on me and asks if I’ll bite:

If the purpose of scientific methodology is to prescribe or expound a system of enquiry or even a code of practice for scientific behavior, then scientists seem to be able to get on very well without it. Most scientists receive no tuition in scientific method, but those who have been instructed perform no better as scientists than those who have not. Of what other branch of learning can it be said that it gives its proficients no advantage; that it need not be taught or, if taught, need not be learned?

Bill’s take is that “scientific methodology” here can be read as “philosophy of science”. So, what do I think?

Resisting scientific ideas.

In the May 18th issue of Science, there’s a nice review by Paul Bloom and Deena Skolnick Weisberg [1] of the literature from developmental psychology that bears on the question of why adults in the U.S. are stubbornly resistant to certain scientific ideas.
Regular readers will guess that part of my interest in this research is connected to my habit of trying to engage my kids in conversations about science. Understanding what will make those conversations productive, in both the short-term and the long-term, would be really useful. Also, I should disclose that I’m pals with Deena (and with her spouse). When a friend coauthors an interesting paper (published in Science), why wouldn’t I blog about it?
I’ll run through the main points from developmental psychology research that the review identifies as important here, and then I’ll weigh in with some thoughts of my own.

What scientists believe and what they can prove (with a flowchart for Sir Karl Popper).

On the post in which I resorted to flowcharts to try to unpack people’s claims about the process involved in building scientific knowledge, Torbjörn Larsson raised a number of concerns:

The first problem I have was with “belief”. I have seen, and forgotten, that it is used in two senses in english – for trust, and for conviction. Rather like for theory, the weaker term isn’t appropriate here. I would say that theories gives us trust in repeatability of predicted observations, and that kind of trust counts as knowledge. In fact, already the trust repeated observations gives count as knowledge.
The second problem I have is with “the problem of induction”. Science has a set of procedures that observably generates robust knowledge, and the alleged problem is seldom seen. When the terrain and the map doesn’t agree, junk the map.
The third problem I have is with the specific diagrams. Real scientific knowledge production will not yield to any one diagram. So for the philosopher that raises a hypothetical “problem of induction” we could turn around the question and ask why the obvious “problem of description” (which ironically is a real problem of induction :-) isn’t bothersome. The scientist answer would probably be as above: “e puor si muove”.
… Without feeling like testability is the end-all of science the diagram is slanted away from testing towards a weaker and in the end nonfunctional descriptive science. Whether we call tested knowledge “a conclusion” or “a tentative conclusion” is irrelevant IMHO, it is a conclusion we will (have to) trust in.
The fourth (oy!) problem I have is with the conflated description the diagram alludes to. In the text there is a distinction between individual scientists and the scientific enterprise. Different entities will obviously use different approaches to knowledge, and if the individual doesn’t need to trust her findings the enterprise relies on such a trust.

These are reasonable concerns, so let me say a few words to address them.
