There are two features of science that I think a lot of people (myself included) find attractive.
One is that scientific representations of the world (theories and other theory-like things) give you powerful ways to organize lots of diverse phenomena and to find what unifies them. They get you explanatory principles that you can apply to different situations, set-ups, or critters.
The other is the empirical basis of much of our knowledge: by pointing your sense organs (and your mind) at a particular piece of the world, you can learn something about how that bit behaves, or about how it’s put together.
Lately, I’ve been thinking about the way these two attractive features can pull a person in opposite directions.
The tension that gets discussed a lot — by philosophers of science and by scientists themselves — concerns what relationship is supposed to exist between scientific representations of the world and the empirical evidence scientists have amassed. Frederick Grinnell [1] explains the crux of the matter quite aptly:
Carrying out an experiment requires one to anticipate what the result will be like and to choose methods suitable to observe them. Stated in another way, the choice of methods limits the results that can be obtained. Results that appear to contradict expectations might indicate either that the hypothesis was wrong or that the choice of methods was inadequate. These alternatives have led to the commonplace adage not to give up a good hypothesis just because the data do not support it.
Henry H. Bauer [2] notes that many (though not all) chemists trust theory more than experiment:
A few years ago, a review article in Science listed many instances in which calculations had been right while experiments had been wrong: for the energy required to break molecules of hydrogen into atoms; for the geometry and energy content of CH2 (the unstable “molecule” in which two hydrogen atoms are linked to a carbon atom); for the energy required to replace the hydrogen atom in HF (hydrogen fluoride) by a different hydrogen atom; and for others as well. The author, H.F. Schaefer, argued that good calculations — in other words, theory — may quite often be more reliable than experiments …
Some people look at claims like this and decide that scientists are horrible hypocrites, people who cling dogmatically to their pet hypotheses in the face of falsifying evidence. The reality is that experiments can be wicked-hard to set up and run. Even if you’ve got a good plan for getting at information about the phenomena, executing that plan successfully can take a lot of practice. Since the experiments usually have more moving parts than the theories (and are much more vulnerable to “butterfingers”), scientists are in the habit of ruling out experimental flubs before they toss out the hypothesis that the experimental result seems to contradict.
In the long run, of course, a theory needs something like experimental result. The way chemists can recognize instances where calculations were right and experiments were wrong is that later experiments agree with the calculations, maybe because researchers figured out a better method of performing the necessary experiments, maybe because the experimentalists just got better at executing the same old protocols. When there's a disagreement between theory and data, scientists usually call for an explanation of the disagreement. Why are we entitled to treat the data with suspicion? What other empirical data might speak in favor of keeping the theory?
There are all manner of complications we could get into here (e.g., holism in theory-testing and Kuhnian theory-laden observations, just to name two), but scientists remain committed to the idea that their theories are accountable to the world they aim to explain.
However, this isn’t quite the tension I’ve been mulling over. Instead, I’ve been thinking about how little scientific indoctrination it takes to shift someone to using theory to answer questions rather than, say, doing a quick experiment.
For example, lots of cookbooks call for boiling vegetables in lightly-salted water, and some of them go so far as to claim that you should add the salt because it raises the boiling point of the water. Adding about a teaspoon of salt (which is around 7 or 8 grams, tops) to a quart of water (which is in the ballpark of 1000 g), the intro chem student can turn to the equation for the boiling point elevation of water to see if this claim makes any sense. 8 g NaCl = 0.14 mol NaCl = 0.28 mol ions in the 1 kg of water. Multiply that by 0.512, the molal boiling point elevation constant for water, and we get a whopping 0.14 °C increase in the boiling point of the cooking water. Having the water boil at 100.14 °C rather than 100 °C hardly seems like it should make a difference to your potatoes … and so, we conclude, the cookbook explanation is probably bogus.
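For readers who'd rather let the computer do the arithmetic, the back-of-the-envelope check above can be sketched in a few lines of Python. The constants are standard textbook values for water and NaCl; the function name and the assumption of ideal van 't Hoff behavior (complete dissociation, i = 2) are mine.

```python
# Back-of-the-envelope check of the cookbook claim, assuming ideal
# van 't Hoff behavior (i = 2 for fully dissociated NaCl).
M_NACL = 58.44      # g/mol, molar mass of NaCl
KB_WATER = 0.512    # degC * kg / mol, molal boiling point elevation constant
I_VANT_HOFF = 2     # ions per formula unit of NaCl

def boiling_point_elevation(grams_salt, kg_water):
    """Delta T_b = i * K_b * m, where m is the molality of the solute."""
    molality = grams_salt / M_NACL / kg_water   # mol solute per kg water
    return I_VANT_HOFF * KB_WATER * molality

delta_t = boiling_point_elevation(8.0, 1.0)
print(f"Boiling point elevation: {delta_t:.2f} degC")  # about 0.14 degC
```

A teaspoon of salt in a quart of water buys you roughly a seventh of a degree, which is the calculation's whole point: nowhere near enough to matter to the potatoes.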
Is it even worth trying an experiment to see if the teaspoon of salt makes a measurable difference in the quart of tapwater we’re boiling?
Other examples surface daily. Someone makes a comment about hot water making ice cubes faster than cold water. Someone else, familiar with thermodynamics, explains in detail why this cannot be the case. No actual ice cube trays risk harm, since none are ever deployed in resolving the dispute.
I loves me some thermodynamics. But, why not clear some space in the freezer to do a side-by-side comparison of the ice cube tray filled with hot water and that filled with cold water? Doing an experiment certainly doesn’t preclude making a confident prediction of the outcome from the theory. And, in the event that the results don’t turn out the way you predicted they would, it might help you notice a way that the real system departs from your assumptions about it.
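Making the confident theoretical prediction before opening the freezer might look like this: a deliberately crude Newton's-law-of-cooling model. Every number here is my assumption (a −18 °C freezer, the same lumped cooling constant k for both trays) and it ignores evaporation, convection, frost contact, and supercooling — exactly the idealizations a real side-by-side experiment might expose as wrong.

```python
import math

# Toy Newton's-law-of-cooling prediction for the ice cube tray face-off.
# Assumes T(t) = t_env + (t_start - t_env) * exp(-k * t), with the same
# cooling constant k for both trays; no evaporation or convection.

def time_to_freeze_point(t_start, t_env=-18.0, t_freeze=0.0, k=0.05):
    """Time (in units of 1/k, here minutes) for water starting at
    t_start degC to cool to t_freeze degC in an environment at t_env."""
    return math.log((t_start - t_env) / (t_freeze - t_env)) / k

hot = time_to_freeze_point(80.0)   # tray filled from the kettle
cold = time_to_freeze_point(20.0)  # tray filled from the tap
print(f"hot/cold time-to-0-degC ratio: {hot / cold:.2f}")  # about 2.3
```

So the naive theory says the cold tray should win handily, which is precisely why a contrary experimental result would be informative: it would point at whichever of the model's assumptions the real system violates.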
I’ve been thinking about recourse to theory versus recourse to experiment a lot lately because I have children who ask lots of questions about how various pieces of the world work. My scientific training got me in the habit of making theory my first stop and doing back of the envelope calculations when necessary. My offspring are much less satisfied with answers from theory than they are with actually seeing what happens. After the experiment, they’re happy to listen to a theoretical explanation of the outcome (within reason, of course — they’re still young). But before they’ve seen what the outcome of the experiment is, theory does not move them.
_____
[1] Frederick Grinnell (1999) “Ambiguity, Trust, and the Responsible Conduct of Research,” Science and Engineering Ethics, Vol. 5, Iss. 2, 205-214; p. 207.
[2] Henry H. Bauer (1997) “The So-called Scientific Method,” in John Hatton and Paul B. Plouffe (eds.), Science and Its Ways of Knowing. Prentice-Hall, 25-37; pp. 26-27.
This is a great point — we often say that a particularly surprising experimental result needs to be “confirmed by theory.” Deep down, scientists don’t operate according to a naive falsificationist model, but by some sort of coherence theory of truth. The whole picture has to fit together. Since your children have a less full picture, they are understandably more interested in direct empirical testing. (It goes without saying that theories must adapt to surprising experimental results if they are repeated with high confidence.)
Contrariwise, one often sees theories with claims to grandiose explanatory power — especially from crackpots. It’s hard to explain to non-experts why these ideas aren’t worth the effort to debunk, even though their claims are so impressive. They just don’t fit into the wider scheme that we trust, and we can’t spend all of our time finding flaws in bad ideas. It’s a matter of scientific intuition, which is what makes the practice of science closer to an art than a science.
Good Experimentalists Never Grow Up
Janet Stemwedel over at Adventures in Science and Ethics has a new post on experiment vs. theory: Someone makes a comment about hot water making ice cubes faster than cold water. Someone else, familiar with thermodynamics, explains in detail why…
For the ice freezing, see http://en.wikipedia.org/wiki/Mpemba_effect
One winter the scientists at the National Research Council in Ottawa decided to do this experiment for the media. As I recall (I doubt it was ever published, and I didn’t do the experiment myself), the trays were filled with hot and cold water and put outside to freeze (the normal winter temperature in Ottawa being particularly conducive to this). They found, again trusting my memory, that the hot water did freeze slightly faster, because more of it quickly evaporated, leaving less to freeze.
I for one would absolutely encourage the youngsters to check and see for themselves if hot water in the ice cube tray freezes faster than cold. I have tried this experiment, but in my case I used a walk-in freezer and observed the freezing process continuously albeit uncomfortably. I think they have the right idea … good experimentation trumps theory. Naturally, any experimentation should be parent approved and supervised.
Afterwards, you can encourage them to think about what happened. At first glance the problem seems to involve only a temperature difference … but other factors may be involved too: the geometry of the ice tray, differences in the amount of water, the purity of the water, supercooling, and differences in the evaporation, convection, and conduction of heat between the samples. Some of these could be varied and the experiment repeated (e.g., putting the trays on a bit of styrofoam, or adding more water to one). But others can be saved for another day.
I saw a posting once in which somebody actually did the experiment on freezing hot water. The result?
The hot water freezes more quickly. Why? Because a lot of it evaporates, so you end up actually freezing a smaller mass of water than if you started with colder water.
However, when the experiment was repeated with covered containers, the colder water froze first.
Lesson? There are sometimes confounding variables that you may not have considered.
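The size of the evaporation effect described above can itself be bounded by theory. Here is a deliberately generous estimate (my own sketch, not the commenter's): if every joule of the cooling from 80 °C down to 0 °C were paid for by evaporation — an overestimate, since conduction and convection carry heat away too — the evaporated fraction is f = c·ΔT / L_vap. The constants are standard values for water.

```python
# Upper bound on the evaporation effect: assume ALL of the cooling from
# 80 degC to 0 degC is supplied by evaporation (a deliberate overestimate).
# Evaporated mass fraction f satisfies f * L_vap = c * delta_t.
C_WATER = 4186.0   # J/(kg*K), specific heat of liquid water
L_VAP = 2.26e6     # J/kg, latent heat of vaporization of water

def max_evaporated_fraction(delta_t):
    return C_WATER * delta_t / L_VAP

f = max_evaporated_fraction(80.0)
print(f"at most {f:.0%} of the hot water evaporates")  # at most 15%
```

Losing at most a seventh of the water is a real but modest head start, consistent with the covered-container result: block the evaporation and the effect largely goes away.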
This experiment has been performed many times Ahcuah – sometimes the cold water freezes first, sometimes the warm. The point being that what appears to be a simple question is actually quite complex and involves many variables (including the one you mentioned); so that different experimental arrangements can lead to one result or the other.
With the ice freezing experiment, lots of research has been done on the Mpemba effect. My food chemistry students a year ago tried to repeat it and model it, mostly unsuccessfully, in an ice cream maker.
It isn’t just to do with evaporation of the water in the ice cube tray. In a freezer, the hot tray also melts the frost beneath it, giving the tray closer contact with the colder freezer shelf. There is also the effect of convection currents in hot water vs. cold.
It is a very interesting problem where theory has to be reconsidered after looking at the experimental data.
The kids have it right. The vast majority of humans learn by DOING, and then thinking about the result. But the DOING gets them involved in the problem enough to be interested in the outcome.
As I have watched math education of our masses over the past several (well, many) years, it seems to be most effective with those who “get their hands dirty” by continually manipulating quantities, or proxies, and having a vested interest in the outcome. Back at the dawn of creation, when learning math facts involved a lot of rote, many students “discovered” commutativity, associativity, and distributivity by practicing with real quantities, comparing problem solutions with peers, and figuring out why slightly different methods gave the same answer.
Today, we seem to relegate manipulatives and practice to the lower grades (if they are used at all), and if a student is not paying attention, by the time the student’s lack of understanding comes to light in middle school we’ve moved past manipulatives to theory. Then we say it over and over, SLOWER AND LOUDER, while the student still has developed no “real life” experience with the problem at hand.
We cop-out by insisting that students buy/use calculators from early on, so that math becomes a test of typing accuracy.
I’ve always thought that remedial math in middle school would go further toward developing usable math skills if the course were organized around the math needed to buy, maintain, and feed a car, using newspapers and bank flyers as source material, with the math textbooks relegated to the bookshelf for student “research” into how to calculate the actual cost of a loan from the rate given.
There have been wonderful pockets of educational experimentation, such as the post-Sputnik PSSC physics, where theory flowed from lab experience. These always seem to be abandoned long before the experiment is repeated on a large enough scale (with enough control over the delivery, i.e., the lab protocol) to measure any long-term effect on outcomes. Apparently textbooks on this methodology are still available (a review from 1991 nicely describes the history and methodology of the PSSC physics text).
We lose so many who might have developed science and math potential AND interest by teaching as though scientific thought and learning flows only from theory, not from experiment. While experimentation is messy* (sometimes it “works”, other times not), it is still the best way for students to actually get involved in the process of science.
But for all students to learn science by experimentation requires well-trained teachers dedicated to their scientific discipline, a luxury most high schools, not to mention elementary and middle schools, are without.
* We recall your summary of 4 weeks at the NJ Governor’s School for the Sciences: “If it’s green and slimy, it’s biology; if it smells bad, it’s chemistry; if it doesn’t work, it’s physics”.
In the long run, of course, a theory needs something like experimental result.
I think this is crucial, and is exactly right except for the “something like”.
I must say, I’ve never heard a working scientist (who wasn’t joking) say anything like “(don’t) give up a good hypothesis just because the data do not support it”. The data are what they are, and you only go so far into “the long run” — that is, you only keep looking for ways the data could be an artefact, or include unknown variables, or whatever — before you take a hammer to the original hypothesis. At least, that’s the way I learned to do science.
In re: Sean’s comment upthread, I don’t think day-to-day science pays much attention to naive falsificationist models OR coherence theories of truth. (Paging Peter Medawar…) As a practical matter, I think most scientists fall somewhere between those two extremes: we only cling to our beautiful theories for as long as it takes the ugly facts to destroy them. The closer one is to the falsificationist end of the spectrum, the fewer ugly facts one insists on having before altering the model.
“The map is not the terrain.”
Theory and reference books represent our best-to-date maps of reality. Sometimes the maps are incomplete or even inaccurate around the edges, but they do get better over time. The better maps can tell us what prior “expeditions” have found as the “ground truth”, even when we’re stymied by the foliage of “confounding events”.
But for all of this, the map is merely a representation and summary of our knowledge. The real world takes precedence, and is not bound by our opinions. The scientific mentality respects this; the scientific method implements it; the scientific worldview (materialism) enshrines it.
And yet… it’s hard to keep that in mind, because so much of our own minds consist of, more or less, maps. Maps of the world outside, models of other humans, theories about unhuman-but-complex things, social scripts and dominance charts, and many other abstractions are a good deal of how we perceive the world around us. It’s a wonder that we can conceive of a fundamental reality, independent of our forest of mental constructions. But hey, that’s why science is hard for most folks, and why relatively few people have the talent for it.
As a student of computational chemistry, I do want to caution that calculations of chemical properties and structures always have to be taken with a big pinch of salt. With the large array of canned software packages out there, it is all too easy for someone to apply an inappropriate level of theory to a problem of interest. No one is going to believe you if you do a semiempirical geometry optimization of methylene (:CH2) and then loudly proclaim that the experimentally determined bond lengths and bond angle are wrong.
For the teacher (and I guess for Galileo), the challenge is to show observers with experiments or demos that their “theories” are wrong. Discrepant events are great for helping knock misconceptions from students’ minds. It does take repeated applications, though. Some ideas just want to stay stuck in the mind.
That’s why we scientists will forever have such a tough job convincing dissidents that evolution is a valid theory. They will never “believe” until they see a demonstration of one organism evolving into another. (Well, I know viruses qualify, but I’m not too sure anti-evolution people accept viruses as evidence. They’re not “biblical,” you know.)