Rob St. Amant
Birthday
December 31
Bio
My roots are in San Francisco and later Baltimore, where I went to high school and college. I stayed on the move, living for a while in Texas, several years in a small town in Germany, and then several more in Massachusetts, working on a Ph.D. in computer science. I'm now a professor at North Carolina State University, in Raleigh. My book, Computing for Ordinary Mortals, will appear this fall from Oxford University Press. http://goo.gl/hQBHy


JULY 24, 2010 11:25AM

Repost Saturday: Birth, death, and the philosophy of science


This post has been sitting around in draft form for a few weeks. It's from January 2, 2009; I'm reposting it now simply to make it visible on my blog.


Ignaz Semmelweis is a famous name in the history of medicine. This is his story, in brief, selectively paraphrased from Carl Hempel's Philosophy of Natural Science. Semmelweis's findings have been described elsewhere on Open Salon, but here we'll review how he reached his conclusions.


In the late 1840s, Dr. Semmelweis oversaw activities in the First Obstetrical Clinic of the Vienna General Hospital. His supervisory duties included teaching medical students and examining patients. He was troubled: the mortality rate for women giving birth in the First Clinic was very high, reaching 10% and even 15% in some years. He was also puzzled: the adjacent Second Clinic, staffed with midwives rather than medical students, managed to keep its mortality rate below 3%. Semmelweis undertook a careful, systematic study to find out why.

He began by ruling out various explanations that were inconsistent with his observations and inferences. Could the problem be overcrowding? No. The Second Clinic was more crowded than the First. In fact, the First Clinic had become so notorious that women preferred to give birth in the street rather than be assigned there--and fewer deaths accompanied those so-called street births than births in the First Clinic. Could it be rougher handling of the mothers by the medical students in the First Clinic than by the midwives in the Second? No. There were differences, but they paled in comparison to the trauma of the birth process itself. Could the problem lie in some general epidemic spread of disease? Semmelweis was working a few decades before the germ theory of disease became firmly established, so it made sense to think of bad air, a miasma, that might affect women's health. But such a miasma would certainly have affected people living in the city outside the hospital, and this was not the case.

Semmelweis also eliminated some possible explanations through experimentation. The most interesting of these had to do with a psychological explanation: a priest would visit the dead and dying in the hospital; he could enter the Second Clinic directly and quietly but could reach the First Clinic only with some fanfare--an attendant preceding him through several rooms, ringing a bell. Could the fear and anxiety provoked by this practice lead to more deaths during childbirth? No. A change to the priest's route and procedures had no effect.

In 1847, Semmelweis's colleague Jakob Kolletschka died; he had been accidentally cut with a scalpel during a dissection in the autopsy room. Semmelweis saw similarities between Kolletschka's death and those of the women in the First Clinic, and he made an important connection. It was common practice for medical students to move from the autopsy room to the First Clinic to do their rounds, washing their hands superficially if at all. (Midwives in the Second Clinic never came near cadavers, of course.) What if...? Semmelweis instituted a new regimen that involved hand-washing with a chlorinated lime solution, and the mortality rate in the First Clinic immediately dropped by a factor of ten.


Hempel, my main source for the information I've recounted above, uses Semmelweis's story to introduce the basics of the scientific method: a reliance on modus tollens, the nature of hypothesis testing, the importance of induction. I'm going to be a bit more expansive in my gloss of this piece of history, to touch on other issues in the philosophy of science.
 
The importance of existing theory. One of the difficulties Semmelweis faced was the lack of an appropriate theory to apply in interpreting his observations. While he could recognize that some of his hypotheses were more plausible than others, he needed to test many, many more ideas than we would today. Every modern schoolchild is told that hand-washing eliminates germs--but we wouldn't know this without the incremental march of scientific progress.

The importance of explanatory mechanisms. A closely related difficulty for Semmelweis was the lack of a mechanism for explaining his success. "Okay, fine, your proposal worked--but why did it work?" Science isn't satisfied with regularities, even predictive regularities; we want an understanding of causal relationships. (For an example from a completely different field, the lack of a plausible mechanism delayed acceptance of Alfred Wegener's ideas about continental drift for about half a century.) It wasn't until Pasteur's time that a sufficient theoretical framework was established to explain Semmelweis's results.

Distinguishing between causal and associative factors. Semmelweis was able to reject some explanations because they were inconsistent with what he observed, but others only by running experiments. One way to think about this distinction is that everything we observe in a given situation is consistent with that situation, associated with it, but only a few factors can be thought of as causing it. It's possible, under some conditions, to draw conclusions about causality without so-called manipulation, but often a manipulation experiment is the most straightforward: we make changes to see whether some factor causes a specific effect--for example, by removing it and seeing whether the effect disappears.
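To see the difference in miniature, here's a toy simulation in Python (entirely my own construction; the variables and numbers are invented, not Semmelweis's data). A hidden common cause makes crowding and mortality move together in observational data, yet directly manipulating crowding shows it has no effect of its own:

```python
import random

# Toy model: hidden "ward conditions" drive both crowding and mortality,
# so the two are associated without either one causing the other.
def observe():
    conditions = random.random()                    # hidden common cause
    crowding = conditions + random.gauss(0, 0.1)
    mortality = conditions + random.gauss(0, 0.1)
    return crowding, mortality

def intervene(crowding_level):
    """Manipulation: set crowding by fiat, independently of conditions."""
    conditions = random.random()
    mortality = conditions + random.gauss(0, 0.1)   # unaffected by crowding
    return crowding_level, mortality

def mean_mortality(samples):
    return sum(m for _, m in samples) / len(samples)

# Observation: crowded wards also show high mortality (an association).
obs = [observe() for _ in range(10_000)]
crowded = [s for s in obs if s[0] > 0.7]
uncrowded = [s for s in obs if s[0] < 0.3]
print("observed difference:  ", mean_mortality(crowded) - mean_mortality(uncrowded))

# Manipulation: forcing crowding high or low leaves mortality unchanged.
forced_high = [intervene(1.0) for _ in range(10_000)]
forced_low = [intervene(0.0) for _ in range(10_000)]
print("intervened difference:", mean_mortality(forced_high) - mean_mortality(forced_low))
```

The observed difference comes out sizable while the intervened difference hovers near zero: the signature of an associative rather than a causal factor. Fittingly, overcrowding is one of the explanations Semmelweis ruled out.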
 
Accounting for chance. Semmelweis was lucky in one respect: his change to how medical students washed their hands produced a dramatic effect. What about a situation in which some change causes only a small effect? When are we justified in concluding that an experiment has given us some insight into a causal relationship? This is part of what's sometimes called the logic of hypothesis testing. Imagine that Semmelweis had run an experiment and seen mortality reduced from 12.5% in one month to 11.0% in the next. Ignoring all his other sources of information (e.g., symptoms), he might say, "But this amount of variation is consistent with what has been observed over the past few years--it might just be due to chance." In general, if chance could plausibly have produced the same effect as some action we've taken, we refuse to attribute that effect to our action. After all, it could have come out differently.
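To make that concrete, here's a little Python sketch (my own illustration; the figure of 300 births per month is invented, since the scenario above doesn't specify one). It simulates two months under an unchanged 12.5% rate and asks how often chance alone produces a drop at least as large as the one in the example:

```python
import random

BIRTHS_PER_MONTH = 300          # invented figure, just for illustration
BASE_RATE = 0.125               # underlying mortality rate in both months
OBSERVED_DROP = 0.125 - 0.110   # the 1.5-point drop in the example

def month_rate(rate=BASE_RATE, births=BIRTHS_PER_MONTH):
    """Simulate one month and return its observed mortality rate."""
    deaths = sum(1 for _ in range(births) if random.random() < rate)
    return deaths / births

# Null hypothesis: nothing really changed. How often does chance alone
# still produce a month-to-month drop at least as large as the observed one?
trials = 10_000
hits = sum(1 for _ in range(trials)
           if month_rate() - month_rate() >= OBSERVED_DROP)
print(f"chance alone matches the drop in about {hits / trials:.0%} of trials")
```

With these made-up numbers, chance reproduces the drop in roughly a quarter to a third of trials--exactly the situation in which our hypothetical Semmelweis should withhold judgment.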
 
Now, not all of us are scientists. But there's interesting evidence that some of what we do in our everyday lives, even in infancy, can be thought of as running experiments, evaluating results, and learning about the world around us. It's worth thinking about.

I've blogged this by request. Following my own dictates on playing expert, I should say that I'm not a philosopher. I've taken courses in logic, the philosophy of science, and even the philosophy of space and time, though, and I regularly talk to philosophers in a professional capacity. I hope I've handled the basics well enough to convey the flavor of a few issues in the area. If I've made any mistakes or oversights, please feel free to correct me or make additions in comments.

Comments

"But there's interesting evidence that some of what we do in our everyday lives, even in infancy, can be thought of as running experiments, evaluating results, and learning about the world around us"
All the time. I used to work in a Montessori school, where it's more obvious--you can watch kids try this action and then that and check the results--but the same process goes on in adults, I think. We are mostly just running actions we know work and rarely tweaking them. It is certainly more efficient, but I wonder how much knowledge is lost, since we only ever go down the paths we've already trod.
As I read this, I'm struck by the role chance played, and plays, in the big events of history. Fascinating.
Hi, Julie. That's interesting that you saw this in real life; not having kids, it's all just theory for me. (I think Piaget is commonly credited with the observation that children are scientists.) Also, you're exactly right that there's a tradeoff between actively solving problems and relying on routines to get through the day. I think most people would be surprised at how much they can do without thinking very deeply about what they're doing. I can drive to work in the morning, for example, and not remember a single thing about it afterwards, even though I was presumably conscious, or mostly so, for the entire trip. And we know that older people whose cognitive faculties have declined tend to do better in familiar environments than otherwise; they find it easier to follow their patterned behavior.

Hey, Steve. It's surprising to me, too. There are the well-known fortuitous inventions and discoveries in history, like Goodyear's vulcanized rubber and Fleming's penicillin in a neglected Petri dish. More generally, though, an awareness of chance and probability really is important in deciding what we can conclude and what we can't.
Distinguishing causation and correlation is central to medicine. Nowadays, all medical students study "evidence-based medicine," in which they learn how to evaluate the quality of a scientific study. Randomization, double-blinding, statistical significance, power, confidence intervals -- these are the bread and butter of scientific analysis. It's interesting to see how our understanding of therapeutics has changed as study designs have improved. Take hormone replacement therapy, for example. Our understanding of the potential dangers of HRT has been revolutionized by studies with sound methodology. Great post, Rob.
Hi, Steve! Thanks for bringing the story into modern times. It's great to learn about the perspective of a real doctor.

I do some experimental work, and fortunately I usually have colleagues in statistics or psychology to talk to about experiment design issues. They're amazingly subtle, sometimes.
Imagine that Semmelweis had run an experiment and seen mortality reduced from 12.5% in one month to 11.0% in the next. Ignoring all his other sources of information (e.g., symptoms), he might say, "But this amount of variation is consistent with what has been observed over the past few years--it might just be due to chance."

On the one hand, I'm a pretty big believer in the power of random. I disagree with Einstein: God does indeed play dice with the Universe.

But, on the other hand, random doesn't rule. Semmelweis could have noted the mortality rate over a longer period, a year say, and if it stayed down even that small bit, I think he could have been satisfied that it was not an effect of chance, which one would expect to have a random distribution, so a chance-induced mortality reduction would have been transient and even reversed in some samplings.

And on yet another hand (say, how many hands has this guy got, anyways?), Kolletschka did cut himself, the scalpel was carrying sufficient bacterial load to infect him, and Semmelweis noticed, drawing a logical, negatable hypothesis. Happily for us, it wasn't negated.

So, I guess that chance must be both courted and accounted for. Remember, we don't learn anywhere near as much when an experiment produces the predicted results as when the results make us scratch our heads and mutter "WTF?".
Hi, John!

Semmelweis could have noted the mortality rate over a longer period, a year say, and if it stayed down even that small bit, I think he could have been satisfied that it was not an effect of chance, which one would expect to have a random distribution, so a chance-induced mortality reduction would have been transient and even reversed in some samplings.

Right. One way to think of this scenario is that when the effect you're looking for is small, you need to draw larger samples for comparison (even if in this case we're talking about a larger number of samples rather than the size of the samples themselves; the details are different, but I think the principle is the same). Power analysis can help with these sorts of judgments.
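For what it's worth, here's what a back-of-the-envelope version of that calculation looks like in Python for the 12.5%-versus-11.0% scenario from the post, using the standard two-proportion sample-size formula (the 5% significance level and 80% power are conventional choices of mine, nothing from the historical record):

```python
from math import sqrt

# Births needed per group to detect a drop from 12.5% to 11.0% mortality,
# using the standard two-proportion sample-size formula.
p1, p2 = 0.125, 0.110
z_alpha, z_beta = 1.96, 0.84     # two-sided 5% level, 80% power
p_bar = (p1 + p2) / 2

n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)

print(f"about {n:.0f} births per group")   # comes out around 7,000
```

It comes out to roughly seven thousand births per group--years of records at a few hundred births a month, which is essentially your point about watching the rate over a longer period.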

Also, even if we know that some factor causes another, it can happen that it's not a deterministic cause, and then we might see the kind of pattern you describe--most of the time something works, but sometimes it doesn't. This is pretty unintuitive, I think, for most of us.
... most of the time something works, but sometimes it doesn't ...

Sounds like some of the things I used to wake up at 3 AM worrying about back in my software days. The more complex a system, the more likely it is to exhibit just such behavior, if only because it gets harder and harder to duplicate conditions as complexity increases. So a larger universe of samples is definitely your friend when you're looking for a pattern in a high-noise system.

Sometimes the root cause of a problem lies far below the apparent proximate cause.
As a former philosophy major, my view is that philosophers would be scientists if they could handle the math. Which I can't.
It's funny, isn't it? How hard even deterministic systems can be to figure out, if they're complex enough. Sometimes computer systems even seem to behave randomly. (I think some philosophers distinguish between apparent and actual randomness, though I'd have to check on that.)

Hi, Con! I'd love to be a philosopher, but I can't handle the abstract thinking, or perhaps it's the scope of thought needed. Something. I've tried my hand at it (through writing papers, at least) and I recognize my limitations. It was much harder than I had expected.
Thanks for your comment, Inquisitive Canuck! That's very interesting. I'm one of those who came across a few news articles about this and ended up with a slightly skewed (so to speak) understanding of the situation.

Also, thanks for the pointer to the CMAJ commentary. I'm often surprised at how accessible the editorials in medical journals can be for the lay reader. It nicely lays out the complexities, touching on statistical, clinical, regulatory, and social issues, even including a bit of the philosophy of science. This line especially caught my eye:

Fortunately, drug regulatory agencies are now forcing us to face the evidence.

That's encouraging to read; I think there's a tendency for experts of all kinds (sometimes I'm in this category too) to say, "Just leave me to do my work." But watchdogs play an important role, even when they're watching us.

Shameless plug: Coincidentally, I happen to have written about a book called Snake Oil Science, by biostatistician R. Barker Bausell. Just in case you might be interested.