Wednesday, July 18, 2007

The Importance of Comparison

` I welcome all who come here to my critical thinking and science blog, and encourage my readers to comment on any of my posts.
` The only downside to this is that some visitors probably won't be familiar with how critical thinking and science really work. I know for a fact that most people don't quite understand these things and even worse, many are intimidated by them!

` If you will indulge me, I'd like to try writing about why we have scientific methodology:

` Human beings are able to come up with any idea they like to explain any phenomenon they choose; such is our abstract nature. The problem is, there can be many sensible-sounding explanations for the same thing - and these different explanations can contradict one another!

` Even worse, it can be easy to 'prove' many of them right, because quite often we find whatever we're looking for as evidence to support our ideas: As long as we ignore anything that goes against our idea, it is easy to convince ourselves that we are correct even when we may not be - and even when other ideas actually make more sense than our own!

` I'm sure most of you know exactly the type of thing I'm talking about.

` Just for kicks, let's say you have this idea that cats are vile, nasty creatures. Therefore, every time you see a cat hiss and spit at someone, you might think to yourself: "Good-for-nothing cats!"
` But what if you believe that cats are quite nice by nature? Seeing the same cat hissing and spitting would have you thinking: "Aw, the poor kitty feels threatened!"
` Needless to say, that's why two people can see the same thing and have two different opinions about what is going on!

` The fact that more than one interpretation is possible for one event is arguably the main reason for the scientific method: You can make all the observations you like, but that is only the first step in figuring out what is actually going on.

` If you have two potential explanations for something - hypotheses - and you find just as much evidence for either one, how can you decide on one over the other? For a common example, let's say your car won't start, and you figure it's because the battery is dead. So, you install a new battery and it starts up just fine.
` But, you ask, what caused the battery to go dead to begin with? One hypothesis is that there was a problem with the battery. Another hypothesis is that your voltage regulator went haywire at some point and drained your battery.
` How can you tell which one is right just by thinking about it? Face it, you're stuck! The only way to resolve the issue is to try to falsify each hypothesis - in other words, do your best to prove them wrong! In this case, examine the dead battery and the voltage regulator, and run the engine to see if the battery goes dead again.
` Of course, you can't just try proving one wrong, because proving one wrong does not automatically make the other right; the real explanation could be something you didn't even think of! (Perhaps one of your map lights was on all night? What about the alternator?)
` Clearly, you must try to prove both of them wrong! And if you find nothing wrong with either the old battery or the voltage regulator, well, let's just say you may find that cars are very complicated things.

` In a way, science is all about creating a large number of hypotheses with our big imaginations (and our ability to make logical inferences) and then 'weeding out' the wrong ones: Most of the ideas scientists think up in fact do turn out to be wrong.
` Though one cannot actually prove anything right, at least the hypotheses that are left are most likely to be true!
` So, how does one go about this 'weeding' business?

` Just the other day, I was reading a bit of Keith Stanovich's How To Think Straight About Psychology. It gave me a few ideas about how to explain this:

` Stanovich says that one thing you need to understand is the importance of comparing one thing to another. If we are looking for patterns in the world, we cannot rely on one isolated event. In other words, if you're walking down the street and see a cat flying through the air, does it make sense to assume that you'll be likely to see this again, or that any cat could fly?
` Of course not, you say!
` That's why we must be careful of jumping to conclusions with our interpretations of events. A bit of further investigation might show that the 'flying' cat was actually flung from the window of a nearby house by a person who is of the opinion that cats are nasty and vile!

` So, the way scientists find patterns is - drumroll, please! - observe a lot of instances of the same type of thing. That way, they can compare all their data and thus have a better idea of whether or not something is particularly unusual. It's a commonsense thing - really, it's the best way to figure out what to expect from the world.
` Patterns, in other words.
` The way scientists generally do this is to create (or find) very similar situations to observe, so that the events they are comparing really are comparable; apples to apples, rather than apples to oranges.
` Within those confines, the difference that one change makes is more noticeable. Also, by isolating events from various types of influences, you narrow the possibilities of what can happen.
` This is referred to as control, which Stanovich notes is the second main thing one needs to understand about scientific thinking.
` A typical example is lab experiments with hapless rodents. Let's say we have sixty lab rats of a particular genetic strain, all of which have a problem: someone has severed a nerve in the left hind leg (which the rats are probably thrilled about). So, they're all very similar in that way. They are also similar in that they all live in the same kind of cage and eat the same amount of the same type of food.

` What we have here is a controlled situation in which the only real hypothesis that would explain any healing of the rats' nerves is that they healed by themselves. Not much chance of any interference, is there?

` That is, unless those people in white lab coats did their own interfering: In this experiment, twenty of the lab rats are left alone, twenty are injected with Drug X, and twenty more are injected only with saline solution (which basically does nothing).
` Why would scientists pump a third of the rats full of IV fluid? Because the mere act of injecting the rats with something has an impact in itself! By injecting one group with the drug and another with an inert substance, they should be able to tell whether injecting the drug does anything beyond the act of injection alone.
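` For the curious, here is a toy simulation in Python of that three-group design - all the numbers are invented purely for illustration. The meaningful comparison is Drug X against saline, not Drug X against the untouched rats, because that is what separates the drug's effect from the effect of merely being injected.

```python
import random

random.seed(42)  # fixed seed so the toy results are repeatable

def recovery(base, boost):
    """An invented recovery score in [0, 1]: baseline plus treatment boost, with noise."""
    return min(1.0, max(0.0, base + boost + random.gauss(0, 0.05)))

# Twenty imaginary rats per group. The saline group gets a small 'injection
# effect' boost; the drug group gets the injection effect plus the drug effect.
groups = {
    "untouched": [recovery(0.5, 0.0) for _ in range(20)],
    "saline":    [recovery(0.5, 0.05) for _ in range(20)],  # injection alone
    "drug_x":    [recovery(0.5, 0.35) for _ in range(20)],  # injection + drug
}

means = {name: sum(scores) / len(scores) for name, scores in groups.items()}
for name, m in means.items():
    print(f"{name:10s} mean recovery: {m:.2f}")

# The comparison that matters: drug vs. saline.
drug_effect = means["drug_x"] - means["saline"]
print(f"effect beyond injection alone: {drug_effect:.2f}")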

` This is especially important in medical studies using human beings; taking a fake medication or even a fake dietary supplement can have drastic effects on one's well-being. It can not only make pain go away, but it can even make one's condition improve! And yet, the person has actually swallowed nothing but sugar or saline!
` That's called the placebo effect, if you've never heard of it. Now, the main reason I refer to Stanovich in particular is because he brought up a very good example of what happens when you're not good enough at creating comparable groups:
` It is the case of the portacaval shunt - a device which lowers blood pressure in the liver - recommended at the time for treating cirrhosis. Many doctors (and patients) swore by it, and it was quite popular until the mid-1960s.

` What happened?

` In 1966, a pattern was found among the various studies that had demonstrated the shunt's effectiveness: the conditions were not very well narrowed down.
` Many of the studies had no control group, so there wasn't anyone who thought they had the shunt implanted when they didn't. Among those studies, 96.9% were judged to show that the shunt was at least moderately effective.
` Some other studies did have a control group, though the patients were not assigned randomly to each group. Since people aren't as alike as lab rats, it's important to assign patients to treatment randomly, so that similar patients tend to wind up in both groups - preventing selection bias.
` In other words, if the people selected for the 'real' treatment have a lot of help and support from their families or are chosen specifically because they are 'good candidates', that's not very random is it? You're just rounding up the ones that have a better chance!
` In fact, it seems this really did happen, because in 86.7% of these studies the shunt was deemed at least moderately effective. And yet, in the studies that employed random assignment and a control group, doctors found that only 25% showed at least moderate effectiveness.
` That's not nearly enough evidence to show that there's any more to the shunt than a placebo effect, or the result of its being implanted in people who had an advantage to begin with - so it stopped being used for treating cirrhosis.
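` If you'd like to see the selection-bias problem in miniature, here is a toy simulation (every number invented): the 'treatment' below does absolutely nothing, yet hand-picking the healthiest patients for it makes it look impressively effective, while random assignment correctly shows no difference.

```python
import random

random.seed(0)  # fixed seed so the toy results are repeatable

# 1000 imaginary patients, each with a baseline 'health' score from 0 to 1.
patients = [random.random() for _ in range(1000)]

def improved(health):
    # Outcome depends only on baseline health - the 'shunt' itself does nothing.
    return random.random() < health

def improvement_rate(group):
    return sum(improved(h) for h in group) / len(group)

# Biased design: hand-pick the healthiest half as 'good candidates' for the shunt.
patients.sort()
control_biased, shunt_biased = patients[:500], patients[500:]
biased = (improvement_rate(shunt_biased), improvement_rate(control_biased))

# Randomized design: shuffle first, then split down the middle.
random.shuffle(patients)
control_random, shunt_random = patients[:500], patients[500:]
randomized = (improvement_rate(shunt_random), improvement_rate(control_random))

print("biased:     shunt %.2f  control %.2f" % biased)
print("randomized: shunt %.2f  control %.2f" % randomized)
```

` The biased split shows a large shunt-versus-control gap; the randomized split shows essentially none - even though the 'treatment' is identical (and useless) in both cases.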

` So you see, if the scientists involved hadn't realized that the only studies that had shown good results were ones without proper controls - and thus with poorer comparisons - medical doctors would have continued installing the portacaval shunt in a situation where it wouldn't have done anything!
` More than that, if other, similar mistakes had not been caught in medical science... well, I personally don't care to think about that! But as I've said, the scientific method is to help us figure out which hypotheses don't make sense.
` In this case, the hypothesis that the portacaval shunt helps treat cirrhosis is one that doesn't make sense (though it has been shown effective in treating other disorders). The only really convincing explanation for why so many doctors held this hypothesis is that it was based on flawed studies.

` Scientists (especially medical scientists) have a lot of pressure on them just to make proper comparisons, don't they? It's easy to be wrong in this world. And, if anyone is going to get good at being objective, this is one thing they have to do.
` It is also one thing that separates science (or even simply critical thinking) from something that isn't. I find this to be particularly evident for things that tend to shock and confuse people: Say you're walking through a field one day and come across a dead cow whose eyes, mouth, tongue, udders and sex organs seem to have been removed with surgical precision.
` Disturbing reactions aside, the question you ask is "Why?"
` One can come up with any number of hypotheses for what has happened to the cow. I know this because it's been done.
` First of all, you must make comparisons: Is this somewhat unsettling sight very common? As it turns out, it is: This is merely what happens when animals such as crows and maggots eat out the softest tissue of an animal carcass. (Cow hide is particularly tough to poke a hole through, so they just eat what they can get.)
` Then, typically, the carcass swells up to the point where the edges of the holes are stretched until they appear even more clean-cut than they did at first.
` In other words, this is a fairly normal condition of the remains of a cow (or other large animal) that has died of natural causes - something that just happens occasionally.

` Many other people don't take that view, however, commonly preferring to believe that extraterrestrial beings have been picking up cows all around the world, cutting certain parts of them out, and for some odd reason, dropping the carcasses back in the field where the cow once lived.

` Sure, I suppose the evidence seems to fit both explanations equally well - especially to people who think cows with missing soft parts are unusual - but seeing as this has spurred some people to actually watch these same changes happen to animal carcasses left out in a field, is there anything left to explain?
` In this instance, though, no hypotheses were technically proved wrong - instead, this is a case where simple and straightforward explanations tend to make more sense than ones in which all kinds of speculative and unnecessary factors are introduced. I'll have to write about that some other time.
` The case of the Swiss cheese cow, of course, is a bit more complicated than that: Nevertheless, I think it's a good - if morbid - example to leave you with of something that spurred two very different hypotheses.


` ...One more thing; if anyone is curious, a similar study to the one I described involving rats with severed nerves was actually carried out. The drug used was a chemical that stops scar tissue from forming, and the rats injected with it had a near-complete recovery while the other two groups only had a fairly good recovery.
` So, the scientists pinpointed that scar tissue (at least in rats, and probably in humans) seems to prevent nerves from healing.

30 comments:

Charles said...

Yay! Your first post (hopefully of many) is up. Congratulations. I have enjoyed reading it.
I will ask a question not related to the content, though. Why does each paragraph begin with a grave mark?

Anonymous said...

S.E.E. Quine:

This post is remarkably clear and logical. I truly appreciate the effort you put into it. I am going to direct a lot of my friends here and cannot wait for future posts from you. Again, a truly remarkable first-time effort. Congratulations.

A+ :)

Lance Osadchey

Mercury said...

S. E. E. Quine:

Your "battery" example does smack of the critical realist Karl Popper and is a widely accepted and viable approach to scientific methodology and epistemology--"falsifying statements". There are, however, alternative perspectives, and one in particular is the analytic philosopher Willard Van Orman Quine, who is best known for his attack on the logical positivists' separation of "analytic truths" and "synthetic truths". The "analytic truths" are self-evident and true by definition of the words employed, such as the famous example "all bachelors are unmarried". [Wittgenstein would love this.] The "synthetic truths" are supported by empirical evidence--the "facts" if you like.

Here is his paper "Two Dogmas of Empiricism":

http://www.ditext.com/quine/quine.html

"So, the way scientists find patterns is - drumroll, please! - observe a lot of instances of the same type of thing. That way, they can compare all their data and thus have a better idea of whether or not something is particularly unusual. It's really a commonsense thing - really, it's the best way to figure out what to expect from the world." This is common sense epistemology and often called a "statement drawn from historical regularity" such as the sun always rising in the East.

S. E. E. Quine said...

` Charles: Good to know I haven't disappointed anyone so far!
` The reason for the grave mark is paragraph indentation; Blogger does not allow typing spaces at the beginning of paragraphs.

` Lance: Looks like my three or four days spent writing this has paid off!

` Mercury: Thanks for your illuminations! I'll definitely read what that other Quine has to say.

Mercury said...

S. E. E. Quine:

"That is, unless those people in white lab coats did their own interfering...." This is a very true and problematic experience for "bias" is very real in the laboratory...intentional or not. That is why there is the relevancy of the "scientific method"; the ability to duplicate the experiment by other researchers and the need for "peer review". In case there are those that would like to know more about the "scientific method" here are some links at:

http://www.worthysciencesources.com/page11.html

Mercury said...

S. E. E. Quine:

More on scientific bias:

An epistemological question: Is there personal bias by scientists in analyzing a set of experiments? There is a recent book review ["The Evidence for the Top Quark: Objectivity and Bias in Collaborative Experimentation" by, Kent W Staley] at "physicsweb" by Bill Carithers discussing the interesting search for the sixth component of the Standard Model of particle physics at Fermilab.

"I was particularly interested in Staley's examination of possible bias in the methodology and how the CDF collaboration dealt with it. When particle physicists try to find a particular set of events among the trillions of collisions that occur in an accelerator, they have to focus their search by ignoring data outside a certain range. In the case of the top quark, the CDF physicists knew that they could select their data in two different ways. Although both approaches were valid, the one they chose turned out to produce a stronger signal."--Bill Carithers

From: physicsweb

"The top quark: an unbiased tale" by, Bill Carithers

http://www.physicsweb.org/articles/review/17/10/1

Book:

"The Evidence for the Top Quark: Objectivity and Bias in Collaborative Experimentation" by, Kent W Staley

ISBN: 0521827108

And:

It is a difficult question for sure, for "bias" will color any result even if it does turn out to be of value. I suppose a group of scientists would weigh the probability of the "best" methodology to use in an experiment...the one that would yield the most or most accurate data: Quantitatively and qualitatively. But even that may be "bias". The majority rules in science too. Things in addition to a "good guess" on the best methodology would include serendipity, colleague consensus, previous experiments and data, external pressures from non-scientific sources, and a host of personal traits. We would hope that the "peer review" process would consist of a body of experts of a heterogeneous enough mix that "bias" would be an irrelevant issue. "Bias" may well be neutral but the results may be one sided.

Finally:

A spooky situation for a rush to print for whatever reasons may just produce bad science.

From: Science Daily

"Most Published Research Findings May Be False"

http://www.sciencedaily.com/releases/2005/08/050831071025.htm

Kingcover said...

In your experience with scientific studies, experiments, theories, etc., have you been in many discussions and/or read any articles where someone who believed their point of view about something was correct has been persuaded to the completely opposite standpoint? Have you ever done a 180 on something yourself in the past? :)

Mercury said...

kingcover:

I assume you are addressing me? If so, the answer is that researchers that I have encountered [and this includes philosophers] are extremely reluctant to relinquish their research or perspective. The human psyche of ownership of proprietary materials is extremely strong despite the core of opposing evidence either empirical [evidentiary] or logical. Usually, they are dismissed by peers. One case in point was the so-called development of "cold fusion"...room temperature energy source. And there are scores of other examples--just Google "crackpot science".

S. E. E. Quine said...

` Mercury said:

That is why there is the relevancy of the "scientific method"; the ability to duplicate the experiment by other researchers and the need for "peer review".

` Ah, yes, I would have actually gotten into that in this post if it weren't for the fact that it was so long already! Another time, I assure you!
` Bias is a very interesting subject, and very important as well! I shall have to do some more homework and write more on the subject.

` Kingcover: I have heard of many scientists completely reversing their views on things as well, though generally these cases don't stick in my memory.
` Cold fusion definitely is one of them.
` Another one that comes to mind is the time Stephen Hawking discovered that his own idea of information inside of black holes shooting off into other universes is not necessary to make sense of them.
` Black holes, he decided, don't get rid of information from our universe - instead it leaks back out in a mangled form.
` Well maybe, who knows? But he has changed his viewpoint.

` Hmmm. I could probably make a post all about scientists realizing they are very wrong, even holding the polar opposite position than what they later find.
` Maybe I can tie that into other subjects, such as bias?

Kingcover said...

Ummm no Mercury, I was addressing S. E. E. Quine as a matter of fact, but I guess two answers are better than one lol.

Mercury said...

S. E. E. Quine:

Can bias be beneficial? Would a personal slant in research or thinking have a "positive" effect? And how would randomness and accident play a part?

locomocos said...

this blog rawks.

just thought you'd like to know.

Anonymous said...

Bravo, Spoony! I think this may be your clearest, most straightforward post on science/skepticism!
Your examples were really well-placed!

Anonymous said...

I think the economic pressures on the scientist as an individual in this cut-throat capitalist world contribute heavily to bias in both laboratory design and observation. A scientist must be goal directed to keep food on the table, and the rewards for being 'right' are great, even in the most academic of settings.

Anonymous said...

I confess right up front to not reading the entire post. Please don't hate me. I remember in our natural science class, the prof told me that Creationism is not a good theory because it can't be proven wrong. That really threw me for a while and I had to think on it, so you have just helped me tons.

I love to observe things and make a hypothesis, but there is a blurry line between a scientific hypothesis and a mere "opinion". Everyone has opinions, so why do people get in such a bind over what others think and set about arguing themselves blue in the face to change that other opinion? As you said, everyone could have a different take on the same thing, so it's funny that the human race continues on trying to prove themselves right and everyone else wrong.

I know you are coming from a scientific stance here, but in the field of human behaviour, people would be a heck of a lot happier if they stopped worrying about what everyone else thinks. Especially in a marriage, Know what I mean?

S. E. E. Quine said...

` Mercury: Yes, I have heard that being driven by a bias in the face of opposition, messing around with something randomly, or even making a mistake, etc. has often inadvertently led scientists into a new discovery!
` That's the funny thing about scientific discovery - you never know how you'll advance science because that can happen outside of the guidelines!

` Ah, Lou! I know what you are talking about. In fact, I read a survey that directly addressed this problem:
` The findings were that a good number of scientists admitted to cheating to get positive results within the past year just to keep their lab running.
` It's disturbing, I know. I'd imagine it's especially prevalent in the pharmaceutical industry and any lab in the country that Bush Jr. threatens to withhold funding from if he doesn't get his way.
` Thankfully, their messed-up findings are often later discredited, as in the portacaval shunt case. Thus, the importance of things such as independent duplication, which I'll be sure to write about!

` Glad to help, Da Boozie! I know, this 'not being able to prove it wrong' business used to confuse me, too!
` I think that perhaps the main reason people argue over opinions so much is because they often don't know how to argue, nor do they recognize when not to argue, as with an idea that can't be falsified.

` Oh, and thanks, Cassie and Galtron!

Mercury said...

S. E. E. Quine:

Indeed, serendipity does happen in the laboratory and often greases the wheels of commerce where great fortunes can be made. Polytetrafluoroethylene [PTFE] aka "Teflon" and Diphenyldimethyl Siloxane Copolymer aka "Silly Putty" are two accidents that come to mind. Discoveries come by many venues: hardcore science of "cause and effect" and all the grunts [usually graduate students] that do the meticulous and tedious laboratory work, and the accidental.

S. E. E. Quine said...

` Indeed, Silly Putty and I think Super Glue are the two examples that foremost pop into my mind.
` Never underestimate the power of accidents!

Anonymous said...

What if there were thumbs in space and they got really really mad at each other?

S. E. E. Quine said...

` My guess is that they would have Thumb Wars.

Anonymous said...

Readers of this Blog:

It is gratifying to see a blog of this nature. Why? Because it is an attempt to disseminate knowledge. What is knowledge? It probably varies from person to person. It is a complicated subject, as most issues are.

Often when I talk to people of various disciplines who have achieved some status in that area, I hear the big words that they use to impress people, to define what they are talking about, and to show that they are in command of their field. Doctors, lawyers, scientists, musicians, educators, ministers, playwrights, and just about every field has its own language.

Well the philosophers have their own language and when it comes to the field of knowledge they use the word epistemology. That simply means the study of knowledge. And that is what I see happening at this site. Knowledge can be a dangerous thing; it can demystify complex issues. If someone sees a word they do not know, they simply have to go to a dictionary and learn that new word. This is a way to acquire knowledge and it is a dangerous thing because it demystifies and clarifies the word. Once this process starts and the person wants to know more, he or she can simply go to the library, God forbid, and pick up a book on any topic they want and learn. Courses are available at schools and colleges. There are television shows and radio shows devoted to learning.

I guess to sum this up I want to say that anybody can achieve knowledge beyond what they have and that can be a dangerous thing because that person will change and start evaluating things in a different light and that can lead to a change in behavior.

Mercury said...

S. E. E. Quine:

This idea of yours about relevancy of "patterns" of events is somewhat correct, especially in "common sense" knowledge, but falls short when it comes to a solid base of knowledge, even though there is definitely a branch of physics that is concerned with statistics, and this is important when describing and discussing quantum mechanics, where all sorts of weird things happen and do not fall into normal predictable physical analysis. "Common sense" base knowledge is in a statement like "the sun always rises in the East every twenty-four hours"--it hasn't failed as long as man has observed the phenomenon. Attempting to cross a street does contain risk, and the "common sense" base knowledge is that I run a certain probability of being hit. So here is a blend of "common sense" with statistical evidence drawn from my experiences and knowledge of crossing a street. I weigh the time of the day, the busyness of the street, the posted speed limit, my physical dexterity, etc. I have this knowledge and will draw my conclusion based on "common sense". But in the realm of science, save those strange ones from the quantum world, there cannot be an epistemology based on randomness...events are defined in specific causal relationships and obviously subject to the scientific method.

Mercury said...

S. E. E. Quine:

Since the theme of this essay is "comparison" what do you think would be the criteria involved in the "act of comparison"? And, first of all what is the primary goal of "comparison"? What will it yield? Suppose, for example, you were comparing two pieces of fruit [an apple and an orange]. What is the purpose of the "act of comparison"? Was it a decision to determine which one you wished to purchase? And based on what: The memory of the pleasures of eating either one of them based on the senses, one is more nutritious than the other, one is aesthetically more attractive than the other? "Comparison" must have a goal.

In the realm of the sciences the same situation occurs. What would be the ultimate goal of comparing hypothesis "A" with hypothesis "B"? Is it to find the hypothesis that holds the ultimate truth and fulfills all the criteria of the scientific method...or something else?

Your opinion?

S. E. E. Quine said...

` I would tend to think that the main reason for scientific comparison of phenomena is to determine what hypothesis or theory has the best predictive value.
` On the other hand, superstring theory has plenty of ad hoc predictive value, yet it is not falsifiable, so I must be missing something here.

Mercury said...

S. E. E. Quine:

Maybe what you are missing is the fact that not all tools of analysis are applicable everywhere.

Mercury said...

S. E. E. Quine:

"String theory or superstring theory"--makes no difference, for these theoretical theories and the theoretical physicists are closer to philosophy, metaphysics, and even science fiction than true, established methodologies of scientific epistemology. Their statements make the headlines more often than genuine science and are the fuel for hundreds of television shows that foster the likes of a Brian Greene or Stephen Hawking.

"Predictive value"? Yes.

S. E. E. Quine said...

` This is what I hear about string theory, and hence was born a book called Not Even Wrong.
` I hope it can predict something useful one of these days.
` So, I was wondering; does it have much to do with harnessing wormholes?
` That could be useful!

Mercury said...

S. E. E. Quine:

Being facetious? If you subscribe to "wormholes", then you must subscribe to time warps, blackholes, 11 dimensions, parallel universes--aliens. It was my impression, in your striving to be logical and critical, that you have adopted an empirical epistemology or at the very least leaning in that direction and that a belief in the above was a "tongue-in-cheek" proclamation.

"Not Even Wrong"...give credit to Wolfgang Pauli for coining the phrase and taking on the formal critical realism of Karl Popper. Poor scientists in the lab are always bothered by the philosophers of science.

Egads...Peter Woit's "Not Even Wrong: The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics" retails for about $110--USED!

S. E. E. Quine said...

` Weird! I could get that book for a lot less!

` Indeed, I was not being entirely serious, seeing as no one has discovered any wormholes.
` Of course, if we did... well, string theory might be useful, right?

` But what's this about black holes? Have they not been photographed tearing apart stars, or is that a dubious interpretation for what is going on?

Mercury said...

S. E. E. Quine:

Well, of those items that I mentioned [wormholes, 11 dimensions, etc.], "blackholes" are becoming closer to evidentiary status. What is needed is the employment of the scientific method. The evidence is mounting for some phenomena resembling the common understanding of blackholes. Some substantial and novel work has been done by a young woman from California Institute of Technology--Andrea Ghez. [And yes, there is material at WSS: http://www.worthysciencesources.com/page7.html ...links to lectures she has given and her website.]

"Of course, if we did... well, string theory might be useful, right?"

It is very difficult to use the physics of quantum mechanics/string theory in predictions...such physics negate quantification.

[You must be dating the publisher to get a good deal on books.]