
Experimental Research and its Pitfalls

So far we have concentrated on observational research, giving special attention to problems with surveys, because questionnaires are so common. Even undergraduates are sometimes asked to conduct survey research for journalism or business classes. Now we turn to experimental research.

Experimental research is defined by the presence of a manipulation. A manipulation is a change that a researcher deliberately produces in a system.

This conforms to the common meaning of experiment, which is "to make a change and see what happens." Experimental research is sometimes called manipulational research.

The special strength of experimental research is that it can be used to test cause-effect claims. Cause-effect claims are those that assert, "A makes B happen."

What defines experimental research? What is its "special strength"?

The implication is that if you want to change B, you can do it by changing A. The only way to test this claim is to do an experiment: change A and see if B changes.

In any field of study, not just psychology, cause-effect claims are some of the most important and hotly debated. Here are some examples of cause-effect claims in different areas.

Law: "Mandatory minimum sentencing would lower the crime rate," i.e. mandatory minimum sentencing would cause a reduction in crime. Or: "The death penalty doesn't work," i.e. the death penalty does not cause a reduction in violent crime.

Medicine: "Passive inhalation of tobacco smoke can increase risk of lung cancer," i.e. inhaling somebody else's smoke causes lung cancer in some people. Or: "Alcohol consumed during pregnancy can harm the fetus," i.e. alcohol can cause disorders in the unborn baby.

Athletics: "Visualizing yourself making a great performance will increase your chances of making a great performance," i.e. visualization causes great plays. Or: "Stretching before a contest reduces the likelihood of muscle and ligament sprains," i.e. athletes who stretch will suffer fewer injuries.

Economics: "Deficit spending increases inflation," i.e. deficit spending causes prices to go up. Or: "Lowering taxes causes greater economic activity," i.e. cutting taxes causes more spending or production.

Nutrition: "Saturated fats cause heart disease." Or: "Eating eggs causes increased cholesterol levels."

The reason cause-effect claims are so hotly debated is that they have implications for policy. If the death penalty does not work, why have one?

If passive inhalation of smoke causes cancer, should public smoking be allowed? Cause-effect claims raise vital economic and political questions. They influence our decisions on matters ranging from law making to personal health.

Why are cause-effect claims so hotly debated? How can such claims be tested?

The only way to test a cause-effect claim is to manipulate the suspected cause and measure the suspected effects. Like observational research, it sounds simple, but there are many pitfalls: many ways to make mistakes.

Independent, Dependent, and Subject Variables

In experimental research, there is always at least one variable actively changed or manipulated by the experimenter. Generally it is the suspected cause in a cause-effect relationship. This is the independent variable.

To remember this term, think of the independent variable as the one that can be manipulated independently (by the experimenter) while changes are observed in a host of other variables. Or think of the experimenter as an independent force who manipulates this variable, to see what happens.

Other variables are measured to see possible effects of a manipulation. They are called dependent variables. The values of these variables might (or might not) depend upon the manipulation of the independent variable.

The experiment is designed to find out whether such dependencies exist. Using the language of cause and effect, the independent variable is suspected of being a cause, while dependent variables are possible effects. There can be many dependent variables examined by an experimenter: many possible effects of a manipulation.

What are independent and dependent variables? Can there be more than one dependent variable?

Subject variables are a third category of variables commonly found in psychology research. The subject's age, sex, height, and weight are subject variables.

They are not manipulated as part of the research (thus they are not independent variables). They are not measured to see changes after a manipulation (thus they are not dependent variables).

A researcher keeps track of subject variables to see if they bear any relationship to the results. For example, an experimenter might want to find out if effects of a treatment vary with the age or gender of the subject.

What are subject variables?
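To make this concrete, here is a minimal Python sketch, not from the original article, of how subject variables might be recorded alongside a dependent measure and checked against the results. The records and numbers are invented for illustration.

```python
# A sketch (with invented data) of tracking subject variables alongside
# a dependent measure, then checking whether results differ by gender.
from statistics import mean

# Each record: a dependent measure plus subject variables (not manipulated,
# not measured as outcomes -- just tracked).
records = [
    {"score": 82, "age": 19, "gender": "F"},
    {"score": 75, "age": 22, "gender": "M"},
    {"score": 88, "age": 20, "gender": "F"},
    {"score": 71, "age": 23, "gender": "M"},
]

# Do the results bear any relationship to a subject variable?
for g in ("F", "M"):
    scores = [r["score"] for r in records if r["gender"] == g]
    print(f"gender {g}: mean score {mean(scores):.1f}")
```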

A common type of experiment used to gather evidence about cause-effect claims is the two-group comparison. One group, the experimental group, receives a treatment designed to produce some effect (a manipulation of the independent variable).

The other group, the control group, is left alone or given a fake treatment. Data is gathered: dependent variables are measured. Results from the two groups are compared and analyzed to see if the experimental treatment made any difference.

How does a two-group comparison work? What is an experimental group? A control group?
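As an illustration of the ideas above, here is a minimal Python sketch of a two-group comparison. It is not from the original article; the group sizes, baseline scores, and "treatment effect" are all invented for demonstration.

```python
# A minimal two-group comparison with invented numbers. The built-in
# "treatment effect" (+5 points) is purely illustrative.
import random
from statistics import mean

random.seed(1)

# Independent variable: treatment vs. no treatment (the manipulation).
# Dependent variable: a task score measured afterward.
experimental = [random.gauss(75, 10) + 5 for _ in range(30)]  # treated group
control = [random.gauss(75, 10) for _ in range(30)]           # untreated group

# Compare the groups on the dependent variable.
print(f"Experimental mean: {mean(experimental):.1f}")
print(f"Control mean:      {mean(control):.1f}")
print(f"Difference:        {mean(experimental) - mean(control):.1f}")
```

In a real study, the difference between group means would be evaluated with a significance test rather than judged by eye.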

This type of experiment is called a between-subjects design. The comparison is between two or more different groups of subjects.

By contrast, a within-subjects design compares changes within the same subject on different occasions. An example of a within-subjects design is a "before and after" comparison.

In a within-subjects design, each subject serves as his or her own control or standard of comparison. For example, we might evaluate ten different eyeglass designs by giving samples of all ten frames to each of 50 subjects and asking each subject to rank the ten from best to worst.

That is a within-subjects comparison, because each participant makes several judgments, and those judgments are compared. A researcher can collect all the data to get average ratings for each frame.

In a different, between-subjects experiment, we might be interested in seeing whether different personality types pick different frames. For example, does a more extroverted person pick brighter colors? The comparison is between different subjects, so this is a between-subjects design.

What is the difference between a between-subjects and a within-subjects design?
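Here is a hypothetical Python sketch of the within-subjects eyeglass example, with randomly generated stand-ins for real rankings; the frame names and subject count are assumptions for illustration.

```python
# A sketch of the within-subjects eyeglass example: each subject ranks
# all ten frames, so every subject serves as his or her own comparison.
import random

random.seed(2)
frames = [f"frame_{i}" for i in range(1, 11)]

# Each of 50 subjects produces one ranking of all ten frames (1 = best).
rankings = []
for _ in range(50):
    order = random.sample(frames, k=len(frames))  # one subject's ranking
    rankings.append({frame: rank for rank, frame in enumerate(order, start=1)})

# Average rank per frame across all subjects.
avg_rank = {f: sum(r[f] for r in rankings) / len(rankings) for f in frames}
for frame, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{frame}: average rank {rank:.2f}")
```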

Within-subjects designs are subject to practice effects (described earlier). A person being tested repeatedly on an eye chart might remember some answers from the earlier test.

Unless the effects of practice are the focus of research, an experimenter will want to avoid practice effects. This means the same subjects cannot be tested more than once using the same sort of test. If practice effects are a problem, comparisons must be made between different groups of subjects, not by testing the same people repeatedly.

Those Confounded Variables!

In a between-subjects comparison, different groups receive different manipulations. The experimenter tries to create a situation in which there is only one consistent difference between experimental and control groups.

The ideal is to isolate the effects of a single causal variable. (By the way, that word is causal not casual... "causal" is pronounced CAW-zal and means "suspected of being a cause.")

The reason a researcher manipulates a particular independent variable is to find out if that variable causes some effect. The effect, if any, shows up in measurements of the dependent variables.

What is the goal of a simple two-group comparison?

Ideally, an experimenter wants the manipulation of the independent variable to be the only difference between groups. But this ideal is seldom attained.

There are almost always other, unintended differences between experimental and control groups. They are called confounded variables.

The word "confounded" means confused or mingled together. Unintended differences are mingled in with the difference an experimenter intends to create between groups. They are unwanted because they make it impossible to interpret the research.

What are confounded variables? What does "confounded" mean?

One psychology major did not appreciate the importance of confounded variables until he did a research project for an experimental psychology course. His hypothesis was that different types of background music would have different effects on performance of complex tasks.

Subjects tried to solve anagrams (word puzzles) while music played in the background. Group #1 heard the soothing strains of an old Allman Brothers song, "Dedicated to Elizabeth Reed." Group #2 heard an unbelievably jarring and repetitious song: "Brainwash" by the punk band Flipper.

Unfortunately, the psychology student recorded the Allman Brothers song at a higher volume than the other song. During his experiment he noticed that subjects were doing worse while the Allman Brothers song played. This was not what he expected. As he put it:

"Suddenly I realized my experiment was ruined. I could not tell if they were doing worse because of the difference in music or the difference in volume. Then I really understood the concept of confounded variables for the first time." [Author's files]

How did a confounded variable foul up the student's experiment on music and puzzle-solving?

In comparisons between groups, a confounded variable is any difference between groups other than the one an experimenter deliberately creates. A confounded variable frustrates any attempt to pin down a cause-effect relationship.

A confounded variable offers an alternative explanation for any observed differences between the groups. If an effect (a difference between groups) is observed, nobody can tell if it is due to manipulation of the independent variable or the presence of the confounding variable. Therefore researchers try to eliminate confounding (or "confounded") variables whenever possible.
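To see why a confound blocks interpretation, consider this illustrative Python simulation. It is not the student's actual data; all numbers are invented, and in this version volume alone drives performance.

```python
# A sketch of the confound in the student's music experiment: music type
# and playback volume vary together, so their effects cannot be separated.
import random
from statistics import mean

random.seed(3)

def solve_anagrams(volume):
    # In this simulation, performance depends only on volume.
    return random.gauss(20 - volume * 2, 2)

# Group 1: Allman Brothers, accidentally recorded loud (volume = 3).
# Group 2: Flipper, recorded quieter (volume = 1).
group1 = [solve_anagrams(volume=3) for _ in range(30)]
group2 = [solve_anagrams(volume=1) for _ in range(30)]

# The groups differ, but music type and volume are confounded:
# nothing in the data says which one caused the difference.
print(f"Group 1 (Allman Brothers, loud): {mean(group1):.1f}")
print(f"Group 2 (Flipper, quiet):        {mean(group2):.1f}")
```

The group means differ, but nothing in the data distinguishes an effect of the music from an effect of the volume; the two explanations are mingled together.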

