Subsection 1.2.1 Sampling Methods
As we mentioned in
Section 1.1, the first thing we should do before conducting a survey is to identify the population that we want to study. Suppose we are hired by a politician to determine the amount of support he has among the electorate should he decide to run for another term. What population should we study? Every person in the district? Not every person is eligible to vote, and regardless of how strongly someone likes or dislikes the candidate, they won’t have much say in whether he is re-elected if they are not able to vote.
What about eligible voters in the district? That might be better, but if someone is eligible to vote but does not register by the deadline, they won’t have any say in the election either. What about registered voters? Many people are registered but choose not to vote. What about "likely voters?"
This is the criterion used in much political polling, but it is sometimes difficult to define a "likely voter." Is it someone who voted in the last election? In the last general election? In the last presidential election? Should we consider someone who just turned 18 a "likely voter?" They weren’t eligible to vote in the past, so how do we judge the likelihood that they will vote in the next election?
In November 1998, former professional wrestler Jesse "The Body" Ventura was elected governor of Minnesota. Up until right before the election, most polls showed he had little chance of winning. There were several contributing factors to the polls not reflecting the actual intent of the electorate:
Ventura was running on a third-party ticket and most polling methods are better suited to a two-candidate race.
Many respondents to polls may have been embarrassed to tell pollsters that they were planning to vote for a professional wrestler.
The mere fact that the polls showed Ventura had little chance of winning might have prompted some people to vote for him in protest to send a message to the major-party candidates.
But one of the major contributing factors was that Ventura drew a substantial amount of support from young people, particularly college students, who had never voted before and who registered specifically to vote in the gubernatorial election. The polls did not deem these young people likely voters (since young people generally have lower rates of voter registration and election turnout), so the polling samples were subject to sampling bias: they omitted a portion of the electorate that leaned heavily toward the winning candidate.
Sampling bias.
A sampling method is biased if not every member of the population has an equal likelihood of being included in the sample.
So even identifying the population can be a difficult job, but once we have identified the population, how do we choose an appropriate sample? Remember, although we would prefer to survey all members of the population, this is usually impractical unless the population is very small, so we choose a sample. There are many ways to sample a population, but there is one goal we need to keep in mind: we would like the sample to be representative of the population.
Returning to our hypothetical job as a political pollster, we would not anticipate very accurate results if we drew all of our samples from among the customers at a Starbucks, nor would we expect that a sample drawn entirely from the membership list of the local Elks club would provide a useful picture of district-wide support for our candidate.
One way to ensure that the sample has a reasonable chance of mirroring the population is to employ randomness. The most basic random method is simple random sampling.
Simple random sample.
A random sample is one in which each member of the population has an equal probability of being chosen. A simple random sample is one in which every member of the population, and every group of members of the same size, has an equal probability of being chosen.
Example 1.2.1.
If we could somehow identify all likely voters in the state, put each of their names on a piece of paper, toss the slips into a (very large) hat and draw 1000 slips out of the hat, we would have a simple random sample.
In practice, computers are better suited for this sort of endeavor than millions of slips of paper and extremely large headgear.
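For instance, a minimal Python sketch of that computerized drawing, using the standard library's random.sample and a hypothetical list of voter names, might look like this:

```python
import random

# Hypothetical list standing in for all likely voters in the state.
voters = ["voter_" + str(i) for i in range(1_000_000)]

# Draw a simple random sample of 1000: every voter, and every possible
# group of 1000 voters, is equally likely to be selected.
sample = random.sample(voters, k=1000)
print(sample[:5])
```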
It is always possible, however, that even a random sample might end up not being totally representative of the population. If we repeatedly take samples of 1000 people from among the population of likely voters in the state of Washington, some of these samples might tend to have a slightly higher percentage of Democrats (or Republicans) than does the general population; some samples might include more older people and some samples might include more younger people; etc. In most cases, this sampling variability is not significant.
Sampling variability.
The natural variation of samples is called sampling variability.
This is unavoidable and expected in random sampling, and in most cases is not an issue. (In
Section 4.2, we will learn one way this effect can be quantified, and we will see why it is usually insignificant.)
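To get a feel for sampling variability, here is a small Python sketch (the population and its 52% support figure are made up for illustration) showing that repeated samples of 1000 give slightly different percentages:

```python
import random

# Made-up population of 100,000 likely voters; exactly 52% support the candidate.
population = [1] * 52_000 + [0] * 48_000

# Draw several independent samples of 1000 and report the support in each;
# the percentages cluster around 52% but vary slightly from sample to sample.
for trial in range(5):
    sample = random.sample(population, k=1000)
    print(f"Sample {trial + 1}: {100 * sum(sample) / 1000:.1f}% support")
```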
To help account for variability, pollsters might instead use a stratified sample.
Stratified sampling.
In stratified sampling, a population is divided into a number of subgroups (or strata). Random samples are then taken from each subgroup with sample sizes proportional to the size of the subgroup in the population.
Example 1.2.2.
Suppose previous data in a particular state indicated that the electorate was composed of 39% Democrats, 37% Republicans and 24% independents. In a sample of 1000 people, pollsters would then expect to get about 390 Democrats, 370 Republicans and 240 independents. To accomplish this, they could randomly select 390 people from among those voters known to be Democrats, 370 from those known to be Republicans, and 240 from those with no party affiliation.
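A minimal Python sketch of that proportional selection, using hypothetical lists of voters with known affiliations:

```python
import random

# Hypothetical lists of voters with known party affiliation (the strata).
democrats    = [f"D_{i}" for i in range(390_000)]
republicans  = [f"R_{i}" for i in range(370_000)]
independents = [f"I_{i}" for i in range(240_000)]

# Sample each stratum in proportion to its share of the electorate:
# 39%, 37%, and 24% of a 1000-person sample.
sample = (random.sample(democrats, k=390)
          + random.sample(republicans, k=370)
          + random.sample(independents, k=240))
print(len(sample))  # 1000
```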
Stratified sampling can also be used to select a sample with people in desired age groups, a specified mix ratio of males and females, etc. A variation on this technique is called quota sampling.
Quota sampling.
Quota sampling is a variation on stratified sampling, wherein samples are collected in each subgroup until the desired quota is met.
Example 1.2.3.
Suppose the pollsters call people at random, but once they have met their quota of 390 Democrats, they only interview people who do not identify themselves as Democrats.
You may have had the experience of being called by a telephone pollster who started by asking you your age, income, etc. and then thanked you for your time and hung up before asking any "real" questions. Most likely, they already had contacted enough people in your demographic group and were looking for people who were older or younger, richer or poorer, etc. Quota sampling is usually a bit easier than stratified sampling, but also does not ensure the same level of randomness.
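A minimal Python sketch of the quota idea (the call list and party labels are hypothetical): respondents are reached in random order, and once a group's quota is filled, further members of that group are skipped.

```python
import random

# Hypothetical call list: each entry is (party affiliation, person id).
call_list = ([("D", i) for i in range(3_900)]
             + [("R", i) for i in range(3_700)]
             + [("I", i) for i in range(2_400)])
random.shuffle(call_list)  # people are reached in a random order

quotas = {"D": 390, "R": 370, "I": 240}
counts = {"D": 0, "R": 0, "I": 0}
sample = []

# Keep each respondent only if their group's quota is not yet met;
# once a quota fills, later respondents from that group are skipped.
for party, person in call_list:
    if counts[party] < quotas[party]:
        sample.append((party, person))
        counts[party] += 1
    if len(sample) == 1000:
        break
```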
Another sampling method is cluster sampling, in which the population is divided into groups, and one or more groups are randomly selected to be in the sample.
Cluster sampling.
In cluster sampling, the population is divided into subgroups (clusters), and a set of subgroups are selected to be in the sample.
Example 1.2.4.
If a college wanted to survey its students, since students are already divided into classes, it could randomly select 10 classes and give the survey to all the students in those classes. This would be cluster sampling.
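A minimal Python sketch of that cluster selection, using hypothetical class rosters:

```python
import random

# Hypothetical rosters: 120 classes of 30 students each.
classes = {f"class_{c}": [f"student_{c}_{s}" for s in range(30)]
           for c in range(120)}

# Randomly choose 10 classes (clusters) and survey everyone in them.
chosen = random.sample(list(classes), k=10)
sample = [student for c in chosen for student in classes[c]]
print(len(sample))  # 300
```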
Other sampling methods include systematic sampling.
Systematic sampling.
In systematic sampling, every \(n^{th}\) member of the population is selected to be in the sample.
Example 1.2.5.
To select a sample using systematic sampling, a pollster calls every 100th name in the phone book.
Systematic sampling is not as random as a simple random sample (if your name is Albert Aardvark and your sister Alexis Aardvark is right after you in the phone book, there is no way you could both end up in the sample) but it can yield acceptable samples.
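A minimal Python sketch of systematic sampling on a hypothetical phone list; starting at a random offset keeps the first selection from always being the very first name:

```python
import random

# Hypothetical alphabetized phone list.
phone_book = [f"name_{i:06d}" for i in range(250_000)]

# Take every 100th name, beginning at a random point in the first 100 entries.
start = random.randrange(100)
sample = phone_book[start::100]
print(len(sample))  # 2500
```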
Perhaps the worst types of sampling methods are convenience samples and voluntary response samples.
Convenience sampling and voluntary response sampling.
In convenience sampling, the sample is chosen by selecting whoever is convenient.
In voluntary response sampling, members of the sample volunteer themselves.
Example 1.2.6.
A pollster stands on a street corner and interviews the first 100 people who agree to speak to him. This is a convenience sample.
Example 1.2.7.
A website has a survey asking readers to give their opinion on a tax proposal. This is a self-selected sample, or voluntary response sample, in which respondents volunteer to participate.
Usually voluntary response samples are skewed towards people who have a particularly strong opinion about the subject of the survey or who just have way too much time on their hands and enjoy taking surveys.
Exploration 1.2.1.
In each case, indicate what sampling method was used.
Every 4th person in the class was selected
A sample was selected to contain 25 men and 35 women
Viewers of a new show are asked to vote on the show’s website
A website randomly selects 50 of its customers to send a satisfaction survey to
To survey voters in a town, a polling company randomly selects 10 city blocks, and interviews everyone who lives on those blocks.
Solution.
Systematic
Stratified or Quota
Voluntary response
Simple random
Cluster
Subsection 1.2.2 How to Mess Things Up Before You Start (Sampling Bias)
There are a number of ways that a study can be ruined before you even start collecting data. The first we have already explored – sampling or selection bias, which occurs when the sample is not representative of the population. One example of this is voluntary response bias, which is bias introduced by only collecting data from those who volunteer to participate. This is not the only potential source of bias.
Sources of bias.
Sampling bias – when the sample is not representative of the population
Voluntary response bias – the sampling bias that often occurs when the sample is volunteers
Self-interest study – bias that can occur when the researchers have an interest in the outcome
Response bias – when the responder gives inaccurate responses for any reason
Perceived lack of anonymity – when the responder fears giving an honest answer might negatively affect them
Loaded questions – when the question wording influences the responses
Non-response bias – when people refusing to participate in the study can influence the validity of the outcome
Example 1.2.8.
Consider a recent study which found that chewing gum may raise math grades in teenagers. This study was conducted by the Wrigley Science Institute, a branch of the Wrigley chewing gum company. This is an example of a self-interest study: one in which the researchers have a vested interest in the outcome of the study. While this does not necessarily ensure that the study was biased, it certainly suggests that we should subject the study to extra scrutiny.
Example 1.2.9.
A survey asks people “when was the last time you visited your doctor?” This might suffer from response bias, since many people might not remember exactly when they last saw a doctor and give inaccurate responses.
Sources of response bias may be innocent, such as bad memory, or intentional, such as pressure from the pollster. Consider, for example, how many voting initiative petitions people sign without even reading them.
Example 1.2.10.
A survey asks participants a question about their interactions with members of other races. Here, a perceived lack of anonymity could influence the outcome. The respondent might not want to be perceived as racist even if they are, and give an untruthful answer.
Example 1.2.11.
An employer puts out a survey asking their employees if they have a drug abuse problem and need treatment help. Here, answering truthfully might have consequences; responses might not be accurate if the employees do not feel their responses are anonymous or fear retribution from their employer.
Example 1.2.12.
A survey asks “do you support funding research of alternative energy sources to reduce our reliance on high-polluting fossil fuels?” This is an example of a loaded or leading question – questions whose wording leads the respondent towards an answer.
Loaded questions can occur intentionally by pollsters with an agenda, or accidentally through poor question wording. Also a concern is question order, where the order of questions changes the results. A psychology researcher provides an example:
“My favorite finding is this: we did a study where we asked students, “How satisfied are you with your life? How often do you have a date?” The two answers were not statistically related - you would conclude that there is no relationship between dating frequency and life satisfaction. But when we reversed the order and asked, “How often do you have a date? How satisfied are you with your life?” the statistical relationship was a strong one. You would now conclude that there is nothing as important in a student’s life as dating frequency.”
Example 1.2.13.
A telephone poll asks the question “Do you often have time to relax and read a book?”, and 50% of the people called refused to answer the survey. It is unlikely that the results will be representative of the entire population. This is an example of non-response bias, introduced by people refusing to participate in a study or dropping out of an experiment. When people refuse to participate, we can no longer be so certain that our sample is representative of the population.
Exploration 1.2.2.
In each situation, identify a potential source of bias.
A survey asks how many sexual partners a person has had in the last year.
A radio station asks readers to phone in their choice in a daily poll.
A substitute teacher wants to know how students in the class did on their last test. The teacher asks the 10 students sitting in the front row to state their latest test score.
High school students are asked if they have consumed alcohol in the last two weeks.
The Beef Council releases a study stating that consuming red meat poses little cardiovascular risk.
A poll asks “Do you support a new transportation tax, or would you prefer to see our public transportation system fall apart?”
Solution.
Response bias – historically, men tend to over-report and women tend to under-report on this question.
Voluntary response bias – the sample is self-selected.
Sampling bias – the sample may not be representative of the whole class.
Lack of anonymity
Self-interest study
Loaded question
Subsection 1.2.3 Experiments
So far, we have primarily discussed observational studies — studies in which conclusions are drawn from observations of a sample or the population. In some cases these observations might be unsolicited, such as studying the percentage of cars that turn right at a red light even when there is a “no turn on red” sign. In other cases the observations are solicited, as in a survey or a poll.
In contrast, it is common to use experiments when exploring how subjects react to an outside influence. In an experiment, some kind of treatment is applied to the subjects and the results are measured and recorded. By applying some treatment to the subjects, the researchers are controlling one of the variables, which does not occur in an observational study. While the term “treatment” comes from the field of medicine, we are using it to refer to any effect controlled by the researchers.
Observational studies and experiments.
An observational study is a study based on observations or measurements. The researchers do not control any variable being studied, but rather measure a population as it is.
An experiment is a study in which the effects of a treatment are measured. The treatment is some effect that the researchers can control.
Here are some examples of experiments:
Example 1.2.14.
A pharmaceutical company tests a new medicine for treating Alzheimer’s disease by administering the drug to 50 elderly patients with recent diagnoses. The treatment here is the new drug.
A gym tests out a new weight loss program by enlisting 30 volunteers to try out the program. The treatment here is the new program.
You test a new kitchen cleaner by buying a bottle and cleaning your kitchen. The new cleaner is the treatment.
A psychology researcher explores the effect of music on temperament by measuring people’s temperament while listening to different types of music. The music is the treatment.
Exploration 1.2.3.
Is each scenario describing an observational study or an experiment?
The weights of 30 randomly selected people are measured.
Subjects are asked to do 20 jumping jacks, and then their heart rates are measured.
Twenty people are told to drink coffee and twenty are told to drink tea. They are then given a concentration test.
Researchers survey 100 students, asking whether they drink coffee or tea. They then give these 100 people a concentration test.
Solution.
Observational study
Experiment; the treatment is the jumping jacks
Experiment; the treatments are coffee and tea
Observational study
Experiments can often yield more robust results than observational studies; however, observational studies are sometimes necessary for ethical or logistical reasons. For example, suppose researchers are studying the effects of smoking. They could not ethically ask an experimental group to start smoking, so they would have to perform an observational study instead.
The design of an experiment will influence its accuracy. Let’s start to investigate this more.
Example 1.2.15.
Suppose a middle school (junior high) finds that their students are not scoring well on the state’s standardized math test. They decide to run an experiment to see if an alternate curriculum would improve scores. To run the test, they hire a math specialist to come in and teach a class using the new curriculum. To their delight, they see an improvement in test scores.
The difficulty with this scenario is that it is not clear whether the curriculum is responsible for the improvement, or whether the improvement is due to a math specialist teaching the class. This is called confounding – when it is not clear which factor or factors caused the observed effect. Confounding is the downfall of many experiments, though sometimes it is hidden.
Confounding.
Confounding occurs when there are two potential variables that could have caused the outcome, and it is not possible to determine which actually caused the result.
Example 1.2.16.
A drug company study about a weight loss pill might report that people lost an average of 4 kg while using their new drug. However, in the fine print you find a statement saying that participants were encouraged to also diet and exercise. It is not clear in this case whether the weight loss is due to the pill, to diet and exercise, or a combination of both. In this case confounding has occurred.
Example 1.2.17.
Researchers conduct an experiment to determine whether students will perform better on an arithmetic test if they listen to music during the test. They first give each student a test without music, then a similar test while the student listens to music. In this case, a student might perform better on the second test, regardless of the music, simply because it was the second test and they were warmed up.
There are a number of measures that can be introduced to help reduce the likelihood of confounding. The primary measure is to use a control group.
Control group.
When using a control group, the participants are divided into two or more groups, typically a control group and a treatment group. The treatment group receives the treatment being tested; the control group does not receive the treatment.
Ideally, the groups are otherwise as similar as possible, isolating the treatment as the only potential source of difference between the groups. For this reason, the method of dividing groups is important. Some researchers attempt to ensure that the groups have similar characteristics (same number of females, same number of people over 50, etc.), but it is nearly impossible to control for every characteristic. Because of this, random assignment is very commonly used—that is, the choice of which participants are in the treatment and control groups is random. For this reason, such experiments are often called randomized controlled trials.
Note that we have now introduced two kinds of randomness. First, the participants in the study must be randomly selected: that is, the choice of who participates in the study in the first place must be random. This reduces selection bias, ensuring that participants roughly represent the overall population. Second, the participants must be randomly assigned to either the treatment or control group, as described above. These two types of randomness are distinct, and are both important for good experimental design.
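To make the distinction concrete, here is a minimal Python sketch of the second kind of randomness, random assignment, assuming the participants have already been randomly selected:

```python
import random

# Hypothetical pool of participants, already randomly selected for the study.
participants = [f"participant_{i}" for i in range(100)]

# Randomly assign half to the treatment group and half to the control group.
random.shuffle(participants)
treatment_group = participants[:50]
control_group = participants[50:]
```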
Example 1.2.18.
To determine if a two-day prep course would help high school students improve their scores on the SAT test, a group of students was randomly divided into two subgroups. The first group, the treatment group, was given the two-day prep course. The second group, the control group, was not given the prep course. Afterwards, both groups were given the SAT.
Example 1.2.19.
A company testing a new plant food grows two crops of plants in adjacent fields, the treatment group receiving the new plant food and the control group not. The crop yield would then be compared. By growing them at the same time in adjacent fields, they are controlling for weather and other confounding factors.
Sometimes not giving the control group anything does not completely control for confounding variables. For example, suppose a medicine study is testing a new headache pill by giving the treatment group the pill and the control group nothing. If the treatment group showed improvement, we would not know whether it was due to the medicine in the pill or simply a response to having taken a pill at all. This is called a placebo effect.
Placebo effect.
The placebo effect occurs when the apparent effectiveness of a treatment is influenced by the patient’s expectation of how effective the treatment will be, so a result might be seen even if the treatment is ineffectual.
Example 1.2.20.
A study found that, during painful dental tooth extractions, patients who were told they were receiving a strong painkiller while actually receiving a saltwater injection reported as much pain relief as patients receiving a dose of morphine.
To control for the placebo effect, a placebo, or dummy treatment, is often given to the control group. This way, both groups are truly identical except for the specific treatment given.
Placebo and placebo-controlled experiments.
A placebo is a dummy treatment given to control for the placebo effect.
An experiment that gives the control group a placebo is called a placebo-controlled experiment.
Example 1.2.21.
In a study for a new medicine that is dispensed in a pill form, a sugar pill could be used as a placebo.
In a study on the effect of alcohol on memory, a non-alcoholic beer might be given to the control group as a placebo.
In a study of a frozen meal diet plan, the treatment group would receive the diet food, and the control could be given standard frozen meals stripped of their original packaging.
In some cases, it is more appropriate to compare to a conventional treatment than to a placebo. For example, in a cancer research study, it would not be ethical to deny any treatment to the control group or to give a placebo treatment. In this case, the currently accepted medicine would be given to the control group, which is then sometimes called a comparison group. In
Example 1.2.18, the non-treatment group would most likely be encouraged to study on their own, rather than be asked to not study at all, to provide a meaningful comparison.
When using a placebo, it would defeat the purpose if the participant knew they were receiving the placebo.
Blind studies.
A blind study is one in which the participant does not know whether they are receiving the treatment or a placebo.
A double-blind study is one in which those interacting with the participants don’t know who is in the treatment group and who is in the control group.
Example 1.2.22.
In a study about anti-depression medicine, you would not want the psychological evaluator to know whether the patient is in the treatment or control group either, as it might influence their evaluation, so the experiment should be conducted as a double-blind study.
It should be noted that not every experiment needs a control group.
Example 1.2.23.
If a researcher is testing whether a new fabric can withstand fire, she simply needs to torch multiple samples of the fabric – there is no need for a control group.
Exploration 1.2.4.
To test a new lie detector, two groups of subjects are given the new test. One group is asked to answer all the questions truthfully, and the second group is asked to lie on one set of questions. The person administering the lie detector test does not know what group each subject is in.
Does this experiment have a control group? Is it blind, double-blind, or neither?
Solution.
The truth-telling group could be considered the control group, but really both groups are treatment groups here, since it is important for the lie detector to be able to correctly identify lies, and also not identify truth telling as lying. This study is blind, since the person running the test does not know what group each subject is in.