The primary reason for random sampling – whether a simple random sample or a more complex stratified sample – is to avoid bias in selecting the individuals to be included in the sample. A bias is a systematic difference between the characteristics of the members of the sample and the population from which it is drawn.
Biases can be introduced purposefully or by accident. For example, suppose you are interested in describing the age distribution of the population. The easiest way to obtain a sample would be simply to select the people whose age is to be measured from the people in your biostatistics class. The problem with this convenience sample is that you will be leaving out everyone not old enough to be studying biostatistics, as well as those who have outgrown the desire to do so. The results obtained from this convenience sample would probably underestimate both the mean age of people in the entire population and the amount of variation in the population. Biases can also be introduced by selectively placing people in one comparison group or another. For example, if one is conducting an experiment to compare a new drug with conventional therapy, it would be possible to bias the results by putting the sicker people in the conventional therapy group, with the expectation that they would do worse than people who were not as sick and were receiving the new drug. Random sampling protects against both these kinds of biases.
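The two protections described above – random selection from the population and random assignment to comparison groups – can be sketched in a few lines of Python. The ages, subject IDs, and group sizes below are hypothetical, chosen only to illustrate the mechanics.

```python
import random

# Hypothetical example: ages of everyone in a small population.
population_ages = [4, 12, 19, 23, 31, 38, 45, 52, 60, 67, 74, 81]

# Simple random sample: every member of the population has the same
# chance of being selected, so no age group is systematically excluded
# the way a biostatistics-class convenience sample would exclude them.
random.seed(1)  # fixed seed so the sketch is reproducible
sample = random.sample(population_ages, k=4)

# Random assignment to comparison groups: shuffle the subjects, then
# split the shuffled list, so the sicker people cannot deliberately be
# stacked into the conventional-therapy group.
subjects = list(range(10))  # hypothetical subject IDs
random.shuffle(subjects)
new_drug_group = subjects[:5]
conventional_group = subjects[5:]
```

The key point is that chance, not the investigator's judgement, decides both who enters the sample and who receives which treatment.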
Biases can also be introduced when there is a systematic error in the measuring device, such as when the zero on a bathroom scale is set too high or too low, so that all measurements are above or below the real weight.
Another source of bias can come from the people making or reporting the measurements if they have hopes or beliefs that the treatment being tested is or is not superior to the control group or conventional therapy being studied. It is common, particularly in clinical research, for there to be some room for judgement in making and reporting measurements. If the investigator wants the study to come out one way or another, there is always the possibility for reading the measurements systematically low in one group and systematically high in another.
The best way to avoid this measurement bias is to have the person making the measurements blinded to which treatment led to the data being measured. For example, suppose that one is doing a comparison of the efficacy of two different stents (small tubes inserted into arteries) to keep coronary arteries (arteries in the heart) open. To blind the measurements, the person reading the data on artery size would not know whether the data came from a person in the control group (who did not receive a stent), or which of the different stents was used in a given person.
Another kind of bias is due to the placebo effect, the tendency of people to report a change in condition simply because they received a treatment, even if the treatment had no biologic effect. For example, about one-third of people given an inert injection that they thought was an anesthetic reported a lessening of dental pain. To control for this effect in clinical experiments, it is common to give one group a placebo so that they think that they are receiving a treatment. Examples of placebos include an injection of saline, a sugar pill, or a sham operation in which the incision is surgically opened and closed without performing any specific procedure on the target organ. Leaving out a placebo control can seriously bias the results of an experiment in favour of the treatment. Ideally, the experimental subject would not know if they were receiving a placebo or an active treatment. When the subject does not know whether they received a placebo or not, the subject is blinded.
When neither the investigator nor the subject knows who received which treatment, the study is double blinded. For example, in double-blind drug studies, people are assigned treatments at random, and neither the subject nor the person delivering the drug and measuring the outcome knows whether the subject received an active drug or a placebo. The drugs are delivered with only a number code identifying them. The code is broken only after all the data have been collected.
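The number-code scheme can be sketched as follows. This is a minimal illustration, not an implementation of any particular trial protocol; the subject IDs, group sizes, and code labels are hypothetical.

```python
import random

# Hypothetical sketch of double-blind coding: each treatment is labelled
# only with a number code, and the key linking codes to treatments is
# held back until all the data have been collected.
random.seed(2)  # fixed seed so the sketch is reproducible

subjects = ["S01", "S02", "S03", "S04", "S05", "S06"]

# Half the subjects receive the active drug, half a placebo, at random.
treatments = ["active drug"] * 3 + ["placebo"] * 3
random.shuffle(treatments)

# The key is sealed away with a third party; the clinic sees only codes.
key = {f"code-{i:02d}": t for i, t in enumerate(treatments, start=1)}
labels_seen_in_clinic = list(key)  # code numbers only, no treatment names

# Only after data collection is the code "broken" by consulting the key,
# so neither measurement nor reporting can be slanted toward either group.
```

Because the person measuring the outcome sees only `code-01`, `code-02`, and so on, there is no opportunity to read the measurements systematically high or low for one group.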