Statistical Procedures – Confidence Interval

August 26, 2017 – Medical Statistics

Confidence Intervals for One Population Mean

A common problem in statistics is to obtain information about the mean, μ, of a population. One way to obtain such information without taking a census is to estimate μ by a sample mean x(bar). A point estimate of a parameter is the value of a statistic used to estimate the parameter. More generally, a statistic is called an unbiased estimator of a parameter if the mean of all its possible values equals the parameter; otherwise, the statistic is called a biased estimator of the parameter. Ideally, we want our statistic to be unbiased and to have a small standard error. In that case, chances are good that our point estimate (the value of the statistic) will be close to the parameter.

However, a sample mean is usually not exactly equal to the population mean, especially when the standard error is not small, as stated previously. Therefore, we should accompany any point estimate of μ with information that indicates the accuracy of that estimate. This information is called a confidence-interval estimate for μ. By definition, a confidence interval (CI) is an interval of numbers obtained from a point estimate of a parameter. The confidence level is the confidence we have that the parameter lies in the confidence interval, and the confidence-interval estimate consists of the confidence interval together with its confidence level. A confidence interval for a population mean depends on the sample mean, x(bar), which in turn depends on the sample selected.

The margin of error E indicates how accurate the sample mean x(bar) is as an estimate of the unknown parameter μ. With the point estimate and a 95% confidence-interval estimate, we can be 95% confident that μ lies within E of the sample mean; in other words, μ = point estimate ± E.

Summary

  • Point estimate
  • Confidence-interval estimate
  • Margin of error

Computing the Confidence-Interval for One Population Mean (σ known)

We now develop a step-by-step procedure to obtain a confidence interval for a population mean when the population standard deviation is known. In doing so, we assume that the variable under consideration is normally distributed. Because of the central limit theorem, however, the procedure will also yield an approximately correct confidence interval when the sample size is large, regardless of the distribution of the variable. The basis of our confidence-interval procedure is the sampling distribution of the sample mean for a normally distributed variable: Suppose that a variable x of a population is normally distributed with mean μ and standard deviation σ. Then, for samples of size n, the variable x(bar) is also normally distributed and has mean μ and standard deviation σ/√n. As a consequence, we have the following procedure for computing the confidence interval.
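
Procedure 8.1 itself is not reproduced here, but the interval it produces is x(bar) ± zα/2 · σ/√n. As a minimal sketch of that computation (the function name and the example numbers are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def one_mean_z_interval(xbar, sigma, n, confidence=0.95):
    """Confidence interval for mu when sigma is known: xbar +/- z_(alpha/2) * sigma / sqrt(n)."""
    alpha = 1 - confidence
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_(alpha/2); about 1.96 for 95% confidence
    e = z * sigma / sqrt(n)                   # margin of error E
    return xbar - e, xbar + e

# Hypothetical example: sample mean 100 from n = 25 observations, known sigma = 15
print(one_mean_z_interval(100, 15, 25))       # about (94.12, 105.88)
```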

PS: The one-mean z-interval procedure is also known as the one-sample z-interval procedure and the one-variable z-interval procedure. We prefer "one-mean" because it makes clear the parameter being estimated.

PS: By saying that the confidence interval is exact, we mean that the true confidence level equals 1 – α; by saying that the confidence interval is approximately correct, we mean that the true confidence level only approximately equals 1 – α.

Before applying Procedure 8.1, we need to make several comments about it and the assumptions for its use, including:

  • We use the term normal population as an abbreviation for "the variable under consideration is normally distributed."
  • The z-interval procedure works reasonably well even when the variable is not normally distributed and the sample size is small or moderate, provided the variable is not too far from being normally distributed. Thus we say that the z-interval procedure is robust to moderate violations of the normality assumption.
  • Watch for outliers because their presence calls into question the normality assumption. Moreover, even for large samples, outliers can sometimes unduly affect a z-interval because the sample mean is not resistant to outliers.
  • A statistical procedure that works reasonably well even when one of its assumptions is violated (or moderately violated) is called a robust procedure relative to that assumption.

Summary

Key Fact 8.1 makes it clear that you should conduct preliminary data analyses before applying the z-interval procedure. More generally, the following fundamental principle of data analysis is relevant to all inferential procedures: Before performing a statistical-inference procedure, examine the sample data. If any of the conditions required for using the procedure appear to be violated, do not apply the procedure. Instead use a different, more appropriate procedure, if one exists. Even for small samples, where graphical displays must be interpreted carefully, it is far better to examine the data than not to. Remember, though, to proceed cautiously when conducting graphical analyses of small samples, especially very small samples – say, of size 10 or less.

Sample Size Estimation

If the margin of error and confidence level are specified in advance, then we must determine the sample size needed to meet those specifications. To find the formula for the required sample size, we solve the margin-of-error formula, E = zα/2 · σ/√n, for n, which gives n = (zα/2 · σ/E)², rounded up to the nearest whole number. See the computing formula in Formula 8.2.
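
Formula 8.2 is not reproduced here either; assuming it is the usual rearrangement n = (zα/2 · σ/E)² rounded up, a small sketch (the function name and numbers are made up):

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(sigma, margin_of_error, confidence=0.95):
    """Smallest n for which z_(alpha/2) * sigma / sqrt(n) does not exceed the desired margin of error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / margin_of_error) ** 2)

# Hypothetical example: sigma = 15 and a desired margin of error E = 3 at 95% confidence
print(required_sample_size(15, 3))   # 97
```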

Computing the Confidence-Interval for One Population Mean (σ unknown)

So far, we have discussed how to obtain a confidence-interval estimate when the population standard deviation, σ, is known. What if, as is usual in practice, the population standard deviation is unknown? Then we cannot base our confidence-interval procedure on the standardized version of x(bar). The best we can do is estimate the population standard deviation, σ, by the sample standard deviation, s; in other words, we replace σ by s in Procedure 8.1 and base our confidence-interval procedure on the resulting variable t (the studentized version of x(bar)). Unlike the standardized version, the studentized version of x(bar) does not have a normal distribution.

Suppose that a variable x of a population is normally distributed with mean μ. Then, for samples of size n, the variable t has the t-distribution with n – 1 degrees of freedom. A variable with a t-distribution has an associated curve, called a t-curve. Although there is a different t-curve for each number of degrees of freedom, all t-curves are similar and resemble the standard normal curve. As the number of degrees of freedom becomes larger, t-curves look increasingly like the standard normal curve.

Having discussed t-distributions and t-curves, we can now develop a procedure for obtaining a confidence interval for a population mean when the population standard deviation is unknown. The procedure is called the one-mean t-interval procedure or, when no confusion can arise, simply the t-interval procedure.

Properties and guidelines for use of the t-interval procedure are the same as those for the z-interval procedure. In particular, the t-interval procedure is robust to moderate violations of the normality assumption but, even for large samples, can sometimes be unduly affected by outliers because the sample mean and sample standard deviation are not resistant to outliers.
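
As a rough sketch of the t-interval computation described above, x(bar) ± tα/2 · s/√n with n – 1 degrees of freedom (the data below are made up, and SciPy is assumed to be available for the t critical value):

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t   # assumes SciPy is available

def one_mean_t_interval(data, confidence=0.95):
    """Confidence interval for mu when sigma is unknown: xbar +/- t_(alpha/2, n-1) * s / sqrt(n)."""
    n = len(data)
    xbar, s = mean(data), stdev(data)                 # sample mean and sample standard deviation
    t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    e = t_crit * s / sqrt(n)
    return xbar - e, xbar + e

# Hypothetical sample of n = 8 observations
print(one_mean_t_interval([12.1, 11.4, 13.0, 12.7, 11.8, 12.3, 13.4, 12.0]))
```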

What If the Assumptions Are Not Satisfied?

Suppose you want to obtain a confidence interval for a population mean based on a small sample, but preliminary data analyses indicate either the presence of outliers or that the variable under consideration is far from normally distributed. As neither the z-interval procedure nor the t-interval procedure is appropriate, what can you do? Under certain conditions, you can use a nonparametric method. Most nonparametric methods do not require even approximate normality, are resistant to outliers and other extreme values, and can be applied regardless of sample size. However, parametric methods, such as the z-interval and t-interval procedures, tend to give more accurate results than nonparametric methods when the normality assumption and other requirements for their use are met.
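
The text does not name a particular nonparametric method. One classical example is the distribution-free confidence interval for the population median obtained by inverting the sign test; the sketch below is only an illustration of that idea, not the specific procedure the text has in mind:

```python
from math import comb

def median_ci(data, confidence=0.95):
    """Distribution-free CI for the population median based on order statistics (sign-test inversion)."""
    x = sorted(data)
    n = len(x)
    alpha = 1 - confidence
    # Find r = largest rank such that P(Binomial(n, 1/2) <= r - 1) <= alpha / 2.
    cum, r = 0.0, 0
    while r < n // 2 and cum + comb(n, r) * 0.5 ** n <= alpha / 2:
        cum += comb(n, r) * 0.5 ** n
        r += 1
    if r == 0:
        raise ValueError("sample too small for the requested confidence level")
    # The interval runs from the r-th smallest to the r-th largest observation.
    return x[r - 1], x[n - r]

# Hypothetical skewed sample containing outliers
print(median_ci([1.2, 0.8, 1.1, 5.9, 0.9, 1.4, 1.0, 7.3, 1.3, 1.1]))   # (0.9, 5.9)
```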

How to compute the expected 95% CI

June 22, 2017 – Medical Statistics

[Table 2-1. Abbreviated t score table: values of t by degrees of freedom]

The Random Sampling Distribution of Means

Imagine you have a hat containing 100 cards, numbered from 0 to 99. At random, you take out five cards, record the number written on each one, and find the mean of these five numbers. Then you put the cards back in the hat and draw another random sample, repeating the same process for about 10 minutes.

Do you expect that the means of each of these samples will be exactly the same? Of course not. Because of sampling error, they vary somewhat. If you plot all the means on a frequency distribution, the sample means form a distribution, called the random sampling distribution of means. If you actually try this, you will note that this distribution looks pretty much like a normal distribution. If you continued drawing samples and plotting their means ad infinitum, you would find that the distribution actually becomes a normal distribution! This holds true even if the underlying population was not at all normally distributed: in our population of cards in the hat, there is just one card with each number, so the shape of the population distribution is actually rectangular, yet the random sampling distribution of its means still tends to be normal.
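
A quick way to see this for the card example is to simulate it; the sketch below (the number of repetitions and the seed are arbitrary choices) draws 5 cards at a time and averages them:

```python
import random
from statistics import mean

# Simulate the hat of 100 cards: draw 5 at random, record their mean, replace, repeat.
random.seed(1)
hat = list(range(100))
sample_means = [mean(random.sample(hat, 5)) for _ in range(10_000)]

# Plotting sample_means as a histogram gives a roughly bell-shaped curve,
# even though the population itself is rectangular (uniform).
print(round(mean(sample_means), 1))   # close to the population mean of 49.5
```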

These principles are stated by the central limit theorem, which holds that the random sampling distribution of means will always tend to be normal, irrespective of the shape of the population distribution from which the samples were drawn. According to the theorem, the mean of the random sampling distribution of means is equal to the mean of the original population.

Like all distributions, the random sampling distribution of means not only has a mean but also has a standard deviation. This particular standard deviation, the standard deviation of the random sampling distribution of means, is the standard deviation of the population consisting of all the sample means. It has its own name: the standard error, or standard error of the mean. It is a measure of the extent to which the sample means deviate from the true population mean.

When repeated random samples are drawn from a population, most of the means of those samples are going to cluster around the original population mean. If the samples each consisted of just two cards, what would happen to the shape of the random sampling distribution of means? Clearly, with an n of just 2, there would be quite a high chance of any particular sample mean falling out toward the tails of the distribution, giving a broader, fatter shape to the curve, and hence a higher standard error. On the other hand, if the samples consisted of 25 cards each (n = 25), it would be very unlikely for many of their means to lie far from the center of the curve. Therefore, there would be a much thinner, narrower curve and a lower standard error.

So the shape of the random sampling distribution of means, as reflected by its standard error, is affected by the size of the samples. In fact, the standard error is equal to the population standard deviation (σ) divided by the square root of the size of the samples (n).
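
Under that formula, the two sample sizes discussed above give very different standard errors; a small arithmetic check using the card population:

```python
from math import sqrt
from statistics import pstdev

population = list(range(100))     # the 100 cards
sigma = pstdev(population)        # population standard deviation, about 28.9

for n in (2, 25):
    print(n, round(sigma / sqrt(n), 2))   # standard error = sigma / sqrt(n)
# n = 2  -> about 20.4 (broad, flat curve)
# n = 25 -> about 5.77 (narrow curve)
```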

Using the Standard Error

Because the random sampling distribution of means is normal, the z score of a sample mean can be expressed as z = (x(bar) – μ)/SE, where SE = σ/√n. Using this expression, it is possible to find the limits between which 95% of all possible random sample means would be expected to fall (z score = ±1.96).
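
For the card population, those 95% limits can be computed directly (n = 25 here is a hypothetical sample size):

```python
from math import sqrt
from statistics import mean, pstdev

population = list(range(100))                  # the cards in the hat
mu, sigma = mean(population), pstdev(population)
n = 25                                         # hypothetical sample size
se = sigma / sqrt(n)

# Limits between which about 95% of all possible sample means should fall
print(round(mu - 1.96 * se, 1), round(mu + 1.96 * se, 1))   # about 38.2 and 60.8
```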

Estimating the Mean of a Population

It has been shown that 95% of all possible sample means will lie within approximately ±2 (or, more exactly, ±1.96) standard errors of the population mean. In other words, a sample mean lies within ±1.96 standard errors of the population mean 95% of the time; conversely, the population mean lies within ±1.96 standard errors of the sample mean 95% of the time. These limits of ±1.96 standard errors are called the confidence limits.

95% confidence limits = x(bar) ± 1.96 × SE

Therefore, 95% confidence limits are approximately equal to the sample mean plus or minus two standard errors. The difference between the upper and lower confidence limits is called the confidence interval – sometimes abbreviated as CI. Researchers obviously want the confidence interval to be as narrow as possible. The formula for confidence limits shows that to make the confidence interval narrower (for a given level of confidence, such as 95%), the standard error must be made smaller.

Estimating the Standard Error

According to the formula above, we cannot calculate standard error unless we know population standard deviation (σ). In practice, σ will not be known: researchers hardly ever know the standard deviation of the population (and if they did, they would probably not need to use inferential statistics anyway).

As a result, the standard error cannot be calculated, and so z scores cannot be used. However, the standard error can be estimated using data that are available from the sample alone. The resulting statistic is the estimated standard error of the mean, usually called the estimated standard error, as shown by the formula below.

estimated SE = S/√n

where S is the sample standard deviation.
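
As a small sketch of this calculation with made-up numbers:

```python
from math import sqrt
from statistics import mean, stdev

sample = [98, 102, 95, 101, 99, 103, 97, 100]   # hypothetical sample

xbar = mean(sample)
s = stdev(sample)                     # sample standard deviation S (n - 1 in the denominator)
est_se = s / sqrt(len(sample))        # estimated standard error of the mean
print(round(xbar, 2), round(est_se, 2))   # about 99.38 and 0.94
```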

t Scores

The estimated standard error is used to find a statistic, t, that can be used in place of the z score. The t score, rather than the z score, must be used when making inferences about means that are based on estimates of population parameters rather than on the population parameters themselves. The t score is Student's t, which is calculated in much the same way as the z score. But while z was expressed in terms of the number of standard errors by which a sample mean lies above or below the population mean, t is expressed in terms of the number of estimated standard errors by which the sample mean lies above or below the population mean.

t = (x(bar) – μ)/estimated SE = (x(bar) – μ)/(S/√n)

Just as z score tables give the proportions of the normal distribution that lie above and below any given z score, t score tables provide the same information for any given t score. However, there is one difference: while the value of z for any given proportion of the distribution is constant, the value of t for any given proportion is not constant – it varies according to sample size. When the sample size is large (n > 100), the values of t and z are similar, but as samples get smaller, t and z scores become increasingly different.

Degrees of Freedom and t Tables

Table 2-1 (shown above) is an abbreviated t score table that shows the values of t corresponding to different areas under the t distribution for various sample sizes. Sample size (n) is not stated directly in t score tables; instead, the tables express sample size in terms of degrees of freedom (df). The mathematical concept behind degrees of freedom is complex and not needed for the purposes of the USMLE or for understanding statistics in medicine: for present purposes, df can be defined as simply equal to n – 1. Therefore, to determine the values of t that delineate the central 95% of the sampling distribution of means based on a sample size of 15, we would look in the table for the appropriate value of t for df = 14; this is sometimes written as t14. Table 2-1 shows that this value is 2.145.
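
Assuming SciPy is available, the same table value can be reproduced directly:

```python
from scipy.stats import t   # assumes SciPy is available

# Values of t that delineate the central 95% of the t distribution with df = 14
print(round(t.ppf(0.975, df=14), 3))   # 2.145, matching the table value for t14
```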

As n becomes larger (100 or more), the values of t are very close to the corresponding values of z.
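
This convergence toward the z value of 1.96 is easy to check (again assuming SciPy is available):

```python
from scipy.stats import norm, t

print(round(norm.ppf(0.975), 3))              # about 1.96
for df in (10, 30, 100, 1000):
    print(df, round(t.ppf(0.975, df=df), 3))  # 2.228, 2.042, 1.984, 1.962
```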