Analysis-of-variance procedures rely on a distribution called the F-distribution, named in honor of Sir Ronald Fisher. A variable is said to have an F-distribution if its distribution has the shape of a special type of right-skewed curve, called an F-curve. There are infinitely many F-distributions (and F-curves); we identify an F-distribution (and F-curve) by its number of degrees of freedom, just as we did for t-distributions and chi-square distributions.

[Figure 16.1: two F-curves, one with df = (10, 2) and one with df = (9, 50).]

An F-distribution, however, has two numbers of degrees of freedom instead of one. Figure 16.1 depicts two different F-curves; one has df = (10, 2), and the other has df = (9, 50). The first number of degrees of freedom for an F-curve is called the degrees of freedom for the numerator, and the second is called the degrees of freedom for the denominator.

Basic properties of F-curves:

  • The total area under an F-curve equals 1.
  • An F-curve starts at 0 on the horizontal axis and extends indefinitely to the right, approaching, but never touching, the horizontal axis as it does so.
  • An F-curve is right skewed.
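
These properties can be checked numerically. Below is a minimal Python sketch (using scipy.stats.f; the two df pairs are the ones from Figure 16.1) that confirms the total area under each curve is 1 and that the density is zero to the left of the origin:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# The two F-curves from Figure 16.1: df = (10, 2) and df = (9, 50).
for dfn, dfd in [(10, 2), (9, 50)]:
    curve = stats.f(dfn, dfd)
    area, _ = quad(curve.pdf, 0, np.inf)   # total area under the F-curve
    print(f"df = ({dfn}, {dfd}): area = {area:.4f}, "
          f"pdf(-1) = {curve.pdf(-1.0)}")  # density is zero left of 0
```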

One-Way ANOVA: The Logic

In older threads, you learned how to compare two population means, that is, the means of a single variable for two different populations. You studied various methods for making such comparisons, one being the pooled t-procedure.

Analysis of variance (ANOVA) provides methods for comparing several population means, that is, the means of a single variable for several populations. In this section we present the simplest kind of ANOVA, one-way analysis of variance. This type of ANOVA is called one-way analysis of variance because it compares the means of a variable for populations that result from a classification by one other variable, called the factor. The possible values of the factor are referred to as the levels of the factor.

For example, suppose that you want to compare the mean energy consumption by households among the four regions of the United States. The variable under consideration is “energy consumption,” and there are four populations: households in the Northeast, Midwest, South, and West. The four populations result from classifying households in the United States by the factor “region,” whose levels are Northeast, Midwest, South, and West.

One-way analysis of variance is the generalization to more than two populations of the pooled t-procedure (i.e., both procedures give the same results when applied to two populations). As in the pooled t-procedure, we make the following assumptions:

  1. Simple random samples: the samples taken from the populations are simple random samples.
  2. Independent samples: the samples taken from the populations are independent of one another.
  3. Normal populations: the variable under consideration is normally distributed on each population.
  4. Equal standard deviations: the standard deviations of the variable are the same for all the populations.

Regarding Assumptions 1 and 2, we note that one-way ANOVA can also be used as a method for comparing several means with a designed experiment. In addition, like the pooled t-procedure, one-way ANOVA is robust to moderate violations of Assumption 3 (normal populations) and is also robust to moderate violations of Assumption 4 (equal standard deviations) provided the sample sizes are roughly equal.

How can the conditions of normal populations and equal standard deviations be checked? Normal probability plots of the sample data are effective in detecting gross violations of normality. Checking equal population standard deviations, however, can be difficult, especially when the sample sizes are small; as a rule of thumb, you can consider that condition met if the ratio of the largest to the smallest sample standard deviation is less than 2. We call that rule of thumb the rule of 2.
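
As a concrete illustration of the rule of 2, here is a minimal sketch; the helper name rule_of_2_met and the sample data are hypothetical:

```python
import numpy as np

def rule_of_2_met(samples):
    """Rule of thumb: consider the equal-standard-deviations condition met
    if the ratio of the largest to the smallest sample standard deviation
    is less than 2."""
    sds = [np.std(s, ddof=1) for s in samples]  # sample standard deviations
    return max(sds) / min(sds) < 2

# Hypothetical samples from three populations (illustrative data only).
rng = np.random.default_rng(0)
samples = [rng.normal(20, 3, 12), rng.normal(25, 4, 10), rng.normal(22, 3, 15)]
print(rule_of_2_met(samples))
```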

Another way to assess the normality and equal-standard-deviations assumptions is to perform a residual analysis. In ANOVA, the residual of an observation is the difference between the observation and the mean of the sample containing it. If the normality and equal-standard-deviations assumptions are met, a normal probability plot of (all) the residuals should be roughly linear. Moreover, a plot of the residuals against the sample means should fall roughly in a horizontal band centered and symmetric about the horizontal axis.
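
A residual analysis along these lines might look like the following sketch (hypothetical data); the two plots are the normal probability plot of all the residuals and the plot of residuals against sample means:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical samples from three populations (illustrative data only).
rng = np.random.default_rng(1)
groups = [rng.normal(20, 3, 12), rng.normal(25, 3, 10), rng.normal(22, 3, 15)]

# Residual = observation minus the mean of the sample containing it.
residuals = np.concatenate([g - g.mean() for g in groups])
means = np.concatenate([np.full(len(g), g.mean()) for g in groups])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
stats.probplot(residuals, plot=ax1)   # should be roughly linear
ax2.scatter(means, residuals)         # should form a horizontal band about 0
ax2.axhline(0, color="gray")
ax2.set(xlabel="sample mean", ylabel="residual")
plt.show()
```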

The Logic Behind One-Way ANOVA

The reason for the word variance in analysis of variance is that the procedure for comparing the means analyzes the variation in the sample data. To examine how this procedure works, let’s suppose that independent random samples are taken from two populations, say, Populations 1 and 2, with means 𝜇1 and 𝜇2. Further, let’s suppose that the means of the two samples are x̄1 = 20 and x̄2 = 25. Can we reasonably conclude from these statistics that 𝜇1 ≠ 𝜇2, that is, that the population means are (significantly) different? To answer this question, we must consider the variation within the samples.
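
To see why within-sample variation matters, the following sketch (hypothetical data) contrasts two scenarios with sample means near 20 and 25 but very different spreads, using the pooled t-test mentioned earlier; with small within-sample spread the difference tends to be statistically significant, while with large spread it typically is not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Same population means (20 vs 25), different within-sample spread.
small_spread = (rng.normal(20, 1, 15), rng.normal(25, 1, 15))
large_spread = (rng.normal(20, 12, 15), rng.normal(25, 12, 15))

for label, (x1, x2) in [("small spread", small_spread),
                        ("large spread", large_spread)]:
    # Pooled t-test (equal-variance two-sample t-test).
    t, p = stats.ttest_ind(x1, x2, equal_var=True)
    print(f"{label}: x̄1 = {x1.mean():.1f}, x̄2 = {x2.mean():.1f}, P = {p:.4f}")
```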

The basic idea for performing a one-way analysis of variance to compare the means of several populations:

  • Take independent simple random samples from the populations.
  • Compute the sample means.
  • If the variation among the sample means is large relative to the variation within the samples, conclude that the means of the populations are not all equal (significantly different).

To make this process precise, we need quantitative measures of the variation among the sample means and the variation within the samples. We also need an objective method for deciding whether the variation among the sample means is large relative to the variation within the samples.

Mean Squares and F-Statistic in One-Way ANOVA

As before, when dealing with several populations, we use subscripts on parameters and statistics. Thus, for Population j, we use 𝜇j, x̄j, sj, and nj to denote the population mean, sample mean, sample standard deviation, and sample size, respectively.

We first consider the measure of variation among the sample means. In hypothesis tests for two population means, we measure the variation between the two sample means by calculating their difference, x̄1 − x̄2. When more than two populations are involved, we cannot measure the variation among the sample means simply by taking a difference. However, we can measure that variation by computing the standard deviation or variance of the sample means or by computing any descriptive statistic that measures variation.

In one-way ANOVA, we measure the variation among the sample means by a weighted average of their squared deviations about the mean, x̄, of all the sample data. That measure of variation is called the treatment mean square, MSTR, and is defined as

MSTR = SSTR / (k − 1)

where k denotes the number of populations being sampled and

SSTR = n1(x̄1 − x̄)² + n2(x̄2 − x̄)² + ⋯ + nk(x̄k − x̄)²

The quantity SSTR is called the treatment sum of squares.

We note that MSTR is similar to the sample variance of the sample means. In fact, if all the sample sizes are identical, then MSTR equals that common sample size times the sample variance of the sample means.
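
Here is a minimal sketch of the SSTR and MSTR computations on hypothetical data; it also verifies the fact just noted, that with equal sample sizes MSTR equals the common sample size times the sample variance of the sample means:

```python
import numpy as np

# Hypothetical samples of equal size from k = 3 populations.
rng = np.random.default_rng(7)
samples = [rng.normal(m, 3, 10) for m in (20, 25, 22)]

k = len(samples)
xbar = np.concatenate(samples).mean()          # mean of all the sample data
means = np.array([s.mean() for s in samples])  # the sample means x̄j
sizes = np.array([len(s) for s in samples])    # the sample sizes nj

sstr = np.sum(sizes * (means - xbar) ** 2)     # treatment sum of squares
mstr = sstr / (k - 1)                          # treatment mean square

# With equal sample sizes: MSTR = common size * variance of sample means.
print(np.isclose(mstr, sizes[0] * np.var(means, ddof=1)))  # True
```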

Next we consider the measure of variation within the samples. This measure is the pooled estimate of the common population variance, 𝜎². It is called the error mean square, MSE, and is defined as

MSE = SSE / (n − k)

where n denotes the total number of observations and 

SSE = (n1 − 1)s1² + (n2 − 1)s2² + ⋯ + (nk − 1)sk²

The quantity SSE is called the error sum of squares. Finally, we consider how to compare the variation among the sample means, MSTR, to the variation within the samples, MSE. To do so, we use the statistic F = MSTR/MSE, which we refer to as the F-statistic. Large values of F indicate that the variation among the sample means is large relative to the variation within the samples and hence that the null hypothesis of equal population means should be rejected.
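
Putting the pieces together, the following sketch (hypothetical data) computes SSTR, SSE, MSTR, MSE, and the F-statistic exactly as defined above, obtains the P-value from the F-distribution with df = (k − 1, n − k), and checks the result against scipy.stats.f_oneway:

```python
import numpy as np
from scipy import stats

# Hypothetical samples from k = 3 populations (illustrative data only).
rng = np.random.default_rng(7)
samples = [rng.normal(m, 3, 10) for m in (20, 25, 22)]

k = len(samples)
n = sum(len(s) for s in samples)           # total number of observations
xbar = np.concatenate(samples).mean()      # mean of all the sample data

sstr = sum(len(s) * (s.mean() - xbar) ** 2 for s in samples)
sse = sum((len(s) - 1) * s.var(ddof=1) for s in samples)

mstr = sstr / (k - 1)                      # variation among the sample means
mse = sse / (n - k)                        # variation within the samples
F = mstr / mse
p = stats.f.sf(F, k - 1, n - k)            # right-tail area under the F-curve

print(F, p)
print(stats.f_oneway(*samples))            # should report the same F and P
```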

In summary,

  • MSTR = SSTR / (k − 1) measures the variation among the sample means;
  • MSE = SSE / (n − k) measures the variation within the samples;
  • F = MSTR / MSE is the test statistic, and when the null hypothesis of equal population means is true, it has an F-distribution with df = (k − 1, n − k).