This example is taken from Charles Hicks's *Fundamental Concepts in the
Design of Experiments* (3rd ed.; HBJ, 1982, pp. 66-70). The example is
fictitious but illustrative.

Note: The data sets are all given below. Cut and paste them and use `read.table` to enter them into R.

A fleet manager wishes to compare the wearability of 4 brands of tire: A, B, C, and D. Four cars are available for the experiment and 4 tires of each brand are available, for a total of 16 tires. The idea is to mount 4 tires on each of the 4 cars, ask the driver of each of the cars to drive his/her car for 20,000 miles and then to measure tread loss. We will measure tread loss in mils (.001 inches). We will designate the 4 cars as cars I, II, III, and IV.

Hicks considers 3 possible experimental designs:

Here is design 1:

```
Car:         I    II   III   IV
           ----------------------
Brand        A    B    C    D
assignment   A    B    C    D
             A    B    C    D
             A    B    C    D
```

Can you see the obvious flaw in this design?

Another design, which avoids the confounding, is a completely randomized design (CRD). This entails numbering the 16 tires, drawing the numbers at random, and assigning tires to cars in a completely random manner. The following table illustrates a possible result of doing this.
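The randomization step just described can be sketched in R. This is only an illustration of the mechanics (the object names are mine, not Hicks's), and each run of `sample` will of course produce a different assignment than the table below:

```r
# Sketch of the CRD randomization: 16 tires (4 per brand) are dealt
# out to 16 wheel positions (4 per car) in one random draw.
set.seed(1)                                        # for a reproducible draw
tires <- rep(c("A", "B", "C", "D"), each = 4)      # the 16 numbered tires
cars  <- rep(c("I", "II", "III", "IV"), each = 4)  # 4 wheel positions per car
assignment <- data.frame(Car = cars, Brand = sample(tires))
table(assignment$Car, assignment$Brand)  # brands need NOT balance within a car
```

Note that nothing forces each car to see all four brands; that imbalance is exactly what the RCBD below repairs.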

```
Car:           I      II     III    IV
            -------------------------------
Brand         C(12)  A(14)  C(10)  A(13)
assignment    A(17)  A(13)  D(11)  D(9)
& loss in     D(13)  B(14)  B(14)  B(8)
thickness     D(11)  C(12)  B(13)  C(9)
(mils)
```

The appropriate analysis for such an experiment is the one-way ANOVA:

```r
tire1.df                   # look at the data frame
attach(tire1.df)
tapply(Wear, Brand, mean)
summary(aov(Wear ~ Brand))
```

```
> tapply(Wear, Brand, mean)
    A     B     C     D
14.25 12.25 10.75 11.00    # group means
> summary(aov(Wear ~ Brand))
            Df Sum Sq Mean Sq F value Pr(>F)
Brand        3 30.688  10.229  2.4428 0.1145
Residuals   12 50.250   4.188
```

Do you understand the data frame? Can we infer a difference in mean wear levels between the 4 brands? Are the assumptions for the one-way ANOVA met?

In this case, we do not reject the overall null hypothesis: it is tenable that the four population means are the same.
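As a check on the printed ANOVA table, the p-value can be recomputed directly from the F statistic and its degrees of freedom (the numbers below are copied from the output above):

```r
# Recompute the one-way ANOVA p-value from the printed mean squares.
F.stat  <- 10.229 / 4.188                 # MS(Brand) / MS(Residuals) = 2.4428
p.value <- pf(F.stat, df1 = 3, df2 = 12, lower.tail = FALSE)
round(p.value, 4)                         # matches the printed Pr(>F), 0.1145
```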

The third design considered by Hicks is the Randomized Complete Block Design (RCBD). In this case, each car tests all four brands. Thus one tire from each brand is selected at random and randomly allocated to the 4 wheels of car I. Then one tire from each brand is selected and the four are randomly allocated to car II, and so forth. Here are the results of that design.

```
Car:           I      II     III    IV
            -------------------------------
Brand         B(14)  D(11)  A(13)  C(9)
assignment    C(12)  C(12)  B(13)  D(9)
& loss in     A(17)  B(14)  D(11)  B(8)
thickness     D(13)  A(14)  C(10)  A(13)
(mils)
```

We use a so-called additive, two-way analysis of variance model to analyze these data; this is model 12.2.1 given on page 775 of Larsen and Marx.

First do some plots and compute some means.

```r
attach(tire2.df)
par(mfrow=c(2,2))
boxplot(split(Wear, Brand))
boxplot(split(Wear, Car))
tapply(Wear, Brand, mean)
interaction.plot(Brand, Car, Wear)
par(mfrow=c(1,1))
```

The interaction plot is new. What do we learn from it?

Now let us look at the inference from a two-way ANOVA.

```r
tire.aov <- aov(Wear ~ Brand + Car)
anova(tire.aov)
```

OUTPUT:

```
Response: Wear
          Df Sum Sq Mean Sq F value   Pr(>F)
Brand      3 30.688  10.229  7.9622 0.006685 **
Car        3 38.688  12.896 10.0378 0.003133 **
Residuals  9 11.563   1.285
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

Look at some residual plots to diagnose any problems with model assumptions.
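Notice how blocking pays off. The Brand sum of squares is the same as in the one-way analysis (30.688), but the car-to-car variation has been pulled out of the error term. The arithmetic below, using only the sums of squares printed in the two tables, shows where the much smaller residual (and hence the now-significant Brand F) comes from:

```r
# Sums of squares copied from the two ANOVA tables above.
ss.resid.crd  <- 50.250                  # residual SS, one-way (CRD) analysis
ss.car        <- 38.688                  # Car (block) SS, two-way analysis
ss.resid.rcbd <- ss.resid.crd - ss.car   # about 11.56, the RCBD residual SS
ms.resid.rcbd <- ss.resid.rcbd / 9       # 9 residual df
10.229 / ms.resid.rcbd                   # F for Brand, about 7.96
```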

```r
par(mfrow=c(2,1))
qqnorm(resid(tire.aov))
qqline(resid(tire.aov))
plot(fitted(tire.aov), resid(tire.aov))
par(mfrow=c(1,1))
```

Now a Tukey analysis:

```r
TukeyHSD(tire.aov)
plot(TukeyHSD(tire.aov, "Brand"))
```
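The cutoff behind `TukeyHSD` can be reproduced by hand from the studentized range distribution. This is a sketch, taking the residual mean square and degrees of freedom from the two-way table above:

```r
# Tukey honest-significant-difference cutoff for comparing two brand means.
mse <- 1.285                              # residual MS from the two-way table
n   <- 4                                  # observations per brand
q   <- qtukey(0.95, nmeans = 4, df = 9)   # studentized range quantile
hsd <- q * sqrt(mse / n)                  # pairs of brand means further apart
hsd                                       # than this differ at the 5% level
```

With the brand means 14.25, 12.25, 10.75, and 11.00, you can check which pairwise differences exceed this cutoff and compare with the intervals the plot displays.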

The data sets follow:

```
# copy and paste and in R type:
# tire1.df <- read.table(file='clipboard', header=T)
   Wear Brand
1    17     A
2    14     B
3    12     C
4    13     D
5    14     A
6    14     B
7    12     C
8    11     D
9    13     A
10   13     B
11   10     C
12   11     D
13   13     A
14    8     B
15    9     C
16    9     D
```

```
# copy and paste and in R type:
# tire2.df <- read.table(file='clipboard', header=T)
   Wear Brand Car
1    17     A   I
2    14     B   I
3    12     C   I
4    13     D   I
5    14     A  II
6    14     B  II
7    12     C  II
8    11     D  II
9    13     A III
10   13     B III
11   10     C III
12   11     D III
13   13     A  IV
14    8     B  IV
15    9     C  IV
16    9     D  IV
```