Factorial Designs

March 5, 2018 | Clinical Trials, Medical Statistics, Research

In this section we will describe the completely randomized factorial design. This design is commonly used when there are two or more factors of interest. Recall, in particular, the difference between an observational study and a designed experiment. Observational studies involve simply observing characteristics and taking measurements, as in a sample survey. A designed experiment involves imposing treatments on experimental units, controlling extraneous sources of variation that might affect the experiment, and then observing characteristics and taking measurements on the experimental units.

Also recall that in an experiment, the response variable is the characteristic of the experimental outcome that is measured or observed. A factor is a variable whose effect on the response variable is of interest to the experimenter. Generally a factor is a categorical variable whose possible values are referred to as the levels of the factor. In a single-factor experiment, we will assign experimental units to the treatments (or vice versa). Experimental units should be assigned to the treatments in such a way as to eliminate any bias that might be associated with the assignment. This is generally accomplished by randomly assigning the experimental units to the treatments.
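As a concrete illustration, the random assignment described above can be sketched in a few lines of Python; the unit and treatment labels here are made up for the example:

```python
import random

# Hypothetical example: 12 experimental units assigned at random,
# in equal numbers, to 3 treatments (labels are illustrative).
units = [f"unit_{i}" for i in range(1, 13)]
treatments = ["A", "B", "C"]

random.seed(42)        # fixed seed only makes the assignment reproducible
random.shuffle(units)  # a random order removes assignment bias

# Slice the shuffled units into equal-sized treatment groups.
group_size = len(units) // len(treatments)
assignment = {
    t: units[i * group_size:(i + 1) * group_size]
    for i, t in enumerate(treatments)
}

for t, group in assignment.items():
    print(t, group)
```

Because the shuffle is random, no characteristic of the units can systematically line up with a treatment group.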

In certain medical experiments, called clinical trials, randomization is essential. To compare two or more methods of treating illness, it is important to eliminate any bias that could be introduced by medical personnel assigning patients to the treatments in a nonrandom fashion. For example, a doctor might erroneously assign patients who exhibit less severe symptoms of the illness to a less risky treatment.

PS: Advantages of randomized design over other methods for selecting controls

  • First, randomization removes the potential of bias in the allocation of participants to the intervention group or to the control group. Such selection bias could easily occur, and cannot necessarily be prevented, in the non-randomized concurrent or historical control study because the investigator or the participant may influence the choice of intervention. This influence can be conscious or subconscious and can be due to numerous factors, including the prognosis of the participant. The direction of the allocation bias may go either way and can easily invalidate the comparison. This advantage of randomization assumes that the procedure is performed in a valid manner and that the assignment cannot be predicted.
  • Second, somewhat related to the first, is that randomization tends to produce comparable groups; that is, measured as well as unknown or unmeasured prognostic factors and other characteristics of the participants at the time of randomization will be, on the average, evenly balanced between the intervention and control groups. This does not mean that in any single experiment all such characteristics, sometimes called baseline variables or covariates, will be perfectly balanced between the two groups. However, it does mean that for independent covariates, whatever the detected or undetected differences that exist between the groups, the overall magnitude and direction of the differences will tend to be equally divided between the two groups. Of course, many covariates are strongly associated; thus, any imbalance in one would tend to produce imbalances in the others.
  • Third, the validity of statistical tests of significance is guaranteed. As has been stated, “although groups compared are never perfectly balanced for important covariates in any single experiment, the process of randomization makes it possible to ascribe a probability distribution to the difference in outcome between treatment groups receiving equal effective treatments and thus to assign significance levels to observed differences.” The validity of the statistical tests of significance is not dependent on the balance of prognostic factors between the randomized groups.

Often in clinical trials, double blind studies are used. In this type of study, patients (the experimental units) are randomly assigned to treatments, and neither the doctor nor the patient knows which treatment has been assigned to the patient. This is an effective way to eliminate bias in treatment assignment so that the treatment effects are not confounded (associated) with other nonexperimental, uncontrolled factors.

Factorial designs involve two or more factors. Consider the experiment in this example, in which the researchers studied the effects of two factors (hydrophilic polymer and irrigation regimen) on weight gain (the response variable) of Golden Torch cacti. The two levels of the polymer factor were: used and not used. The irrigation regimen had five levels to indicate the amount of water usage: none, light, medium, heavy, and very heavy. This is an example of a two-factor or two-way factorial design.

In this experiment every level of polymer occurred with every level of irrigation regimen, for a total of 2 * 5 = 10 treatments. Often these 10 treatments are called treatment combinations to indicate that we combine the levels of the various factors together to obtain the actual collection of treatments. Since, in this case, every level of one factor is combined with every level of the other factor, we say that the levels of one factor are crossed with the levels of the other factor. When all the possible treatment combinations obtained by crossing the levels of the factors are included in the experiment, we call the design a complete factorial design, or simply a factorial design.
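The crossing of factor levels can be reproduced directly with Python's itertools.product, using the levels from the cactus example:

```python
from itertools import product

# The two factors from the cactus example: every polymer level is
# crossed with every irrigation level.
polymer = ["used", "not used"]
irrigation = ["none", "light", "medium", "heavy", "very heavy"]

treatment_combinations = list(product(polymer, irrigation))
print(len(treatment_combinations))  # prints 10, i.e. 2 * 5
```

Each element of the list is one treatment combination, e.g. ("used", "light").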

It is possible to extend the two-way factorial design to include more factors. For example, in the Golden Torch cacti experiment, the amount of sunlight the cacti receive could have an effect on weight gain. If the amount of sunlight is controlled in the two-way study so that all plants receive the same amount of sunlight, then the amount of sunlight would not be considered a factor in the experiment.

However, since the amount of sunlight a cactus receives might have an effect on its growth, the experimenter might want to introduce this additional factor. Suppose we consider three levels of sunlight: high, medium, and low. The levels of sunlight could be achieved by placing screens of various mesh sizes over the cacti. If amount of sunlight is added as a third factor, there would be 2 * 5 * 3 = 30 different treatment combinations in a complete factorial design.

Possibly we could add even more factors to the experiment to take into account other factors that might affect weight gain of the cacti. Adding more factors will increase the number of treatment combinations for the experiment (unless the added factor has only one level). In general, the total number of treatment combinations for a complete factorial design is the product of the numbers of levels of all factors in the experiment.

Obviously, as the number of factors increases, the number of treatment combinations increases. A large number of factors can result in so many treatment combinations that the experiment is unwieldy, too costly, or too time consuming to carry out. Most complete factorial designs involve only two or three factors.

To handle many factors, statisticians have devised experimental designs that use only a fraction of the total number of possible treatment combinations. These designs are called fractional factorial designs and are usually restricted to the case of all factors having two or three levels each. Fractional factorial designs cannot provide as much information as a complete factorial design, but they are very useful when a large number of factors is involved and the number of experimental units is limited by availability, cost, time, or other considerations. Fractional factorial designs are beyond the scope of this thread.

Once the treatment combinations are determined, the experimental units need to be assigned to the treatment combinations. In a completely randomized design, the experimental units are randomly assigned to the treatment combinations. If this random assignment is not done or is not possible, the treatment effects might become confounded with other uncontrolled factors that would make it difficult or impossible to determine whether an effect is due to the treatment or due to the confounding with uncontrolled factors.

Besides the random assignment of experimental units to treatment combinations, it is important that we use randomization in other ways when conducting an experiment. Often experiments are conducted in sequence. One treatment combination is applied to an experimental unit, and then the next treatment combination is applied to the next experimental unit, and so forth. It is essential that the order in which the experiments are conducted be randomized.

For example, consider an experiment in which measurements are made that are sensitive to heat or humidity. If all experiments associated with the first level of a factor are conducted on a hot and humid day, all experiments associated with the second level of the factor are conducted on a cooler, less humid day, and so on, then the factor effect is confounded with the heat/humidity conditions on the days that the experiments are conducted. If the analysis indicates an effect due to the factor, we do not know whether there is actually a factor effect or a heat/humidity effect (or both). Randomization of the order in which the experiments are conducted would help keep the heat/humidity effect from being confounded with any factor effect.
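A randomized run order is easy to generate. Here is a small sketch using the ten cactus treatment combinations; the seed is arbitrary and only makes the order reproducible:

```python
import random

# The 10 treatment combinations from the cactus example, each run once.
# Running them in a random order keeps day-to-day heat/humidity conditions
# from lining up with any one factor level.
runs = [(p, w) for p in ("used", "not used")
        for w in ("none", "light", "medium", "heavy", "very heavy")]

random.seed(7)
run_order = random.sample(runs, k=len(runs))  # a random permutation of the runs
for position, run in enumerate(run_order, start=1):
    print(position, run)
```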

Experimental and Classification Factors

In the description of designing experiments for factorial designs, we emphasized the idea of being able to assign experimental units to treatment combinations. If the experimental units are assigned randomly to the levels of a factor, the factor is called an experimental factor. If all the factors of a factorial design are experimental factors, we consider the study a designed experiment.

In some factorial studies, however, the experimental units cannot be assigned at random to the levels of a factor, as in the case when the levels of the factor are characteristics associated with the experimental units. A factor whose levels are characteristics of the experimental unit is called a classification factor. If all the factors of a factorial design are classification factors, we consider the study an observational study.

Consider, for instance, a household energy consumption study in which the response variable is household energy consumption and the factor of interest is the region of the United States in which a household is located. A household cannot be randomly assigned to a region of the country. The region of the country is a characteristic of the household and, thus, a classification factor. If we were to add home type as a second factor, the levels of this factor would also be a characteristic of a household, and, hence, home type would also be a classification factor. This two-way factorial design would be considered an observational study, since both of its factors are classification factors.

There are many studies that involve a mixture of experimental and classification factors. For example, in studying the effect of four different medications on relieving headache pain, the age of an individual might play a role in how long it takes before headache pain dissipates. Suppose a researcher decides to consider four age groups: 21 to 35 years old, 36 to 50 years old, 51 to 65 years old, and 66 years and older. Obviously, since age is a characteristic of an individual, age group is a classification factor.

Suppose that the researcher randomly selects 40 individuals from each age group and then randomly assigns 10 individuals in each age group to one of the four medications. Since each person is assigned at random to a medication, the medication factor is an experimental factor. Although one of the factors here is a classification factor and the other is an experimental factor, we would consider this a designed experiment.
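The two-stage selection and assignment just described can be sketched as follows; the participant pool sizes and labels are hypothetical:

```python
import random

random.seed(1)

# Hypothetical pools of volunteers, one pool per age group
# (age group is the classification factor).
age_groups = ["21-35", "36-50", "51-65", "66+"]
pools = {g: [f"{g}_person_{i}" for i in range(1, 101)] for g in age_groups}
medications = ["med_1", "med_2", "med_3", "med_4"]

design = {}
for g in age_groups:
    chosen = random.sample(pools[g], k=40)  # 40 individuals per age group
    random.shuffle(chosen)
    # Within each age group, 10 individuals go to each medication
    # (medication is the experimental factor).
    design[g] = {m: chosen[i * 10:(i + 1) * 10]
                 for i, m in enumerate(medications)}

print({g: {m: len(v) for m, v in d.items()} for g, d in design.items()})
```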

Fixed and Random Effect Factors

There is another important way to classify factors that depends on the way the levels of a factor are selected. If the levels of a factor are the only levels of interest to the researcher, then the factor is called a fixed effect factor. For example, in the Golden Torch cacti experiment, both factors (polymer and irrigation regimen) are fixed effect factors because the levels of each factor are the only levels of interest to the experimenter.

If the levels of a factor are selected at random from a collection of possible levels, and if the researcher wants to make inferences to the entire collection of possible levels, the factor is called a random effect factor. For example, consider a study to be done on the effect of different types of advertising on sales of a new sandwich at a national fast-food chain. The marketing group conducting the study feels that the city in which a franchise store is located might have an effect on sales. So they decide to include a city factor in the study, and randomly select eight cities from the collection of cities in which the company’s stores are located. They are not interested in these eight cities alone, but want to make inferences to the entire collection of cities. In this case the city factor is a random effect factor.
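Selecting the eight study cities at random from the full collection is a single sampling step; the list of candidate cities below is made up for the example:

```python
import random

# Hypothetical list of cities in which the chain operates. The eight
# study cities are a random sample, so "city" is a random effect factor.
all_cities = [f"city_{i}" for i in range(1, 51)]

random.seed(3)
study_cities = random.sample(all_cities, k=8)  # sample without replacement
print(study_cities)
```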

Some Critical Points to Know When Using Warfarin

June 30, 2017 | Anticoagulant Therapy, Hematology, Laboratory Medicine

PT/INR and Anticoagulation Status

For the vast majority of patients, monitoring is done using the prothrombin time with international normalized ratio (PT/INR), which reflects the degree of anticoagulation due to depletion of vitamin K-dependent coagulation factors. However, keep in mind that the PT/INR in a patient on warfarin may not reflect the total anticoagulation status of the patient in certain settings:

  • First few days of warfarin initiation

The initial prolongation of the PT/INR during the first one to three days of warfarin initiation does not reflect full anticoagulation, because only the factor with the shortest half-life is initially depleted; other functional vitamin K-dependent factors with longer half-lives (e.g., prothrombin) continue to circulate. The full anticoagulation effect of a VKA generally occurs within approximately one week after the initiation of therapy and results in equilibrium levels of functional factors II, IX, and X at approximately 10 to 35 percent of normal.

  • Liver disease

Individuals with liver disease frequently have abnormalities in routine laboratory tests of coagulation, including prolongation of the PT, INR, and aPTT, along with mild thrombocytopenia and elevated D-dimer, especially when liver synthetic function is more significantly impaired and portal pressures are increased. However, these tests are very poor at predicting the risk of bleeding in individuals with liver disease because they only reflect changes in procoagulant factors.

  • Baseline prolonged PT/INR

Some patients with the antiphospholipid antibody syndrome (APS) have marked fluctuations in the INR that make monitoring of the degree of anticoagulation difficult.

Time in the Therapeutic Range (TTR)

For patients who are stably anticoagulated with a VKA, the percentage of time in the therapeutic range (TTR) is often used as a measure of the quality of anticoagulation control. TTR can be calculated using a variety of methods. The TTR reported depends on the method of calculation as well as the INR range considered “therapeutic.” A TTR of 65 to 70 percent is considered to be a reasonable and achievable degree of INR control in most settings.
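As an illustration of one common calculation method, here is a minimal sketch of TTR by linear interpolation between successive INR measurements (the Rosendaal method), assuming a therapeutic range of INR 2.0 to 3.0; the visit days and INR values are invented for the example, not clinical data:

```python
def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
    """Percent of time in range, interpolating the INR linearly
    between consecutive visits (Rosendaal method, sketched)."""
    in_range = 0
    total = 0
    for j in range(len(days) - 1):
        span = days[j + 1] - days[j]
        for k in range(span):
            # assumed linear drift of the INR on each day between visits
            inr = inrs[j] + (inrs[j + 1] - inrs[j]) * k / span
            total += 1
            if low <= inr <= high:
                in_range += 1
    return 100 * in_range / total

days = [0, 7, 21, 35]        # illustrative visit days
inrs = [1.8, 2.4, 3.4, 2.6]  # illustrative measured INR at each visit
print(round(ttr_rosendaal(days, inrs), 1))
```

As the text notes, the TTR reported depends on both the calculation method and the INR range taken as therapeutic, so figures from different clinics are not always directly comparable.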

Factors Affecting the Dose-Response Relationship Between Warfarin and INR

  • Nutritional status, including vitamin K intake
  • Medication Adherence
  • Genetic variation
  • Drug interactions
  • Smoking and alcohol use
  • Renal, hepatic, and cardiac function
  • Hypermetabolic states

In addition, female sex, increased age, and previous INR instability or hemorrhage have been associated with a greater sensitivity to warfarin and/or an increased risk of bleeding.

Dietary Factors

Vitamin K intake – Individuals anticoagulated with warfarin generally are sensitive to fluctuations in vitamin K intake, and adequate INR control requires close attention to the amount of vitamin K ingested from dietary and other sources. The goal of monitoring vitamin K intake is to maintain a moderate, constant level of intake rather than to eliminate vitamin K from the diet. Specific guidance from anticoagulation clinics may vary, but a general principle is that maintaining a consistent level of vitamin K intake should not interfere with a nutritious diet. Patients taking warfarin may wish to be familiar with possible sources of vitamin K (in order to avoid inconsistency).

Of note, intestinal microflora produce vitamin K2, and one of the ways antibiotics contribute to variability in the prothrombin time/INR is by reducing intestinal vitamin K synthesis.

Cranberry juice and grapefruit juice have very low vitamin K content but have been reported to affect VKA anticoagulation in some studies, and some anticoagulation clinics advise patients to limit their intake to one or two servings (or less) per day.

Medication Adherence

Medication adherence for vitamin K antagonists can be challenging due to the need for frequent monitoring and dose adjustments, dietary restrictions, medication interactions, and, in some cases, use of different medication doses on different days to achieve the optimal weekly intake. Reducing the number of medications prescribed may be helpful, if this can be done safely.

Drug Interactions

A large number of drugs interact with vitamin K antagonists by a variety of mechanisms, and additional interacting drugs continue to be introduced. Determining clinically important drug interactions is challenging because the evidence substantiating claims for some drugs is very limited; in other cases, the evidence is strong but the magnitude of effect is small. Patients should be advised to discuss any new medication or over-the-counter supplement with the clinician managing their anticoagulation, and clinicians are advised to confirm whether a clinically important drug-drug interaction has been reported when introducing a new medication in a patient anticoagulated with a VKA.

Smoking and Excess Alcohol

The effect of chronic cigarette smoking on warfarin metabolism was evaluated in a systematic review that included 13 studies involving over 3000 patients. A meta-analysis of the studies that evaluated warfarin dose requirement found that smoking increased the dose requirement by 12 percent, corresponding to a requirement of 2.26 additional mg of warfarin per week. However, two studies that evaluated the effect of chronic smoking on INR control found equivalent control in smokers and non-smokers.

Cigarette smoking interacts with warfarin metabolism by enhancing drug clearance through induction of hepatic cytochrome P-450 activity by polycyclic aromatic hydrocarbons in cigarette smoke. Nicotine itself is not thought to alter warfarin metabolism.

The interaction between excess alcohol use and warfarin anticoagulation was evaluated in a case-control study that compared alcohol use in 265 individuals receiving warfarin who had major bleeding with 305 controls from the same cohort receiving warfarin who did not have major bleeding. The risk of major bleeding was increased with moderate to severe alcohol use and with heavy episodic drinking.

The mechanisms by which alcohol use interacts with warfarin anticoagulation are many, and the contribution of various factors depends greatly on the amount of intake and the severity of associated liver disease. Excess alcohol consumption may interfere with warfarin metabolism. Severe liver disease may also be associated with coagulopathy, thrombocytopenia, and/or gastrointestinal varices, all of which increase bleeding risk independent of effects on warfarin metabolism.


Comorbidities

The major comorbidities that affect anticoagulation control are hepatic disease, renal dysfunction, and heart failure. In addition, other comorbidities such as metastatic cancer, diabetes, or uncontrolled hyperthyroidism may also play a role.

The liver is the predominant site of warfarin metabolism. It is also the source of the majority of coagulation factors. Thus, liver disease can affect warfarin dosage, INR control, and coagulation in general. Importantly, individuals with severe liver disease are not “auto-anticoagulated,” because they often have a combination of abnormalities that both impair hemostasis and increase thrombotic risk.

Warfarin undergoes partial excretion in the kidney. Patients with kidney disease can receive warfarin, and management is generally similar to the population without renal impairment; however, dose requirement may be lower.

Heart failure has been shown to interfere with INR stabilization.

Acute illnesses, especially infections and gastrointestinal illnesses, may alter anticoagulation through effects on vitamin K intake, VKA metabolism, and medication interactions.

Genetic Factors

Genetic polymorphisms have been implicated in altered sensitivity to warfarin and other vitamin K antagonists.