Month: June 2017

Some Critical Points to Know When Using Warfarin

June 30, 2017 Anticoagulant Therapy, Hematology, Laboratory Medicine

PT/INR and Anticoagulation Status

For the vast majority of patients, monitoring is done using the prothrombin time with international normalized ratio (PT/INR), which reflects the degree of anticoagulation due to depletion of vitamin K-dependent coagulation factors. However, keep in mind that the PT/INR in a patient on warfarin may not reflect the total anticoagulation status of the patient in certain settings:

  • First few days of warfarin initiation

The initial prolongation of the PT/INR during the first one to three days of warfarin initiation does not reflect full anticoagulation, because only the vitamin K-dependent factor with the shortest half-life (factor VII) is initially depleted; other functional vitamin K-dependent factors with longer half-lives (e.g., prothrombin) continue to circulate. The full anticoagulant effect of a VKA generally occurs within approximately one week after the initiation of therapy and results in equilibrium levels of functional factors II, IX, and X at approximately 10 to 35 percent of normal.

  • Liver disease

Individuals with liver disease frequently have abnormalities in routine laboratory tests of coagulation, including prolongation of the PT, INR, and aPTT, along with mild thrombocytopenia and elevated D-dimer, especially when liver synthetic function is more significantly impaired and portal pressures are increased. However, these tests are very poor at predicting the risk of bleeding in individuals with liver disease because they only reflect changes in procoagulant factors.

  • Baseline prolonged PT/INR

Some patients with the antiphospholipid antibody syndrome (APS) have marked fluctuations in the INR that make monitoring of the degree of anticoagulation difficult.

Time in the Therapeutic Range (TTR)

For patients who are stably anticoagulated with a VKA, the percentage of time in the therapeutic range (TTR) is often used as a measure of the quality of anticoagulation control. TTR can be calculated using a variety of methods. The TTR reported depends on the method of calculation as well as the INR range considered “therapeutic.” A TTR of 65 to 70 percent is considered to be a reasonable and achievable degree of INR control in most settings.
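
One widely used way to compute TTR is the Rosendaal linear-interpolation method, which assumes the INR changes linearly between successive measurements and counts the fraction of person-days spent inside the therapeutic range. The Python sketch below is a minimal illustration of that idea; the dates, INR values, and the 2.0 to 3.0 range are hypothetical and not drawn from any particular patient or clinic protocol.

```python
from datetime import date

def ttr_rosendaal(visits, low=2.0, high=3.0):
    """Estimate time in therapeutic range (TTR) by linear interpolation:
    assume the INR changes linearly between measurements and report the
    percentage of days spent inside [low, high]."""
    days_in_range = 0.0
    days_total = 0.0
    for (d0, inr0), (d1, inr1) in zip(visits, visits[1:]):
        span = (d1 - d0).days
        if span <= 0:
            continue  # skip same-day or out-of-order entries
        days_total += span
        for step in range(span):
            # interpolated INR for each day between the two measurements
            inr = inr0 + (inr1 - inr0) * step / span
            if low <= inr <= high:
                days_in_range += 1
    return 100.0 * days_in_range / days_total if days_total else float("nan")

# Hypothetical INR record: (measurement date, INR value)
visits = [(date(2017, 6, 1), 1.8), (date(2017, 6, 8), 2.4),
          (date(2017, 6, 22), 3.4), (date(2017, 7, 6), 2.6)]
print(f"TTR ~ {ttr_rosendaal(visits):.0f}%")
```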

Factors Affecting the Dose-Response Relationship Between Warfarin and INR

  • Nutritional status, including vitamin K intake
  • Medication adherence
  • Genetic variation
  • Drug interactions
  • Smoking and alcohol use
  • Renal, hepatic, and cardiac function
  • Hypermetabolic states

In addition, female sex, increased age, and previous INR instability or hemorrhage have been associated with a greater sensitivity to warfarin and/or an increased risk of bleeding.

Dietary Factors

Vitamin K intake – Individuals anticoagulated with warfarin generally are sensitive to fluctuations in vitamin K intake, and adequate INR control requires close attention to the amount of vitamin K ingested from dietary and other sources. The goal of monitoring vitamin K intake is to maintain a moderate, constant level of intake rather than to eliminate vitamin K from the diet. Specific guidance from anticoagulation clinics may vary, but a general principle is that maintaining a consistent level of vitamin K intake should not interfere with a nutritious diet. Patients taking warfarin may wish to be familiar with possible sources of vitamin K (in order to avoid inconsistency).

Of note, intestinal microflora produce vitamin K2, and one of the ways antibiotics contribute to variability in the prothrombin time/INR is by reducing intestinal vitamin K synthesis.

Cranberry juice and grapefruit juice have very low vitamin K content but have been reported to affect VKA anticoagulation in some studies, and some anticoagulation clinics advise patients to limit their intake to one or two servings (or less) per day.

Medication Adherence

Medication adherence for vitamin K antagonists can be challenging due to the need for frequent monitoring and dose adjustments, dietary restrictions, medication interactions, and, in some cases, use of different medication doses on different days to achieve the optimal weekly intake. Reducing the number of medications prescribed may be helpful, if this can be done safely.

Drug Interactions

A large number of drugs interact with vitamin K antagonists by a variety of mechanisms, and additional interacting drugs continue to be introduced. Determining clinically important drug interactions is challenging because the evidence substantiating claims for some drugs is very limited; in other cases, the evidence is strong but the magnitude of effect is small. Patients should be advised to discuss any new medication or over-the-counter supplement with the clinician managing their anticoagulation, and clinicians are advised to confirm whether a clinically important drug-drug interaction has been reported when introducing a new medication in a patient anticoagulated with a VKA.

Smoking and Excess Alcohol

The effect of chronic cigarette smoking on warfarin metabolism was evaluated in a systematic review that included 13 studies involving over 3000 patients. A meta-analysis of the studies that evaluated warfarin dose requirement found that smoking increased the dose requirement by 12 percent, corresponding to a requirement of 2.26 additional mg of warfarin per week. However, two studies that evaluated the effect of chronic smoking on INR control found equivalent control in smokers and non-smokers.

The mechanism by which cigarette smoking interacts with warfarin metabolism is enhanced drug clearance, caused by induction of hepatic cytochrome P-450 activity by polycyclic aromatic hydrocarbons in cigarette smoke. Nicotine itself is not thought to alter warfarin metabolism.

The interaction between excess alcohol use and warfarin anticoagulation was evaluated in a case-control study that compared alcohol use in 265 individuals receiving warfarin who had major bleeding with 305 controls from the same cohort receiving warfarin who did not have major bleeding. The risk of major bleeding was increased with moderate to severe alcohol use and with heavy episodic drinking.

The mechanisms by which alcohol use interacts with warfarin anticoagulation are many, and the contribution of various factors depends greatly on the amount of intake and the severity of associated liver disease. Excess alcohol consumption may interfere with warfarin metabolism. Severe liver disease may also be associated with coagulopathy, thrombocytopenia, and/or gastrointestinal varices, all of which increase bleeding risk independent of effects on warfarin metabolism.

Comorbidities

The major comorbidities that affect anticoagulation control are hepatic disease, renal dysfunction, and heart failure. In addition, other comorbidities such as metastatic cancer, diabetes, or uncontrolled hyperthyroidism may also play a role.

The liver is the predominant site of warfarin metabolism. It is also the source of the majority of coagulation factors. Thus, liver disease can affect warfarin dosage, INR control, and coagulation in general. Importantly, individuals with severe liver disease are not “auto-anticoagulated,” because they often have a combination of abnormalities that both impair hemostasis and increase thrombotic risk.

Warfarin undergoes partial excretion in the kidney. Patients with kidney disease can receive warfarin, and management is generally similar to the population without renal impairment; however, dose requirement may be lower.

Heart failure has been shown to interfere with INR stabilization.

Acute illnesses, especially infections and gastrointestinal illnesses, may alter anticoagulation through effects on vitamin K intake, VKA metabolism, and medication interactions.

Genetic Factors

Genetic polymorphisms have been implicated in altered sensitivity to warfarin and other vitamin K antagonists.

The Differential Diagnosis of Abnormal Serum Uric Acid Concentration

June 28, 2017 Differential Diagnosis, Laboratory Medicine

Reference range: 4.0-8.5 mg/dL or 237-506 µmol/L for males >17 years old; 2.7-7.3 mg/dL or 161-434 µmol/L for females >17 years old
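
Converting between the two unit systems is just a scaling by the molar mass of uric acid (about 168.1 g/mol, so 1 mg/dL ≈ 59.5 µmol/L). The small Python helper below is only an illustrative sketch of that arithmetic; the function name is made up.

```python
URIC_ACID_MOLAR_MASS = 168.11  # g/mol

def uric_acid_mg_dl_to_umol_l(value_mg_dl: float) -> float:
    """Convert serum uric acid from mg/dL to umol/L.
    1 mg/dL = 10 mg/L; dividing by the molar mass gives mmol/L, and x1000 gives umol/L."""
    return value_mg_dl * 10 / URIC_ACID_MOLAR_MASS * 1000

# Lower and upper limits of the male reference range quoted above
print(round(uric_acid_mg_dl_to_umol_l(4.0)))   # ~238
print(round(uric_acid_mg_dl_to_umol_l(8.5)))   # ~506
```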

Uric acid is the metabolic end-product of the purine bases of DNA. In humans, uric acid is not metabolized further and is eliminated unchanged by renal excretion (the net result of filtration, secretion, and reabsorption). It is completely filtered at the renal glomerulus and is almost completely reabsorbed. Most excreted uric acid (80% to 86%) is the result of active tubular secretion at the distal end of the proximal convoluted tubule.

As urine becomes more alkaline, more uric acid is excreted because the percentage of ionized uric acid molecules increases. Conversely, reabsorption of uric acid within the proximal tubule is enhanced and uric acid excretion is suppressed as urine becomes more acidic.

When serum uric acid exceeds the upper limit of the reference range, the biochemical diagnosis of hyperuricemia can be made. Hyperuricemia can result from an overproduction of purines and/or reduced renal clearance of uric acid. When specific factors affecting the normal disposition of uric acid cannot be identified, the problem is diagnosed as primary hyperuricemia. When specific factors can be identified, the problem is referred to as secondary hyperuricemia.

As the serum urate concentration increases above the upper limit of the reference range, the risk of developing clinical signs and symptoms of gouty arthritis, renal stones, uric acid nephropathy, and subcutaneous tophaceous deposits increases. However, many hyperuricemic patients are asymptomatic. If a patient is hyperuricemic, it is important to determine if there are potential causes of false laboratory test elevation and contributing extrinsic factors.

Exogenous Causes

Medications can cause hyperuricemia via 1) decreased renal excretion resulting from drug-induced renal dysfunction; 2) decreased renal excretion resulting from drug competition with uric acid for secretion within the kidney tubules; and 3) rapid destruction of large numbers of cells during antineoplastic therapy.

Diet. High-protein weight-reduction programs can greatly increase the amount of ingested purines and subsequent uric acid production.

Endogenous Causes

Endogenous causes of hyperuricemia include diseases, abnormal physiological conditions that may or may not be disease related, and genetic abnormalities. Diseases include 1) renal diseases (e.g., renal failure); 2) disorders associated with increased destruction of nucleoproteins; and 3) endocrine abnormalities (e.g., hypothyroidism, hypoparathyroidism, pseudohypoparathyroidism, nephrogenic diabetes insipidus, and Addison disease).

Predisposing abnormal physiological conditions include shock, hypoxia, lactic acidosis, diabetic ketoacidosis, alcoholic ketosis, and strenuous muscular exercise.

Genetic abnormalities include Lesch-Nyhan syndrome, gout with partial deficiency of the enzyme hypoxanthine-guanine phosphoribosyltransferase, increased phosphoribosyl pyrophosphate (PRPP) synthetase activity, and glycogen storage disease type I.

How to compute the expected 95% CI

June 22, 2017 Medical Statistics

[Table 2-1. Abbreviated t score table: values of t corresponding to given areas of the distribution for various degrees of freedom.]

The Random Sampling Distribution of Means

Imagine you have a hat containing 100 cards, numbered from 0 to 99. At random, you take out five cards, record the number written on each one, and find the mean of these five numbers. Then you put the cards back in the hat and draw another random sample, repeating the same process for about 10 minutes.

Do you expect that the means of each of these samples will be exactly the same? Of course not. Because of sampling error, they vary somewhat. If you plot all the means on a frequency distribution, the sample means form a distribution, called the random sampling distribution of means. If you actually try this, you will note that this distribution looks pretty much like a normal distribution. If you continued drawing samples and plotting their means ad infinitum, you would find that the distribution actually becomes a normal distribution! This holds true even if the underlying population is not itself normally distributed: in our population of cards in the hat, there is just one card with each number, so the shape of the population distribution is actually rectangular, yet the random sampling distribution of means still tends to be normal.
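
If you would rather not spend ten minutes drawing cards, the experiment is easy to simulate. The Python sketch below is one way to do it (the number of repetitions and the crude text histogram are arbitrary choices); it repeatedly draws five cards from the 0 to 99 "hat" and shows how the sample means pile up around the population mean of 49.5.

```python
import random
from collections import Counter

random.seed(0)
hat = list(range(100))                 # the cards, numbered 0 to 99
sample_means = []
for _ in range(10_000):                # repeat the draw many times
    draw = random.sample(hat, 5)       # five cards, then back in the hat
    sample_means.append(sum(draw) / len(draw))

# Crude text histogram of the sample means, in bins of width 10
bins = Counter(int(m // 10) * 10 for m in sample_means)
for lo in sorted(bins):
    print(f"{lo:2d}-{lo + 9:2d} | {'#' * (bins[lo] // 100)}")
```

The printout is roughly bell-shaped and centered near 49.5, even though the population of card values itself is rectangular.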

These principles are stated by the central limit theorem, which states that the random sampling distribution of means will always tend to be normal, irrespective of the shape of the population distribution from which the samples were drawn. According to the theorem, the mean of the random sampling distribution of means is equal to the mean of the original population.

Like all distributions, the random sampling distribution of means not only has a mean, but also has a standard deviation. This particular standard deviation, the standard deviation of the random sampling distribution of means, is the standard deviation of the population of all the sample means. It has its own name: standard error, or standard error of the mean. It is a measure of the extent to which the sample means deviate from the true population mean.

When repeated random samples are drawn from a population, most of the means of those samples are going to cluster around the original population mean. If the samples each consisted of just two cards (n = 2), what would happen to the shape of the random sampling distribution of means? Clearly, with an n of just 2, there would be quite a high chance of any particular sample mean falling out toward the tails of the distribution, giving a broader, fatter shape to the curve, and hence a higher standard error. On the other hand, if the samples consisted of 25 cards each (n = 25), it would be very unlikely for many of their means to lie far from the center of the curve. Therefore, there would be a much thinner, narrower curve and a lower standard error.

So the shape of the random sampling distribution of means, as reflected by its standard error, is affected by the size of the samples. In fact, the standard error is equal to the population standard deviation (σ) divided by the square root of the size of the samples (n).
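
A quick simulation makes the comparison between n = 2 and n = 25 concrete. The sketch below samples with replacement (an assumption made so that the σ/√n formula applies exactly) and compares the observed spread of the sample means with the predicted standard error; the seed and number of repetitions are arbitrary.

```python
import random
import statistics

random.seed(1)
population = list(range(100))
sigma = statistics.pstdev(population)        # population standard deviation (~28.9)

for n in (2, 25):
    means = [statistics.mean(random.choices(population, k=n))
             for _ in range(20_000)]
    observed = statistics.stdev(means)       # spread of the sample means
    predicted = sigma / n ** 0.5             # sigma / sqrt(n)
    print(f"n = {n:2d}: observed SE = {observed:5.2f}, predicted SE = {predicted:5.2f}")
```

With n = 2 the standard error is around 20; with n = 25 it shrinks to around 6, matching σ/√n.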

Using the Standard Error

$$SE = \frac{\sigma}{\sqrt{n}}$$

Because the random sampling distribution of means is normal, the z score can be expressed as follows. This makes it possible to find the limits between which 95% of all possible random sample means would be expected to fall (z score = ±1.96).

$$z = \frac{\bar{X} - \mu}{SE} = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

Estimating the Mean of a Population

It has been shown that 95% of all possible sample means will lie within approximately ±2 (or, more exactly, ±1.96) standard errors of the population mean. The sample mean therefore lies within ±1.96 standard errors of the population mean 95% of the time; conversely, the population mean lies within ±1.96 standard errors of the sample mean 95% of the time. These limits of ±1.96 standard errors are called the confidence limits.

$$95\%\ \text{confidence limits} = \bar{X} \pm 1.96 \times SE$$

Therefore, 95% confidence limits are approximately equal to the sample mean plus or minus two standard errors. The difference between the upper and lower confidence limits is called the confidence interval – sometimes abbreviated as CI. Researchers obviously want the confidence interval to be as narrow as possible. The formula for confidence limits shows that to make the confidence interval narrower (for a given level of confidence, such as 95%), the standard error must be made smaller.
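
Putting this to work, the sketch below computes a 95% confidence interval for a hypothetical sample mean, assuming for the moment that the population standard deviation σ is known; all of the numbers are invented for illustration.

```python
import math

sample_mean = 52.0   # mean of a hypothetical sample
sigma = 28.9         # population standard deviation, assumed known
n = 25               # sample size

se = sigma / math.sqrt(n)           # standard error of the mean
lower = sample_mean - 1.96 * se     # lower 95% confidence limit
upper = sample_mean + 1.96 * se     # upper 95% confidence limit
print(f"95% CI: {lower:.1f} to {upper:.1f} (width {upper - lower:.1f})")
```

Quadrupling the sample size would halve the standard error and therefore halve the width of the interval.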

Estimating the Standard Error

According to the formula above, we cannot calculate standard error unless we know population standard deviation (σ). In practice, σ will not be known: researchers hardly ever know the standard deviation of the population (and if they did, they would probably not need to use inferential statistics anyway).

As a result, the standard error cannot be calculated, and so z scores cannot be used. However, the standard error can be estimated using data that are available from the sample alone. The resulting statistic is the estimated standard error of the mean, usually called the estimated standard error, as shown by the formula below.

$$SE_{est} = \frac{S}{\sqrt{n}}$$

where S is the sample standard deviation.

t Scores

The estimated standard error is used to find a statistic, t, that can be used in place of z score. The t score, rather than the z score, must be used when making inferences about means that are based on estimates of population parameters rather than on the population parameters themselves. The t score is Student’s t, which is calculated in much the same way as z score. But while z was expressed in terms of the number of standard errors by which a sample mean lies above or below the population mean, t is expressed in terms of the number of estimated standard errors by which the sample mean lies above or below the population mean.

$$t = \frac{\bar{X} - \mu}{SE_{est}} = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$

Just as z score tables give the proportions of the normal distribution that lie above and below any given z score, t score tables provide the same information for any given t score. However, there is one difference: while the value of z for any given proportion of the distribution is constant, the value of t for any given proportion is not constant; it varies according to sample size. When the sample size is large (n > 100), the values of t and z are similar, but as samples get smaller, t and z scores become increasingly different.

Degrees of Freedom and t Tables

Table 2-1 (above) is an abbreviated t score table that shows the values of t corresponding to different areas under the distribution for various sample sizes. Sample size (n) is not stated directly in t score tables; instead, the tables express sample size in terms of degrees of freedom (df). The mathematical concept behind degrees of freedom is complex and not needed for the purposes of the USMLE or of understanding statistics in medicine: for present purposes, df can be defined as simply equal to n – 1. Therefore, to determine the values of t that delineate the central 95% of the sampling distribution of means based on a sample size of 15, we would look in the table for the appropriate value of t for df = 14; this is sometimes written as t14. Table 2-1 shows that this value is 2.145.
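
If SciPy is available, the tabulated values can be reproduced directly from the t distribution; the sketch below assumes SciPy is installed and looks up the two-tailed 95% critical value of t for df = 14 alongside the corresponding z value.

```python
from scipy import stats

# Critical value cutting off the central 95% of the t distribution, df = 14
t_crit = stats.t.ppf(0.975, df=14)
z_crit = stats.norm.ppf(0.975)

print(f"t (df = 14) = {t_crit:.3f}")   # ~2.145, matching Table 2-1
print(f"z           = {z_crit:.3f}")   # ~1.960; t approaches z as df grows
```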

As n becomes larger (100 or more), the values of t are very close to the corresponding values of z.

Why the Odds Ratio Can Be Used as an Estimate for Relative Risk in a Case-Control Study

June 10, 2017 Clinical Trials, Research

The data in a case-control study represent two samples: The cases are drawn from a population of people who have the disease and the controls from a population of people who do not have the disease. The predictor variable (risk factor) is measured, and the results can be summarized in a 2 X 2 table like the following one:

                          Disease (cases)    No disease (controls)
  Risk factor present            a                     b
  Risk factor absent             c                     d

If this 2 X 2 table represented data from a cohort study, then the incidence of the disease in those with the risk factor would be a/(a + b) and the relative risk would be simply [a/(a + b)]/[c/(c + d)]. However, it is not appropriate to compute either incidence or relative risk in this way in a case-control study because the two samples are not drawn from the population in the same proportions. Usually, there are roughly equal numbers of cases and controls in the study samples but many fewer cases than controls in the population. Instead, relative risk in a case-control study can be approximated by the odds ratio, computed as the cross-product of the 2 X 2 table, ad/bc.

This extremely useful fact is difficult to grasp intuitively but easy to demonstrate algebraically. Consider the situation for the full population, represented by a', b', c', and d':

                          Disease              No disease
  Risk factor present           a'                    b'
  Risk factor absent            c'                    d'

Here it is appropriate to calculate the risk of disease among people with the risk factor as a’/(a’ + b’), the risk among those without the risk factor as c’/(c’ + d’), and the relative risk as [a’/(a’ + b’)]/[c’/(c’ + d’)]. We have already discussed the fact that a’/(a’ + b’) is not equal to a/(a + b). However, if the disease is relatively uncommon in both those with and without the risk factor (as most are), then a’ is much smaller than b’, and c’ is much smaller than d’. This means that a’/(a’ + b’) is closely approximated by a’/b’ and that c’/(c’ + d’) is closely approximated by c’/d’. Therefore, the relative risk of the population can be approximated as follows:

$$RR = \frac{a'/(a' + b')}{c'/(c' + d')} \approx \frac{a'/b'}{c'/d'} = \frac{a'd'}{b'c'}$$

The latter term is the odds ratio of the population (literally, the ratio of the odds of disease in those with the risk factor, a’/b’, to the odds of disease in those without the risk factor, c’/d’).

a'/c' in the population equals a/c in the sample if the cases are representative of all cases in the population (i.e., have the same prevalence of the risk factor). Similarly, b'/d' equals b/d if the controls are representative.

Therefore, the population parameters in this last term can be replaced by the sample parameters, and we are left with the fact that the odds ratio observed in the sample, ad/bc, is a close approximation of the relative risk in the population [a’/(a’ + b’)]/[c’/(c’ + d’)], provided that the disease is rare.
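
The algebra is easy to check numerically. In the sketch below the population counts are invented purely for illustration, with the disease kept rare in both the exposed and unexposed groups; the odds ratio comes out very close to the relative risk.

```python
# Hypothetical full population (counts are made up for illustration)
a_, b_ = 90, 9_910      # risk factor present: diseased, not diseased
c_, d_ = 30, 9_970      # risk factor absent:  diseased, not diseased

relative_risk = (a_ / (a_ + b_)) / (c_ / (c_ + d_))
odds_ratio = (a_ * d_) / (b_ * c_)

print(f"RR = {relative_risk:.2f}")   # 3.00
print(f"OR = {odds_ratio:.2f}")      # ~3.02, close to RR because the disease is rare
```

Because a case-control sample preserves a/c and b/d (when cases and controls are representative), the same odds ratio would be obtained from the study sample, which is why it serves as an estimate of the relative risk.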

Why can't we calculate risk in a case-control study?

For most people, the risk of some particular outcome, being akin to a probability, makes more sense and is easier to interpret than the odds for that same outcome. To calculate the risk, you need to know two things: the total number exposed to the risk factor and the number of those who subsequently developed the outcome; you would then divide the latter by the former. In a cohort study, for example, you start with healthy individuals and follow them to measure the proportion of those exposed to the risk factor who subsequently developed the illness. This proportion is an estimate of the risk in the population.

However, in a case-control study, you select participants on the basis of whether they have some illness or condition or not. So you have one group composed of individuals who have had the illness and one group who have not, but both groups will contain individuals who were, and were not, exposed to the risk factor. Moreover, you can select whatever number of cases and controls you want. You could, for example, halve the number of cases and double the number of controls. This means that the totals of exposed and unexposed individuals, which you would otherwise need for your risk calculation, are meaningless. The result of this is that the population at risk cannot be estimated from a case-control study, and so risks and risk ratios cannot be calculated.