Laboratory Medicine

Inherited Variation and Polymorphism in DNA

August 3, 2017 | Cytogenetics, Laboratory Medicine, Molecular Biology, Pharmacogenetics

The original Human Genome Project and the subsequent study of now many thousands of individuals worldwide have provided a vast amount of DNA sequence information. With this information in hand, one can begin to characterize the types and frequencies of polymorphic variation found in the human genome and to generate catalogues of human DNA sequence diversity around the globe. DNA polymorphisms can be classified according to how the DNA sequence varies between the different alleles.

Single Nucleotide Polymorphisms

The simplest and most common of all polymorphisms are single nucleotide polymorphisms (SNPs). A locus characterized by a SNP usually has only two alleles, corresponding to the two different bases occupying that particular location in the genome. As mentioned previously, SNPs are common and are observed on average once every 1000 bp in the genome. However, the distribution of SNPs is uneven around the genome; many more SNPs are found in noncoding parts of the genome, in introns and in sequences that are some distance from known genes. Nonetheless, there is still a significant number of SNPs that do occur in genes and other known functional elements in the genome. For the set of protein-coding genes, over 100,000 exonic SNPs have been documented to date. Approximately half of these do not alter the predicted amino acid sequence of the encoded protein and are thus termed synonymous, whereas the other half do alter the amino acid sequence and are said to be nonsynonymous. Other SNPs introduce or change a stop codon, and yet others alter a known splice site; such SNPs are candidates to have significant functional consequences.

The significance for health of the vast majority of SNPs is unknown and is the subject of ongoing research. The fact that SNPs are common does not mean that they are without effect on health or longevity; rather, it means that any effect of a common SNP is likely to involve a relatively subtle alteration of disease susceptibility rather than a direct cause of serious illness.

Insertion-Deletion Polymorphisms

A second class of polymorphism is the result of variations caused by insertion or deletion (in/dels or simply indels) of anywhere from a single base pair up to approximately 1000 bp, although larger indels have been documented as well. Over a million indels have been described, numbering in the hundreds of thousands in any one individual’s genome. Approximately half of all indels are referred to as “simple” because they have only two alleles – that is, the presence or absence of the inserted or deleted segment.

Microsatellite Polymorphisms

Other indels, however, are multiallelic because of variable numbers of the inserted segment present in tandem at a particular location, thereby constituting what is referred to as a microsatellite. Microsatellites consist of stretches of DNA composed of units of two, three, or four nucleotides, such as TGTGTG, CAACAACAA, or AAATAAATAAAT, repeated between one and a few dozen times at a particular site in the genome. The different alleles of a microsatellite polymorphism are the result of differing numbers of repeated units contained within any one microsatellite, which is why they are sometimes also referred to as short tandem repeat (STR) polymorphisms. A microsatellite locus often has many alleles (repeat lengths) that can be rapidly evaluated by standard laboratory procedures to distinguish different individuals and to infer familial relationships; many tens of thousands of polymorphic microsatellite loci are known throughout the human genome. This makes microsatellites a particularly useful group of indels: determining the alleles at multiple microsatellite loci is currently the method of choice for the DNA fingerprinting used in identity testing.
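To make the repeat-length idea concrete, here is a minimal Python sketch (the `str_allele` helper and the sequences are hypothetical, for illustration only) that counts the number of tandem copies of a repeat unit, which is essentially what an STR genotype reports for each allele:

```python
import re

def str_allele(sequence: str, unit: str) -> int:
    """Return the largest number of tandem copies of `unit` found in `sequence`."""
    runs = re.findall(f"(?:{unit})+", sequence)
    return max((len(run) // len(unit) for run in runs), default=0)

# Two hypothetical alleles of a CAA microsatellite differing only in repeat number
allele_a = "GGTT" + "CAA" * 7 + "TTGG"
allele_b = "GGTT" + "CAA" * 11 + "TTGG"
print(str_allele(allele_a, "CAA"))  # 7
print(str_allele(allele_b, "CAA"))  # 11
```

In practice, allele lengths are read out by PCR and fragment sizing rather than from assembled sequence, but the genotype is the same repeat count.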

Mobile Element Insertion Polymorphisms

Nearly half of the human genome consists of families of repetitive elements dispersed throughout it. Although most copies of these repeats are stationary, some are mobile and contribute to human genetic diversity through retrotransposition, a process that involves transcription into RNA, reverse transcription into DNA, and insertion into another site in the genome. Most mobile element polymorphisms are found in nongenic regions of the genome; a small proportion are found within genes. At least 5000 of these polymorphic loci have an insertion frequency of greater than 10% in various populations.

Copy Number Variants

Another important type of human polymorphism is the copy number variant (CNV). CNVs are conceptually related to indels and microsatellites but consist of variation in the number of copies of larger segments of the genome, ranging in size from 1000 bp to many hundreds of kilobase pairs. Variants larger than 500 kb are found in 5% to 10% of individuals in the general population, whereas variants encompassing more than 1 Mb are found in 1% to 2%. The largest CNVs are sometimes found in regions of the genome characterized by repeated blocks of homologous sequences called segmental duplications (or segdups).

Smaller CNVs in particular may have only two alleles (i.e., the presence or absence of a segment), similar to indels in that regard. Larger CNVs tend to have multiple alleles due to the presence of different numbers of copies of a segment of DNA in tandem. In terms of genome diversity between individuals, the amount of DNA involved in CNVs vastly exceeds the amount that differs because of SNPs. The content of any two human genomes can differ by as much as 50 to 100 Mb because of copy number differences at CNV loci.

Notably, the variable segment at many CNV loci can include from one to several dozen genes, and thus CNVs are frequently implicated in traits that involve altered gene dosage. When a CNV is frequent enough to be polymorphic, it represents a background of common variation that must be understood if alterations in copy number observed in patients are to be interpreted properly. As with all DNA polymorphisms, the significance of different CNV alleles in health and disease susceptibility is the subject of intensive investigation.

Inversion Polymorphisms

A final group of polymorphisms to be discussed is inversions, which range in size from a few base pairs to several megabase pairs and can be present in either of two orientations in the genomes of different individuals. Most inversions are characterized by regions of sequence homology at the edges of the inverted segment, implicating homologous recombination in their origin. In their balanced form, inversions do not involve a gain or loss of DNA regardless of orientation, and inversion polymorphisms (with two alleles corresponding to the two orientations) can achieve substantial frequencies in the general population.

Renal Handling of Urea

July 22, 2017 | Laboratory Medicine, Nephrology, Physiology and Pathophysiology, Urology

Renal Handling of Urate

Urate, an anion that is the base form of uric acid, provides a fascinating example of the renal handling of organic anions, one that is particularly important for clinical medicine and illustrative of renal pathology. An increase in the plasma concentration of urate can cause gout and is thought to be involved in some forms of heart disease and renal disease; therefore, its removal from the blood is important. However, instead of excreting all the urate it can, the kidneys actually reabsorb most of the filtered urate. Urate is freely filterable. Almost all the filtered urate is reabsorbed early in the proximal tubule, primarily via antiporters (URAT1) that exchange urate for another organic anion. Farther along the proximal tubule, urate undergoes active tubular secretion. Then, in the straight portion, some of the urate is once again reabsorbed. Because the total rate of reabsorption is normally much greater than the rate of secretion, only a small fraction of the filtered load is excreted.

Although urate reabsorption is greater than secretion, the secretory process is controlled to maintain relative constancy of plasma urate. In other words, if plasma urate begins to increase because of increased urate production, the active proximal secretion of urate is stimulated, thereby increasing urate excretion.

Given these mechanisms of renal urate handling, the reader should be able to deduce the three ways by which altered renal function can lead to decreased urate excretion and hence increased plasma urate, as in gout: 1) decreased filtration of urate secondary to decreased GFR, 2) excessive reabsorption of urate, and 3) diminished secretion of urate.
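A quantity often used to probe these mechanisms clinically is the fractional excretion of urate (FEUA): the fraction of the filtered urate load that actually appears in the urine. Below is a minimal sketch of the standard spot-sample calculation, using hypothetical values:

```python
def fractional_excretion_urate(u_urate, p_urate, u_cr, p_cr):
    """FEUA (%) = (urine urate x plasma creatinine) / (plasma urate x urine creatinine) x 100.
    Units cancel as long as each urate/creatinine pair shares the same units."""
    return 100.0 * (u_urate * p_cr) / (p_urate * u_cr)

# Hypothetical spot values, all in mg/dL
print(fractional_excretion_urate(u_urate=40, p_urate=8.0, u_cr=100, p_cr=1.0))  # 5.0
```

A low FEUA in a hyperuricemic patient points toward excessive reabsorption or diminished secretion rather than overproduction.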

Urate and some other organic solutes, although more membrane permeable in their neutral forms, are less soluble in aqueous solution and tend to precipitate. The combination of excess plasma urate and low urinary pH, which converts urate to neutral uric acid, often leads to the formation of uric acid kidney stones.

Renal Handling of Urea

Urea is a very special substance for the kidney. It is an end product of protein metabolism, a waste product to be excreted, and also an important component of the regulation of water excretion. Urea differs from all the other organic solutes in several significant ways. 1) There are no urea-specific membrane transport mechanisms in the proximal tubule; instead, urea easily permeates the tight junctions of the proximal tubule and is reabsorbed paracellularly. 2) Tubular elements beyond the proximal tubule express urea transporters and handle urea in a complex, regulated manner.

Urea is derived from proteins, which form much of the functional and structural substance of body tissues. Proteins are also a source of metabolic fuel. Dietary protein is first digested into its constituent amino acids. These are then used as building blocks for tissue protein, converted to fat, or oxidized immediately. During fasting, the body breaks down proteins into amino acids that are used as fuel, in essence consuming itself. The metabolism of amino acids yields a nitrogen moiety (ammonium) and a carbohydrate moiety. The carbohydrate goes on to further metabolic processing, but the ammonium cannot be further oxidized and is a waste product. Ammonium per se is rather toxic to most tissues, and the liver immediately converts most ammonium to urea and a smaller, but crucial, amount to glutamine. While normal levels of urea are not toxic, the large amounts produced on a daily basis, particularly on a high-protein diet, represent a large osmotic load that must be excreted. Whether a person is well fed or fasting, urea production proceeds continuously and constitutes about half of the usual solute content of urine.

The normal level of urea in the blood is quite variable, reflecting variations in both protein intake and renal handling of urea. Over days to weeks, renal urea excretion must match hepatic production; otherwise plasma levels would increase into the pathological range, producing a condition called uremia. On a short-term basis (hours to days), urea excretion may not exactly match production, because urea excretion is also regulated for purposes other than keeping a stable plasma level.

The gist of the renal handling of urea is the following: it is freely filtered. About half is reabsorbed passively in the proximal tubule. Then an amount equal to that reabsorbed is secreted back into the loop of Henle. Finally, about half is reabsorbed a second time in the medullary collecting duct. The net result is that about half the filtered load is excreted.
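Expressed as simple arithmetic, that mass balance works out as follows (a sketch tracking a nominal filtered load of 100%):

```python
# Track a nominal filtered load of urea (100%) through the nephron segments
filtered = 100.0
after_proximal = filtered - 0.5 * filtered    # ~half reabsorbed in the proximal tubule
after_loop = after_proximal + 0.5 * filtered  # an equal amount secreted into the loop of Henle
excreted = after_loop - 0.5 * after_loop      # ~half reabsorbed in the medullary collecting duct
print(excreted)  # 50.0, i.e., about half the filtered load is excreted
```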

pH Dependence of Passive Reabsorption or Secretion

Many of the organic solutes handled by the kidney are weak acids or bases and exist in both neutral and ionized forms. The state of ionization affects both the aqueous solubility and the membrane permeability of the substance; neutral solutes are more permeable than ionized solutes. As water is reabsorbed from the tubule, any substance remaining in the tubule becomes progressively more concentrated. In addition, the luminal pH may change substantially during flow through the tubules. Therefore, both the progressive concentration of organic solutes and the change in pH strongly influence the degree to which they are reabsorbed by passive diffusion in regions of the tubule beyond the proximal tubule.

At low pH, weak acids are predominantly neutral, while at high pH they dissociate into an anion and a proton. Imagine the case in which the tubular fluid becomes acidified relative to the plasma, as it does on a typical Western diet. For a weak acid in the tubular fluid, acidification converts much of the acid to the neutral form and therefore increases its permeability, favoring diffusion out of the lumen (reabsorption). Highly acidic urine thus tends to increase passive reabsorption of weak acids (and reduce their excretion). For many weak bases, the pH dependence is just the opposite: at low pH they are protonated cations, so as the urine becomes acidified, more is converted to the impermeable charged form and trapped in the lumen. Less is reabsorbed passively, and more is excreted.
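The underlying quantitative relationship is the Henderson-Hasselbalch equation. A short sketch, assuming a pKa of roughly 5.75 for uric acid (an approximation), shows how strongly luminal pH shifts the permeable neutral fraction of a weak acid:

```python
def fraction_neutral_weak_acid(ph: float, pka: float) -> float:
    """Henderson-Hasselbalch: fraction of a weak acid present in the neutral (HA) form."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Assumed pKa of ~5.75 for uric acid (approximate)
for ph in (5.0, 6.0, 7.0):
    print(ph, round(fraction_neutral_weak_acid(ph, pka=5.75), 2))
# pH 5.0 -> ~0.85 neutral (permeable, reabsorbable); pH 7.0 -> ~0.05 neutral
```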

Evaluation of Chronic Heart Failure

July 12, 2017 | Cardiology, Critical Care, Differential Diagnosis, Laboratory Medicine

Tables 28-3 and 28-4, taken from the European Society of Cardiology heart failure guideline, recommend a routine assessment to establish the diagnosis and likely cause of heart failure. Once the diagnosis of heart failure has been made, the first step in evaluation is to determine the severity and type of cardiac dysfunction by measuring the ejection fraction through two-dimensional echocardiography and/or radionuclide ventriculography. Measurement of ejection fraction is the gold standard for differentiating between the two forms of heart failure, systolic and diastolic, and is particularly important given that the approaches to therapy for the two syndromes differ somewhat. The history and physical examination should include assessment of symptoms, functional capacity, and fluid retention.
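For reference, ejection fraction is simply the stroke volume expressed as a fraction of the end-diastolic volume; here is a minimal sketch with illustrative (hypothetical) volumes:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF = (end-diastolic volume - end-systolic volume) / end-diastolic volume."""
    return (edv_ml - esv_ml) / edv_ml

# Hypothetical volumes: EDV 120 mL, ESV 80 mL
print(round(ejection_fraction(120.0, 80.0), 2))  # 0.33, i.e., an EF of ~33%
```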

[Table 28-3, from the European Society of Cardiology heart failure guideline (image)]

Functional capacity is measured through history taking or, preferably, an exercise test. Analysis of expired air during exercise offers a precise measure of the patient’s physical limitations; however, this test is uncommonly performed outside of cardiac transplant centers. The New York Heart Association (NYHA) has classified heart failure into four functional classes that may be determined by history taking. The NYHA functional classification should not be confused with the stages of heart failure described in the American College of Cardiology/American Heart Association heart failure guideline: the NYHA classification describes functional limitation and is applicable to stage B through stage D patients, whereas the staging system describes disease progression somewhat independently of functional status.

Assessment of fluid retention through measurement of jugular venous pressure, auscultation of the lungs, and examination for peripheral edema is central to the physical examination of heart failure patients.

Given the limitations of physical signs and symptoms in evaluating heart failure clinical status, a number of noninvasive and invasive tools are under development for the assessment of heart failure. One such tool that has proven useful in determining the diagnosis and prognosis of heart failure is the measurement of plasma B-type natriuretic peptide (BNP) levels. Multiple studies demonstrate the utility of BNP measurement in the diagnosis of heart failure: the diagnostic accuracy of BNP at a cutoff of 100 pg/mL was 83.4 percent, and the negative predictive value of the assay was excellent, reaching 96 percent at levels less than 50 pg/mL.

[Table 28-4, from the European Society of Cardiology heart failure guideline (image)]

Based largely on the findings of the BNP Multinational Study, clinicians were advised that a plasma BNP concentration below 100 pg/mL made the diagnosis of congestive heart failure unlikely, while a level above 500 pg/mL made it highly likely. For BNP levels between 100 pg/mL and 500 pg/mL, the use of clinical judgement and additional testing was encouraged.
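Expressed as a decision rule, those cutoffs look like the sketch below, which simply encodes the thresholds quoted above and is no substitute for clinical judgement:

```python
def interpret_bnp(bnp_pg_ml: float) -> str:
    """Apply the BNP Multinational Study cutoffs quoted in the text."""
    if bnp_pg_ml < 100:
        return "congestive heart failure unlikely"
    if bnp_pg_ml > 500:
        return "congestive heart failure highly likely"
    return "indeterminate: apply clinical judgement and additional testing"

print(interpret_bnp(64))   # congestive heart failure unlikely
print(interpret_bnp(250))  # indeterminate: apply clinical judgement and additional testing
```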

Additionally, plasma BNP is useful in predicting prognosis in heart failure patients. However, serial measurement of plasma BNP as a guide to heart failure management has not yet been proven useful in the management of acute or chronic heart failure.

Some Critical Points to Know When Using Warfarin

June 30, 2017 | Anticoagulant Therapy, Hematology, Laboratory Medicine

PT/INR and Anticoagulation Status

For the vast majority of patients, monitoring is done using the prothrombin time with international normalized ratio (PT/INR), which reflects the degree of anticoagulation due to depletion of vitamin K-dependent coagulation factors. However, attention must be paid to the fact that the PT/INR in a patient on warfarin may not reflect the total anticoagulation status of the patient in certain settings:

  • First few days of warfarin initiation

The initial prolongation of the PT/INR during the first one to three days of warfarin initiation does not reflect full anticoagulation, because only the factor with the shortest half-life is initially depleted; other functional vitamin K-dependent factors with longer half-lives (e.g., prothrombin) continue to circulate. The full anticoagulation effect of a VKA generally occurs within approximately one week after the initiation of therapy and results in equilibrium levels of functional factors II, IX, and X at approximately 10 to 35 percent of normal.
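The time course can be sketched with first-order decay. Assuming, for illustration, complete blockade of new factor synthesis and approximate half-lives of ~6 hours for factor VII and ~60 hours for prothrombin (both values assumed, and both simplifications of the real pharmacology):

```python
def residual_activity(hours: float, half_life_h: float) -> float:
    """Fraction of baseline factor activity remaining, assuming first-order decay
    and complete blockade of new synthesis (a deliberate simplification)."""
    return 0.5 ** (hours / half_life_h)

# Assumed half-lives: factor VII ~6 h, prothrombin (factor II) ~60 h
for day in (1, 3, 7):
    hours = 24 * day
    print(day, round(residual_activity(hours, 6.0), 3), round(residual_activity(hours, 60.0), 2))
# Factor VII is nearly gone by day 1 (prolonging the PT/INR early), while
# prothrombin persists for days, which is why the full effect takes about a week.
```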

  • Liver disease

Individuals with liver disease frequently have abnormalities in routine laboratory tests of coagulation, including prolongation of the PT, INR, and aPTT, along with mild thrombocytopenia and elevated D-dimer, especially when liver synthetic function is more significantly impaired and portal pressures are increased. However, these tests are very poor at predicting the risk of bleeding in individuals with liver disease because they reflect only changes in procoagulant factors.

  • Baseline prolonged PT/INR

Some patients with the antiphospholipid antibody syndrome (APS) have marked fluctuations in the INR that make monitoring of the degree of anticoagulation difficult.

Time in the Therapeutic Range (TTR)

For patients who are stably anticoagulated with a VKA, the percentage of time in the therapeutic range (TTR) is often used as a measure of the quality of anticoagulation control. TTR can be calculated using a variety of methods. The TTR reported depends on the method of calculation as well as the INR range considered “therapeutic.” A TTR of 65 to 70 percent is considered to be a reasonable and achievable degree of INR control in most settings.
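One widely used approach is the Rosendaal linear interpolation method, which assumes the INR changes linearly between consecutive measurements and counts the interpolated person-days in range. A minimal sketch at day-level granularity, with hypothetical monitoring data:

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range by Rosendaal linear interpolation."""
    in_range, total = 0, 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        for step in range(span):  # sample each interpolated day in the interval
            inr = i0 + (i1 - i0) * step / span
            if low <= inr <= high:
                in_range += 1
    return 100.0 * in_range / total

# Hypothetical clinic visits on days 0, 14, and 28 with the INRs shown
print(round(rosendaal_ttr([0, 14, 28], [1.8, 2.6, 3.4]), 1))  # 64.3
```

Differences in sampling granularity and in the INR range considered therapeutic are exactly why reported TTRs differ between methods, as noted above.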

Factors Affecting the Dose-Response Relationship Between Warfarin and INR

  • Nutritional status, including vitamin K intake
  • Medication adherence
  • Genetic variation
  • Drug interactions
  • Smoking and alcohol use
  • Renal, hepatic, and cardiac function
  • Hypermetabolic states

In addition, female sex, increased age, and previous INR instability or hemorrhage have been associated with a greater sensitivity to warfarin and/or an increased risk of bleeding.

Dietary Factors

Vitamin K intake – Individuals anticoagulated with warfarin generally are sensitive to fluctuations in vitamin K intake, and adequate INR control requires close attention to the amount of vitamin K ingested from dietary and other sources. The goal of monitoring vitamin K intake is to maintain a moderate, constant level of intake rather than to eliminate vitamin K from the diet. Specific guidance from anticoagulation clinics may vary, but a general principle is that maintaining a consistent level of vitamin K intake should not interfere with a nutritious diet. Patients taking warfarin may wish to become familiar with possible sources of vitamin K in order to avoid inconsistent intake.

Of note, intestinal microflora produce vitamin K2, and one of the ways antibiotics contribute to variability in the prothrombin time/INR is by reducing intestinal vitamin K synthesis.

Cranberry juice and grapefruit juice have very low vitamin K content but have been reported to affect VKA anticoagulation in some studies, and some anticoagulation clinics advise patients to limit their intake to one or two servings (or fewer) per day.

Medication Adherence

Medication adherence for vitamin K antagonists can be challenging due to the need for frequent monitoring and dose adjustments, dietary restrictions, medication interactions, and, in some cases, use of different medication doses on different days to achieve the optimal weekly intake. Reducing the number of medications prescribed may be helpful, if this can be done safely.

Drug Interactions

A large number of drugs interact with vitamin K antagonists by a variety of mechanisms, and additional interacting drugs continue to be introduced. Determining clinically important drug interactions is challenging because the evidence substantiating claims for some drugs is very limited; in other cases, the evidence is strong but the magnitude of effect is small. Patients should be advised to discuss any new medication or over-the-counter supplement with the clinician managing their anticoagulation, and clinicians are advised to confirm whether a clinically important drug-drug interaction has been reported when introducing a new medication in a patient anticoagulated with a VKA.

Smoking and Excess Alcohol

The effect of chronic cigarette smoking on warfarin metabolism was evaluated in a systematic review that included 13 studies involving over 3000 patients. A meta-analysis of the studies that evaluated warfarin dose requirement found that smoking increased the dose requirement by 12 percent, corresponding to an additional 2.26 mg of warfarin per week. However, two studies that evaluated the effect of chronic smoking on INR control found equivalent control in smokers and nonsmokers.

The mechanism by which cigarette smoking interacts with warfarin metabolism is enhanced drug clearance, through induction of hepatic cytochrome P-450 activity by the polycyclic aromatic hydrocarbons in cigarette smoke. Nicotine itself is not thought to alter warfarin metabolism.

The interaction between excess alcohol use and warfarin anticoagulation was evaluated in a case-control study that compared alcohol use in 265 individuals receiving warfarin who had major bleeding with 305 controls from the same cohort receiving warfarin who did not have major bleeding. The risk of major bleeding was increased with moderate to severe alcohol use and with heavy episodic drinking.

The mechanisms by which alcohol use interacts with warfarin anticoagulation are many, and the contribution of the various factors depends greatly on the amount of intake and the severity of any associated liver disease. Excess alcohol consumption may interfere with warfarin metabolism. Severe liver disease may also be associated with coagulopathy, thrombocytopenia, and/or gastrointestinal varices, all of which increase bleeding risk independent of any effect on warfarin metabolism.

Comorbidities

The major comorbidities that affect anticoagulation control are hepatic disease, renal dysfunction, and heart failure. In addition, other comorbidities such as metastatic cancer, diabetes, or uncontrolled hyperthyroidism may also play a role.

The liver is the predominant site of warfarin metabolism. It is also the source of the majority of coagulation factors. Thus, liver disease can affect warfarin dosage, INR control, and coagulation in general. Importantly, individuals with severe liver disease are not “auto-anticoagulated,” because they often have a combination of abnormalities that both impair hemostasis and increase thrombotic risk.

Warfarin undergoes partial excretion in the kidney. Patients with kidney disease can receive warfarin, and management is generally similar to that in the population without renal impairment; however, dose requirements may be lower.

Heart failure has been shown to interfere with INR stabilization.

Acute illnesses, especially infections and gastrointestinal illnesses, may alter anticoagulation through effects on vitamin K intake, VKA metabolism, and medication interactions.

Genetic Factors

Genetic polymorphisms have been implicated in altered sensitivity to warfarin and other vitamin K antagonists.

The Differential Diagnosis of Abnormal Serum Uric Acid Concentration

June 28, 2017 | Differential Diagnosis, Laboratory Medicine

Reference range: 4.0-8.5 mg/dL or 237-506 µmol/L for males >17 years old; 2.7-7.3 mg/dL or 161-434 µmol/L for females >17 years old
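As a consistency check on those numbers: uric acid has a molar mass of about 168 g/mol, so 1 mg/dL corresponds to roughly 59.5 µmol/L.

```python
URIC_ACID_UMOL_PER_MG_DL = 59.48  # based on a molar mass of ~168 g/mol

def mg_dl_to_umol_l(mg_dl: float) -> float:
    """Convert a serum uric acid concentration from mg/dL to umol/L."""
    return mg_dl * URIC_ACID_UMOL_PER_MG_DL

print(round(mg_dl_to_umol_l(4.0)))  # ~238, matching the quoted 237 after rounding
print(round(mg_dl_to_umol_l(8.5)))  # ~506
```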

Uric acid is the metabolic end-product of the purine bases of DNA. In humans, uric acid is not metabolized further and is eliminated unchanged by renal excretion (the net result of filtration, secretion, and reabsorption). It is completely filtered at the renal glomerulus and is almost completely reabsorbed. Most excreted uric acid (80% to 86%) is the result of active tubular secretion at the distal end of the proximal convoluted tubule.

As urine becomes more alkaline, more uric acid is excreted because the percentage of ionized uric acid molecules increases. Conversely, reabsorption of uric acid within the proximal tubule is enhanced and uric acid excretion is suppressed as urine becomes more acidic.

When serum uric acid exceeds the upper limit of the reference range, the biochemical diagnosis of hyperuricemia can be made. Hyperuricemia can result from an overproduction of purines and/or reduced renal clearance of uric acid. When specific factors affecting the normal disposition of uric acid cannot be identified, the problem is diagnosed as primary hyperuricemia. When specific factors can be identified, the problem is referred to as secondary hyperuricemia.

As the serum urate concentration increases above the upper limit of the reference range, the risk of developing clinical signs and symptoms of gouty arthritis, renal stones, uric acid nephropathy, and subcutaneous tophaceous deposits increases. However, many hyperuricemic patients are asymptomatic. If a patient is hyperuricemic, it is important to determine if there are potential causes of false laboratory test elevation and contributing extrinsic factors.

Exogenous Causes

Medications can raise serum uric acid via 1) decreased renal excretion resulting from drug-induced renal dysfunction; 2) decreased renal excretion resulting from drug competition with uric acid for secretion within the kidney tubules; and 3) rapid destruction of large numbers of cells by antineoplastic therapy.

Diet. High-protein weight-reduction programs can greatly increase the amount of ingested purines and subsequent uric acid production.

Endogenous Causes

Endogenous causes of hyperuricemia include diseases, abnormal physiological conditions that may or may not be disease related, and genetic abnormalities. Diseases include 1) renal diseases (e.g., renal failure); 2) disorders associated with increased destruction of nucleoproteins; and 3) endocrine abnormalities (e.g., hypothyroidism, hypoparathyroidism, pseudohypoparathyroidism, nephrogenic diabetes insipidus, and Addison disease).

Predisposing abnormal physiological conditions include shock, hypoxia, lactic acidosis, diabetic ketoacidosis, alcoholic ketosis, and strenuous muscular exercise.

Genetic abnormalities include Lesch-Nyhan syndrome, gout with partial absence of the enzyme hypoxanthine-guanine phosphoribosyltransferase, increased phosphoribosyl pyrophosphate (P-ribose-PP) synthetase activity, and glycogen storage disease type I.