Critical Care

Acute Potassium Disorders

July 15, 2017 | Cardiology, Clinical Skills, Critical Care, Differential Diagnosis, EKG/ECG

Disorders of potassium homeostasis are common in hospitalized patients and may be associated with severe adverse clinical outcomes, including death. Prevention and proper treatment of hyper- and hypokalemia depend on an understanding of the underlying physiology.

The total body potassium content of a 70-kg adult is about 3500 mmol (136.5 g), of which only 2% (about 70 mmol, or 2.73 g) is extracellular. It is not surprising, then, that the extracellular potassium concentration is tightly regulated. In fact, two separate and cooperative systems participate in potassium homeostasis. One system regulates external potassium balance: the matching of total body potassium elimination to potassium intake. The other system regulates internal potassium balance: the distribution of potassium between the intracellular and extracellular fluid compartments. This latter system provides a short-term defense against changes in the plasma potassium concentration that might otherwise result from total body potassium losses or gains.
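As a quick sanity check of the figures above, the mmol-to-gram conversions follow from the molar mass of potassium (about 39.1 g/mol, an assumption supplied here, not stated in the text):

```python
# Back-of-envelope check of the body potassium figures quoted above.
# Assumption: molar mass of potassium ~39.1 g/mol (i.e., 39.1 mg/mmol).
K_MG_PER_MMOL = 39.1

def mmol_to_grams(mmol):
    """Convert a potassium amount in mmol to grams."""
    return mmol * K_MG_PER_MMOL / 1000.0

total_body_k_mmol = 3500
extracellular_fraction = 0.02

total_g = mmol_to_grams(total_body_k_mmol)               # ~136.9 g
ecf_mmol = total_body_k_mmol * extracellular_fraction    # ~70 mmol
ecf_g = mmol_to_grams(ecf_mmol)                          # ~2.74 g
print(f"Total: {total_g:.1f} g; ECF: {ecf_mmol:.0f} mmol ({ecf_g:.2f} g)")
```

The small differences from the quoted 136.5 g and 2.73 g reflect rounding in the source, not a discrepancy in the physiology.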

Disorders of Potassium Homeostasis

Disorders of potassium homeostasis may be conveniently divided according to the duration of the disturbance: acute (<48 hours’ duration) or chronic.

Acute Hyperkalemia

Excessive potassium intake. Given an acute potassium load, a normal individual will excrete about 50% in the urine and transport about 90% of the remainder into cells over 4 to 6 hours. This adaptive mechanism can be overwhelmed: if too much potassium is taken in too quickly, significant hyperkalemia will result. Such events are almost always iatrogenic. One’s ability to tolerate a potassium load declines with disordered internal balance and impaired renal potassium excretory capacity. In such circumstances, an otherwise tolerable increase in potassium intake may cause clinically significant hyperkalemia: doses of oral potassium supplements as small as 30 to 45 mmol have resulted in severe hyperkalemia in patients with impaired external or internal potassium homeostasis.

KCl, used as a supplement, is the drug most commonly implicated in acute hyperkalemia. Banked blood represents a trivial potassium load under most circumstances, because a unit of fresh banked blood, either whole or packed cells, contains only 7 mmol (273 mg) of potassium. Thus, severe hyperkalemia would result only from massive transfusion of compatible blood. However, the potassium concentration in banked blood does increase substantially as the blood ages.

Patients undergoing open heart surgery are exposed to cardioplegic solutions containing KCl, typically at about 16 mmol/L, which may lead to clinically significant hyperkalemia in the postoperative period, especially in patients with diabetes mellitus with or without renal failure.

Abnormal potassium distribution. Acute hyperkalemia may result from sudden redistribution of intracellular potassium to the extracellular space. If only 2% of intracellular potassium were to leak unopposed from cells, serum potassium level would immediately double. Fortunately, such dramatic circumstances are rarely encountered. Nevertheless, smaller degrees of potassium redistribution commonly result in clinically significant hyperkalemia.
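The doubling claim can be checked with the figures given earlier (3500 mmol total body potassium, 2% extracellular), assuming the leak goes unopposed into the ECF:

```python
# Illustration of the claim that an unopposed leak of 2% of intracellular
# potassium would roughly double extracellular potassium.
# Figures taken from the text above; the "unopposed" leak is an idealization.
total_k = 3500.0            # mmol, total body potassium
ecf_k = 0.02 * total_k      # ~70 mmol extracellular
icf_k = total_k - ecf_k     # ~3430 mmol intracellular

leak = 0.02 * icf_k         # 2% of intracellular stores, ~68.6 mmol
new_ecf_k = ecf_k + leak

print(f"ECF potassium rises from {ecf_k:.0f} to {new_ecf_k:.0f} mmol "
      f"({new_ecf_k / ecf_k:.2f}x)")
```

In reality cellular uptake and renal excretion oppose any such leak, which is why smaller redistributions are the clinically common scenario.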

Among the most impressive syndromes associated with acute hyperkalemia are those involving rapid cell lysis. The tumor lysis syndrome results from treatment of chemosensitive bulky tumors, with release of intracellular contents, including potassium, into the ECF. Extreme hyperkalemia, even causing sudden death, has featured prominently in some series of patients. Most such patients were in renal failure from acute uric acid nephropathy, impairing their ability to excrete the potassium load. Rhabdomyolysis, either traumatic or nontraumatic, may result in sudden, massive influx of potassium to the extracellular space. Other circumstances that may result in redistributive hyperkalemia include severe extensive burns, hemolytic transfusion reactions, and mesenteric ischemia or infarction.

Pharmacologic agents. Two drugs may rarely cause acute hyperkalemia by redistribution: digitalis glycosides and succinylcholine. Massive digitalis overdose has been associated with extreme hyperkalemia. Succinylcholine depolarizes the motor end plate and, in normal individuals, causes a trivial amount of potassium leak from muscle, increasing the serum potassium level by about 0.5 mmol/L. In patients with neuromuscular disorders, muscle damage, or prolonged immobilization, however, muscle depolarization may be more widespread, causing severe hyperkalemia. Prolonged use of nondepolarizing neuromuscular blockers in critically ill patients may predispose to succinylcholine-induced hyperkalemia.

Hyperkalemic periodic paralysis. This rare syndrome of episodic hyperkalemia and paralysis is caused by a mutation of the skeletal muscle sodium channel, inherited in an autosomal dominant pattern. Attacks may be precipitated by exercise, fasting, exposure to cold, and potassium administration, and prevented by frequent carbohydrate snacks. Attacks are usually brief and treatment consists of carbohydrate ingestion. Severe attacks may require intravenous glucose infusion.

Acute renal failure. Hyperkalemia accompanies acute renal failure in 30% to 50% of cases. It is seen most commonly in oliguric renal failure. Contributing factors include tissue destruction and increased catabolism.

Pseudohyperkalemia. This refers to a measured potassium level that is higher than the level actually circulating in the patient’s blood. It has several possible causes. First, potassium may efflux from blood cells in the test tube after phlebotomy. This may be seen in a serum specimen in cases of thrombocytosis or leukocytosis, when clotting causes cell lysis in vitro. These days, many clinical laboratories measure electrolytes in plasma (unclotted) specimens. Even then, extreme leukocytosis may cause pseudohyperkalemia if the specimen is chilled for a long time before the plasma is separated, leading to passive potassium leak from cells. Hemolysis during specimen collection will falsely raise the serum or plasma potassium concentration by liberating intraerythrocytic potassium. Second, if the patient’s arm is exercised by fist clenching with a tourniquet in place before the specimen is drawn, the sampled potassium concentration will rise significantly as a result of local muscle release of intracellular potassium.

Acute Hypokalemia

Treatment of diabetic ketoacidosis. It is well recognized that patients presenting in DKA are severely depleted in total body potassium as a result of glucose-driven osmotic diuresis, poor nutrition, and vomiting during the development of DKA. Paradoxically, most patients in DKA have a normal serum potassium level upon admission. Insulin deficiency and hyperglycemia appear to account for the preservation of a normal [K+] despite severe total body potassium depletion. Once therapy for DKA is instituted, however, [K+] typically plummets as potassium is rapidly taken up by cells. Potassium replacement at rates up to 120 mmol (4.68 g) per hour has been reported, with total potassium supplementation of 600 to 800 mmol (23.5 to 31.2 g) within the first 24 hours of treatment. Hypokalemia in this setting may lead to respiratory arrest.

Refeeding. A situation analogous to DKA arises during aggressive refeeding after prolonged starvation or with aggressive “hyperalimentation” of chronically ill patients. The glucose-stimulated hyperinsulinemia and tissue anabolism shift potassium into cells, rapidly depleting extracellular potassium.

Pharmacologic agents. Specific beta2-adrenergic receptor agonists may cause electrophysiologically significant hypokalemia, especially when given to patients who are potassium depleted from the use of diuretic drugs. Epinephrine, given intravenously at a dose about 5% of that recommended for cardiac resuscitation, causes a fall in [K+] of about 1 mmol/L. A rare cause of severe hypokalemia is poisoning with soluble barium salts such as the chloride, carbonate, hydroxide, and sulfide. Soluble barium salts are used in pesticides and some depilatories, which may be ingested accidentally or intentionally. Thiopentone, a barbiturate used to induce coma for refractory intracranial hypertension, is associated with redistributive hypokalemia in the majority of treated patients within 12 hours of initiating therapy.

Hypokalemic periodic paralysis. Three forms of this rare syndrome have been described: familial, sporadic, and thyrotoxic. All have in common attacks of muscle weakness accompanied by acute hypokalemia caused by cellular potassium uptake.

Pseudohypokalemia. Severe leukocytosis may cause spuriously low plasma potassium concentrations if blood cells are left in contact with the plasma for a long time at room temperature or higher. This phenomenon results from ongoing cell metabolism in vitro with glucose and potassium uptake. Unexpected hypokalemia and hypoglycemia in the setting of leukocytosis should alert the clinician to this phenomenon.


Factors governing potassium exchange between the ECF and ICF: insulin, epinephrine, and [H+].

Factors governing renal potassium secretion: plasma [K+], dietary potassium intake, angiotensin II, aldosterone, and the amount of tubular sodium delivered to the principal cells of the distal nephron.

EKG Changes of Potassium Disturbances



Hyperkalemia produces a progressive evolution of changes in the EKG that can culminate in ventricular fibrillation and death. The presence of electrocardiographic changes is a better measure of clinically significant potassium toxicity than the serum potassium level. As the potassium level begins to rise, the T waves across the entire 12-lead EKG begin to peak. This effect can easily be confused with the peaked T waves of an acute myocardial infarction. One difference is that the changes in an infarction are confined to the leads overlying the area of the infarct, whereas in hyperkalemia the changes are diffuse. With a further increase in the serum potassium, the PR interval becomes prolonged, and the P wave gradually flattens and then disappears. Ultimately, the QRS complex widens until it merges with the T wave, forming a sine wave pattern. Ventricular fibrillation may eventually develop.

It is important to note that whereas these changes frequently do occur in the order described as the serum potassium rises, they do not always do so. Progression to ventricular fibrillation can occur with devastating suddenness. Any change in the EKG due to hyperkalemia mandates immediate clinical attention.


With hypokalemia, the EKG may again be a better measure of serious toxicity than the serum potassium level. Three changes can be seen, occurring in no particular order: ST-segment depression, flattening of the T wave with prolongation of the QT interval, and appearance of a U wave. The term U wave is given to a wave appearing after the T wave in the cardiac cycle. It usually has the same axis as the T wave and is often best seen in the anterior leads. Its precise physiologic meaning is not fully understood. Although U waves are the most characteristic feature of hypokalemia, they are not in and of themselves diagnostic. Other conditions can produce prominent U waves, and U waves can sometimes be seen in patients with normal hearts and normal serum potassium levels. Rarely, severe hypokalemia can cause ST-segment elevation. Whenever you see ST-segment elevation or depression on an EKG, your first instinct should always be to suspect some form of cardiac ischemia, but keep hypokalemia in your differential diagnosis.

Evaluation of Chronic Heart Failure

July 12, 2017 | Cardiology, Critical Care, Differential Diagnosis, Laboratory Medicine

Tables 28-3 and 28-4, taken from the European Society of Cardiology heart failure guideline, recommend a routine assessment to establish the diagnosis and likely cause of heart failure. Once the diagnosis of heart failure has been made, the first step in evaluation is to determine the severity and type of cardiac dysfunction by measuring the ejection fraction with two-dimensional echocardiography and/or radionuclide ventriculography. Measurement of ejection fraction is the gold standard for differentiating between the two forms of heart failure, systolic and diastolic, and is particularly important given that the approaches to therapy for the two syndromes differ somewhat. The history and physical examination should include assessment of symptoms, functional capacity, and fluid retention.


Functional capacity is measured through history taking or preferably an exercise test. Analysis of expired air during exercise offers a precise measure of the patient’s physical limitations. However, this test is uncommonly performed outside of cardiac transplant centers. The NYHA has classified heart failure into four functional classes that may be determined by history taking. The NYHA functional classification should not be confused with the stages of heart failure described in the American College of Cardiology/American Heart Association heart failure guideline. The NYHA classification describes functional limitation and is applicable to stage B through stage D patients, whereas the staging system describes disease progression somewhat independently of functional status.

Assessment of fluid retention through measurement of jugular venous pressure, auscultation of the lungs, and examination for peripheral edema is central to the physical examination of heart failure patients.

Given the limitations of physical signs and symptoms in evaluating heart failure clinical status, a number of noninvasive and invasive tools are under development for the assessment of heart failure. One such tool that has proven useful in determining the diagnosis and prognosis of heart failure is the measurement of plasma B-type natriuretic peptide (BNP) levels. Multiple studies demonstrate the utility of BNP measurement in the diagnosis of heart failure. The diagnostic accuracy of BNP at a cutoff of 100 pg/mL was 83.4%. The negative predictive value of BNP was excellent: at levels less than 50 pg/mL, the negative predictive value of the assay was 96%.


Based largely on the findings of the BNP Multinational Study, clinicians were advised that a plasma BNP concentration below 100 pg/mL made the diagnosis of congestive heart failure unlikely, while a level above 500 pg/mL made it highly likely. For BNP levels between 100 pg/mL and 500 pg/mL, the use of clinical judgement and additional testing were encouraged.
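These cutoffs amount to a simple triage rule, sketched below; the function name is illustrative, and the thresholds are the ones quoted above from the BNP Multinational Study:

```python
# Minimal sketch of the BNP decision thresholds described in the text.
# Thresholds (100 and 500 pg/mL) come from the passage above; the
# function name and return strings are illustrative, not a standard API.
def interpret_bnp(bnp_pg_ml: float) -> str:
    """Classify a plasma BNP level for suspected congestive heart failure."""
    if bnp_pg_ml < 100:
        return "CHF unlikely"
    elif bnp_pg_ml > 500:
        return "CHF highly likely"
    else:
        return "indeterminate: use clinical judgement and further testing"

print(interpret_bnp(40))    # CHF unlikely
print(interpret_bnp(750))   # CHF highly likely
```

Values in the 100 to 500 pg/mL gray zone deliberately return no diagnosis, mirroring the guidance that clinical judgement and additional testing are needed there.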

Additionally, plasma BNP is useful in predicting prognosis in heart failure patients. However, serial measurement of plasma BNP as a guide to heart failure management has not yet been proven useful in the management of acute or chronic heart failure.

Evaluation of Renal Function

May 3, 2017 | Clinical Skills, Critical Care, Nephrology

Assessment of kidney function using both qualitative and quantitative methods is an important part of the evaluation of patients and an essential characterization of individuals who participate in clinical research investigations. Estimation of creatinine clearance has been considered the clinical standard for assessment of kidney function for nearly 50 years, and continues to be used as the primary method of stratifying kidney function in drug pharmacokinetic studies submitted to the United States Food and Drug Administration (FDA). New equations to estimate glomerular filtration rate (GFR) are now used in many clinical settings to identify patients with CKD, and in large epidemiology studies to evaluate risks of mortality and progression to stage 5 CKD, that is, ESKD. Other tests, such as urinalysis, radiographic procedures, and biopsy, are also valuable tools in the assessment of kidney disease, and these qualitative assessments are useful for determining the pathology and etiology of kidney disease.

Quantitative indices of GFR or CLcr are considered the most useful diagnostic tools for identifying the presence and monitoring the progression of CKD. These measures can also be used to quantify changes in function that may occur as a result of disease progression, therapeutic intervention, or a toxic insult. It is important to note that the term kidney function includes the combined processes of glomerular filtration, tubular secretion, and reabsorption, as well as endocrine and metabolic functions. This thread critically evaluates the various methods that can be used for the quantitative assessment of kidney function in individuals with normal kidney function, as well as in those with CKD and acute kidney injury (AKI). Where appropriate, discussion of the qualitative assessment of kidney function is also presented, including the role of imaging procedures and invasive tests such as kidney biopsy.

Excretory Function

The kidney is largely responsible for the maintenance of body homeostasis via its role in regulating urinary excretion of water, electrolytes, endogenous substances such as urea, medications, and environmental toxins. It accomplishes this through the combined processes of glomerular filtration, tubular secretion, and reabsorption.

The “intact nephron hypothesis” described by Bricker more than 40 years ago proposes that the “kidney function” of patients with renal disease is the net result of a reduced number of appropriately functioning nephrons. As the number of nephrons is reduced from the initial complement of 2 million, those that are unaffected compensate; that is, they hyperfunction. The cornerstone of this hypothesis is that glomerulotubular balance is maintained, such that those nephrons capable of functioning will continue to perform in an appropriate fashion. Extensive studies have indeed shown that single-nephron GFR increases in the unaffected nephrons; thus, the whole-kidney GFR, which represents the sum of the single-nephron GFRs of the remaining functional nephrons, may remain close to normal until there is extensive injury. Based on this, one would presume that a measure of one component of nephron function could be used as an estimate of all renal functions. This, indeed, has been and remains our clinical approach: we estimate GFR and assume secretion and reabsorption remain proportionally intact.
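As a toy illustration of this hypothesis, whole-kidney GFR can be modeled as the sum of single-nephron GFRs. The baseline single-nephron GFR of 60 nL/min and the compensation factor below are illustrative assumptions chosen to match the text's figures, not measured values:

```python
# Toy model of the intact nephron hypothesis: as nephrons are lost,
# surviving nephrons raise their single-nephron GFR, so whole-kidney
# GFR (the sum) falls less than nephron number alone would suggest.
# Baseline SN-GFR (60 nL/min) and the 1.8x compensation are illustrative.
def whole_kidney_gfr(n_nephrons, sn_gfr_nl_min, compensation=1.0):
    """Whole-kidney GFR in mL/min as the sum of single-nephron GFRs (nL/min)."""
    return n_nephrons * sn_gfr_nl_min * compensation / 1e6

baseline = whole_kidney_gfr(2_000_000, 60.0)                       # ~120 mL/min
after_loss = whole_kidney_gfr(1_000_000, 60.0, compensation=1.8)   # ~108 mL/min
print(f"{baseline:.0f} -> {after_loss:.0f} mL/min after losing half the nephrons")
```

Losing half the nephrons here drops GFR only 10%, which is why whole-kidney GFR can stay near normal until injury is extensive.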



GFR is dependent on numerous factors, one of which is protein load. Bosch suggested that a comprehensive evaluation of kidney function should include the measurement of “filtration capacity.” Recently, the concept of renal functional reserve (RFR) has been defined as the capacity of the kidney to increase GFR in response to physiologic or pathologic conditions. This is similar in concept to a cardiac stress test: the patient may have no ischemic symptoms, such as angina, while resting, but they may become quite evident with exercise. Subjects with normal renal function administered an oral or intravenous (IV) protein load prior to measurement of GFR have been noted to increase their GFR by as much as 50%. As renal function declines, the kidneys usually compensate by increasing single-nephron GFR. The RFR will be reduced in individuals whose kidneys are already functioning at higher-than-normal levels because of preexisting kidney injury or subclinical loss of kidney mass. Thus, RFR may be a complementary, insightful index of renal function for many individuals with as yet unidentified CKD.

Quantification of renal (excretory) function is not only an important component of a diagnostic evaluation, but it also serves as an important parameter for monitoring therapy directed at the etiology of the diminished function itself, thereby allowing for objective measurement of the success of treatment. Measurement of renal function also serves as a useful indicator of the ability of the kidneys to eliminate drugs from the body. Furthermore, alterations of drug distribution and metabolism have been associated with the degree of renal function. Although several indices have been used for the quantification of GFR in the research setting, estimation of CLcr and GFR are the primary approaches used in the clinical arena.


Secretion is an active process that predominantly takes place in the proximal tubule and facilitates the elimination of compounds from the renal circulation into the tubular lumen. Several highly efficient transport pathways exist for a wide range of endogenous and exogenous substances, resulting in renal clearances of these actively secreted entities that often greatly exceed GFR and in some cases approximate renal blood flow. These transporters are typically found among the solute carrier (SLC) and ATP-binding cassette (ABC) superfamilies. Overall, the net process of tubular secretion of drugs is likely the result of multiple secretory pathways acting simultaneously.


Reabsorption of water and solutes occurs throughout the nephron, whereas the reabsorption of most medications occurs predominantly along the distal tubule and collecting duct. Urine flow rate and the physicochemical characteristics of the molecule influence these processes: highly ionized compounds are not reabsorbed unless pH changes within the urine increase the un-ionized fraction, thereby facilitating reabsorption.

Endocrine Function

The kidney synthesizes and secretes many hormones involved in maintaining fluid and electrolyte homeostasis. Secretion of renin by the cells of the juxtaglomerular apparatus and production and metabolism of prostaglandins and kinins are among the kidney’s endocrine functions. In addition, in response to decreased oxygen tension in the blood, which is sensed by the kidney, erythropoietin is produced and secreted by peritubular fibroblasts. Because these functions are related to renal mass, decreased endocrine activity is associated with the loss of viable kidney cells.

Metabolic Function

The kidney performs a wide variety of metabolic functions, including the activation of vitamin D, gluconeogenesis, and the metabolism of endogenous compounds such as insulin, steroids, and xenobiotics. It is common for patients with diabetes and stage 4 to 5 CKD to have reduced requirements for exogenous insulin, and to require supplemental therapy with activated vitamin D3 or other vitamin D analogs to avert the bone loss and pain associated with CKD-associated metabolic bone disease. Cytochrome P450, N-acetyltransferase, glutathione transferase, renal peptidases, and other enzymes responsible for the degradation and activation of selected endogenous and exogenous substances have been identified in the kidney. The CYP enzymes in the kidneys are as active as those in the liver when corrected for organ mass. In vitro and in vivo studies have shown that CYP-mediated metabolism is impaired in the presence of renal failure or uremia. In clinical studies using CYP3A probes in ESRD patients receiving hemodialysis, hepatic CYP3A activity was reported to be reduced by 28% from values observed in age-matched controls; partial correction was noted following the hemodialysis procedure.

Measurement of Kidney Function

The gold standard quantitative index of kidney function is a measured GFR (mGFR). A variety of methods may be used to measure and estimate kidney function in the acute care and ambulatory settings. Measurement of GFR is important for early recognition and monitoring of patients with CKD and as a guide for drug-dose adjustment.


It is important to recognize conditions that may alter renal function independent of underlying renal pathology. For example, protein intake, such as oral protein loading or an infusion of amino acid solution, may increase GFR. As a result, inter- and intrasubject variability must be considered when GFR is used as a longitudinal marker of renal function. Dietary protein intake has been demonstrated to correlate with GFR in healthy subjects. The increased GFR following a protein load is the result of renal vasodilation accompanied by an increased renal plasma flow. The exact mechanism of the renal response to protein is unknown, but may be related to extrarenal factors such as glucagon, prostaglandins, and angiotensin II, or to intrarenal mechanisms, such as alterations in tubular transport and tubuloglomerular feedback. Despite the evidence of a “renal reserve,” standardized evaluation techniques have not been developed. Therefore, assessment of a mGFR must consider the dietary protein status of the patient at the time of the study.

Measurement of Glomerular Filtration Rate

  • Measurement of the GFR is most accurate when performed following the exogenous administration of iohexol, iothalamate, or radioisotopes such as technetium-99m diethylenetriamine pentaacetic acid (99mTc-DTPA).

A mGFR remains the single best index of kidney function. As renal mass declines in the presence of age-related loss of nephrons or disease states such as hypertension or diabetes, there is a progressive decline in GFR. The rate of decline in GFR can be used to predict the time to onset of stage 5 CKD, as well as the risk of complications of CKD. Accurate measurement of GFR in clinical practice is a critical variable for individualization of the dosage regimens of renally excreted medications so that one can maximize their therapeutic efficacy and avoid potential toxicity.

The GFR is expressed as the volume of plasma filtered across the glomerulus per unit of time, based on total renal blood flow and capillary hemodynamics. The normal values for GFR are 127 ± 20 mL/min/1.73 m2 in healthy men and 118 ± 20 mL/min/1.73 m2 in healthy women. These measured values closely approximate what one would predict if normal renal blood flow were approximately 1.0 L/min/1.73 m2, plasma volume were 60% of blood volume, and the filtration fraction across the glomerulus were 20%. Under those assumptions, the normal GFR would be expected to be approximately 120 mL/min/1.73 m2.
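The back-of-envelope estimate can be reproduced directly from the three figures given in the text:

```python
# Reproducing the estimate in the text: renal blood flow ~1.0 L/min/1.73 m2,
# plasma ~60% of blood volume, filtration fraction ~20% -> GFR ~120 mL/min/1.73 m2.
renal_blood_flow = 1000.0   # mL/min/1.73 m2
plasma_fraction = 0.60      # plasma as a fraction of blood volume
filtration_fraction = 0.20  # fraction of plasma flow filtered at the glomerulus

renal_plasma_flow = renal_blood_flow * plasma_fraction   # 600 mL/min/1.73 m2
gfr = renal_plasma_flow * filtration_fraction            # 120 mL/min/1.73 m2
print(f"Estimated GFR: {gfr:.0f} mL/min/1.73 m2")
```

The result sits comfortably within the measured normal ranges quoted above for healthy men and women.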

Optimal clinical measurement of GFR involves determining the renal clearance of a substance that is freely filtered without additional clearance because of tubular secretion or reduction as the result of reabsorption. Additionally, the substance should not be susceptible to metabolism within renal tissues and should not alter renal function. Given these conditions, the mGFR is equivalent to the renal clearance of the solute marker:

GFR = renal CL = Ae / AUC(0→t)

where renal CL is the renal clearance of the marker, Ae is the amount of marker excreted in the urine from time 0 to t, and AUC(0→t) is the area under the plasma-concentration-versus-time curve of the marker over the same interval.

Under steady-state conditions, for example, during a continuous infusion of the marker, the expression simplifies to

GFR = renal CL = Ae / (Css × t)

where Css is the steady-state plasma concentration of the marker achieved during continuous infusion. The continuous infusion method can also be employed without urine collection, where plasma clearance is calculated as CL = infusion rate / Css. This method depends on the attainment of steady-state plasma concentrations and accurate measurement of infusate concentrations. Plasma clearance can also be determined following a single-dose IV injection, with collection of multiple blood samples to estimate the area under the curve, AUC(0→∞). Here, clearance is calculated as CL = dose / AUC(0→∞). These plasma clearance methods commonly yield clearance values 10% to 15% higher than GFR measured by urine collection methods.
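The three clearance expressions above can be sketched as simple functions; the numbers in the usage example are illustrative, not patient data:

```python
# Sketch of the three clearance calculations described in the text.
# Units must be kept consistent by the caller (e.g., mg, mL, min).

def renal_cl_urine(amount_excreted, auc_0_t):
    """GFR = Ae / AUC(0->t): urine collection method."""
    return amount_excreted / auc_0_t

def plasma_cl_steady_state(infusion_rate, css):
    """CL = infusion rate / Css: continuous infusion, no urine collection."""
    return infusion_rate / css

def plasma_cl_single_dose(dose, auc_0_inf):
    """CL = dose / AUC(0->inf): single IV bolus with serial blood sampling."""
    return dose / auc_0_inf

# Illustrative example: 60 mg of marker excreted over an AUC of
# 0.5 mg*min/mL gives a renal clearance of 120 mL/min.
print(renal_cl_urine(60.0, 0.5))
```

Note that, per the text, the two plasma-only methods tend to run 10% to 15% above urine-collection GFR, so results from different methods are not interchangeable.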

Several markers have been used for the measurement of GFR, including both exogenous and endogenous compounds. Those administered as exogenous agents, such as inulin, sinistrin, iothalamate, iohexol, and radioisotopes, require specialized administration techniques and detection methods for the quantification of concentrations in serum and urine, but generally provide an accurate measure of GFR. Methods that employ endogenous compounds, such as creatinine or cystatin C, require less technical expertise, but produce results with greater variability. The GFR marker of choice depends on the purpose of the assessment and the cost of the compound, which ranges from $2,000 per vial for radioactive 125I-iothalamate to $6 per vial for nonradiolabeled iothalamate or iohexol.

Inulin and Sinistrin Clearance

Inulin is a large fructose polysaccharide obtained from the Jerusalem artichoke, dahlia, and chicory plants. It is not bound to plasma proteins, is freely filtered at the glomerulus, is not secreted or reabsorbed, and is not metabolized by the kidney. The volume of distribution of inulin approximates extracellular volume, or 20% of ideal body weight. Because it is eliminated by glomerular filtration, its elimination half-life is dependent on renal function and is approximately 1.3 hours in subjects with normal renal function. Measurement of plasma and urine inulin concentrations can be performed using high-performance liquid chromatography. Sinistrin, another polyfructosan, has characteristics similar to inulin: it is filtered at the glomerulus and not secreted or reabsorbed to any significant extent. It is a naturally occurring substance derived from the root of the North African vegetable red squill, Urginea maritima, and has a much higher degree of water solubility than inulin. Assay methods for sinistrin have been described using enzymatic procedures, as well as high-performance liquid chromatography with electrochemical detection. Alternatives to inulin as a marker of GFR have been sought because of problems of availability, high cost, sample preparation, and assay variability.

Iothalamate Clearance

Iothalamate is an iodine-containing radiocontrast agent that is available in both radiolabeled (125I) and nonradiolabeled forms. This agent is handled in a manner similar to that of inulin: it is freely filtered at the glomerulus and does not undergo substantial tubular secretion or reabsorption. The nonradiolabeled form is most widely used to measure GFR in ambulatory and research settings, and can safely be administered by IV bolus, continuous infusion, or subcutaneous injection. Plasma and urine iothalamate concentrations can be measured using high-performance liquid chromatography. Plasma clearance methods that do not require urine collections have been shown to be highly correlated with renal clearance, making them particularly well suited for longitudinal evaluations of renal function. These plasma clearance methods require two-compartment modeling approaches because accuracy is dependent on the duration of sampling. For example, Agarwal et al. demonstrated that short sampling intervals can overestimate GFR, particularly in patients with severely reduced GFR. In individuals with GFR more than 30 mL/min/1.73 m2 (greater than 0.29 mL/s/m2), a 2-hour sampling strategy yielded GFR values that were 54% higher than 10-hour sampling, whereas 5-hour sampling was 17% higher. In individuals with GFR less than 30 mL/min/1.73 m2, the 5-hour GFR was 36% higher and the 2-hour GFR was 126% higher than the 10-hour measurement. The authors proposed a 5- to 7-hour sampling period with eight plasma samples as the most appropriate and feasible approach for most GFR evaluations.


Iohexol, a nonionic, low-osmolar, iodinated contrast agent, has also been used for the determination of GFR. It is eliminated almost entirely by glomerular filtration, and plasma and renal clearance values are similar to those observed with other marker agents: strong correlations of 0.90 or greater and significant relationships with iothalamate have been reported. These data support iohexol as a suitable alternative marker for the measurement of GFR. A reported advantage of this agent is that a limited number of plasma samples can be used to quantify iohexol plasma clearance. For patients with a reduced GFR, more sampling time must be allotted: more than 24 hours if the eGFR is less than 20 mL/min.

Radiolabeled Markers

The GFR has also been quantified using radiolabeled markers, such as 125I-iothalamate, 99mTc-DTPA, and 51Cr-ethylenediaminetetraacetic acid (51Cr-EDTA). These relatively small molecules are minimally bound to plasma proteins and do not undergo tubular secretion or reabsorption to any significant degree. 125I-iothalamate and 99mTc-DTPA are used in the United States, whereas 51Cr-EDTA is used extensively in Europe. The use of radiolabeled markers allows one to determine the individual contribution of each kidney to total renal function. Various protocols exist for the administration of these markers and the subsequent measurement of GFR using either plasma or renal clearance calculation methods. The nonrenal clearance of these agents appears to be low, suggesting that plasma clearance is an acceptable technique except in patients with severe renal insufficiency (GFR less than 30 mL/min). Indeed, highly significant correlations between the renal clearances of the radiolabeled markers have been demonstrated. Although the total radioactive exposure to patients is usually minimal, the use of these agents does require compliance with radiation safety committees and appropriate biohazard waste disposal.

Optical Real-Time Glomerular Filtration Rate Markers

A clinically applicable technique to rapidly measure GFR, particularly in critically ill patients with unstable kidney function, is highly desirable. The currently available GFR measurement approaches, as outlined above, are technically demanding, time-consuming, and often cost-prohibitive. Research is underway to develop rapid, accurate, safe, and inexpensive techniques to address this need.


Although the measured (24-hour) CLcr has been used as an approximation of GFR for decades, it has limited clinical utility for a multiplicity of reasons. A short-duration, witnessed mCLcr correlates well with mGFR based on iothalamate clearance performed using the single-injection technique. In a multicenter study of 136 patients with type 1 diabetic nephropathy, the correlations of simultaneous mCLcr and 24-hour CLcr (compared with CLiothalamate) were 0.81 and 0.49, respectively, indicating increased variability with the 24-hour clearance determination. In a selected group of 110 patients, measurement of a 4-hour CLcr during water diuresis provided the best estimate of the GFR as determined by CLiothalamate. Furthermore, the ratio of CLcr to CLiothalamate did not appear to increase as the GFR decreased. These data suggest that a short collection period with water diuresis may be the best CLcr method for estimation of GFR.

A limitation of using creatinine as a filtration marker is that it undergoes tubular secretion. Tubular secretion augments the filtered creatinine by approximately 10% in subjects with normal kidney function. If the nonspecific Jaffe reaction is used, which overestimates the Scr by approximately 10% because of noncreatinine chromogens, then the measured CLcr is a very good measure of GFR in patients with normal kidney function. Tubular secretion, however, increases to as much as 100% in patients with kidney disease, resulting in mCLcr values that markedly overestimate GFR. For example, Bauer et al. reported that the CLcr-to-CLinulin ratio in subjects with mild impairment was 1.20; for those with moderate impairment, it was 1.87; and in those with severe impairment, it was 2.32. Thus, a mCLcr is a poor indicator of GFR in patients with moderate to severe renal insufficiency, that is, stages 3 to 5 CKD.

Because cimetidine blocks the tubular secretion of creatinine, the potential role of several oral cimetidine regimens to improve the accuracy and precision of mCLcr as an indicator of GFR has been evaluated. The CLcr-to-CLDTPA ratio declined from 1.33 with placebo to 1.07 when 400 mg of cimetidine was administered four times a day for 2 days prior to and during the clearance determination. Similar results were observed when a single 800-mg dose of cimetidine was given 1 hour prior to the simultaneous determination of CLcr and CLiothalamate; the ratio of CLcr to CLiothalamate was reduced from a mean of 1.53 to 1.12. Thus, a single oral dose of 800 mg of cimetidine should provide adequate blockade of creatinine secretion to improve the accuracy of a CLcr measurement as an estimate of GFR in patients with stage 3 to 5 CKD.

To minimize the impact of diurnal variations in Scr on CLcr, the test is usually performed over a 24-hour period with the plasma creatinine obtained in the morning, as long as the patient has stable kidney function. Collection of urine remains a limiting factor in the 24-hour CLcr because of incomplete collections, and interconversion between creatinine and creatine that can occur if the urine is not maintained at a pH less than 6.

Estimation of Glomerular Filtration Rate

Because of the invasive nature and technical difficulties of directly measuring GFR in clinical settings, many equations for estimating GFR have been proposed over the past 10 years. A series of related GFR-estimating equations have been developed for the primary purpose of identifying and classifying CKD in many patient populations. The initial equation was derived from multiple regression analysis of data obtained from the 1,628 patients enrolled in the Modification of Diet in Renal Disease (MDRD) Study, in which GFR was measured using the renal clearance of 125I-iothalamate. A four-variable version of the original MDRD equation (MDRD4), based on plasma creatinine, age, sex, and race, was shown to provide an estimate of GFR similar to that of its six-variable predecessor. However, this equation was shown to be inaccurate at GFR more than 60 mL/min/1.73 m2, for reasons not associated with standardization of Scr assay results. A recent study conducted by the FDA compared the eGFR estimated by the MDRD4 equation to the CLcr estimated by the Cockcroft-Gault equation in 973 subjects enrolled in pharmacokinetic studies conducted for new chemical entities submitted to the FDA from 1998 to 2010. The MDRD4 eGFR results consistently overestimated the CLcr calculated by the CG method. The FDA investigators concluded that "For patients with advanced age, low weight, and modestly elevated serum creatinine concentration values, further work is needed before the MDRD equations can replace the CG equation for dose adjustment in approved product information labeling."
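The MDRD4 calculation itself is simple to code. The sketch below uses the commonly published IDMS-traceable coefficients (175, -1.154, -0.203, 0.742 for females, 1.212 for blacks); these constants and the variable names are not from the text above and should be verified against the laboratory's reference before clinical use.

```python
def egfr_mdrd4(scr_mg_dl, age_yr, female=False, black=False):
    """4-variable MDRD eGFR (mL/min/1.73 m2), IDMS-traceable form.

    Inputs: standardized serum creatinine (mg/dL), age (years),
    and indicator covariates for sex and race.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```

As a worked example, a hypothetical 50-year-old non-black man with an Scr of 1.0 mg/dL gives an eGFR of about 79 mL/min/1.73 m2, within the range where the equation performs acceptably.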

A single eGFR equation may not be best suited for all populations, and the choice of equation has been shown to affect CKD prevalence estimates. This has led to a revitalized interest in the development of new equations to estimate GFR. The newest equations proposed for the estimation of GFR have been derived from wider CKD populations than the MDRD study, and they include the CKD-EPI and the Berlin Initiative Study (BIS) equations. The CKD-EPI equation was developed from pooled study data involving 5,500 patients with mean GFR values of 68 ± 40 mL/min/1.73 m2. It has been reported that the CKD-EPI equation is less biased but similarly imprecise compared with MDRD4.

CKD-EPI Equation

The CKD-EPI study equation was compared to the MDRD equation using pooled data from patients enrolled in research or clinical outcomes studies in which GFR was measured by an exogenous tracer. The results of the study indicated that the bias of the CKD-EPI equation was 61% to 75% lower than that of the MDRD equation for patients with eGFR of 60 to 119 mL/min/1.73 m2. Based on these findings, the CKD-EPI equation is most appropriate for estimating GFR in individuals with eGFR values more than 60 mL/min/1.73 m2. Both KDOQI and the Australasian Creatinine Consensus Working Groups now recommend that clinical laboratories switch from MDRD4 to CKD-EPI for routine automated reporting. If one's clinical laboratory does not automatically calculate eGFR using CKD-EPI, this poses a practical challenge, since the equation requires a more complex algorithm than the MDRD equation.
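If the laboratory does not report a CKD-EPI eGFR, the equation can be computed directly. This is a hedged sketch of the 2009 CKD-EPI creatinine equation using the sex-specific k (0.7/0.9) and alpha (-0.329/-0.411) values; the remaining constants (141, -1.209, 0.993, 1.018, 1.159) are the commonly published ones and should be checked against the primary reference.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age_yr, female=False, black=False):
    """2009 CKD-EPI creatinine eGFR (mL/min/1.73 m2)."""
    k = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / k, 1.0) ** alpha    # applies below-threshold Scr
            * max(scr_mg_dl / k, 1.0) ** -1.209   # applies above-threshold Scr
            * 0.993 ** age_yr)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

The min/max construction is what makes this "a more complex algorithm" than MDRD: the creatinine exponent changes depending on whether Scr falls below or above the sex-specific threshold k.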

Limitations of the pooled analysis approach used to develop the MDRD and CKD-EPI equations include the use of different GFR markers between studies, different methods of administration of the GFR markers and different clearance calculations. These limitations may partly explain the reduced accuracy observed with the MDRD4 equation at GFR values more than 60 mL/min/1.73 m2. Additionally, a recent inspection of the MDRD GFR study data showed that large intrasubject variability in GFR measures was a likely contributor to the inaccuracy of the gold standard method that was used to create the MDRD equation.

Cystatin C-Based Equations

Addition of serum cysC as a covariate in equations to estimate GFR has been employed as a means to improve creatinine-based estimations of GFR that historically were limited to the following variables: lean body mass, age, sex, race, and Scr.

[Table e42-6. Equations used to estimate GFR]

  • Alb, serum albumin concentration (g/dL); BUN, blood/serum urea nitrogen concentration (mg/dL); CKD, chronic kidney disease; cysC, cystatin C; eGFR, estimated glomerular filtration rate; Scr, serum or plasma creatinine (mg/dL).
  • k is 0.7 for females and 0.9 for males, alpha is -0.329 for females and -0.411 for males, min indicates the minimum of Scr/k or 1, and max indicates the maximum of Scr/k or 1.

A significant limitation of serum cysC as a renal biomarker is the influence of body mass on serum concentrations. When a serum cysC-based estimate of GFR, which incorporates serum cysC, age, race, and sex, was used, a higher prevalence of CKD was reported in obese patients compared with the MDRD4 equation. In a recent retrospective analysis of over 1,000 elderly individuals (mean age 85 years) enrolled in the Cardiovascular Health Study, GFR was estimated using the CKD-EPI and CKD-EPI-cysC equations, specifically equation 9 in Table e42-6. In this population, all-cause mortality rates were significantly different between equations, suggesting that cysC does not accurately predict mortality risk in patients with low Scr, reduced muscle mass, and malnutrition. The combined use of serum cysC and creatinine in modified CKD-EPI equations has recently been reported. The CKD-EPIcreatinine_cystatin C equation, equation 10 in Table e42-6, is now recommended for use in patients in whom unreliable serum creatinine values are anticipated, such as those with extremes in body mass, unusual diets, or creatinine assay interferences.

Liver Disease

Evaluation of renal hemodynamics is particularly complicated in patients with liver disease and cirrhosis, in whom filtration fraction is associated with the degree of ascites, renal artery vasoconstriction, and vascular resistance. The estimation of CLcr or GFR can be problematic in patients with preexisting liver disease and renal impairment. Lower-than-expected Scr values may result from reduced muscle mass, a protein-poor diet, diminished hepatic synthesis of creatine (a precursor of creatinine), and fluid overload; each of these can lead to significant overestimation of CLcr.

Evaluations of new eGFR equations for use in patients with liver disease have yielded mixed results. In summary, renal function assessment in patients with hepatic disease should be performed by measuring glomerular filtration; when estimation is necessary, GFR equations that combine creatinine and cysC are preferred.

[Clinical Art][Pharmacokinetics] Interpretation of Plasma Drug Concentrations (Steady-State)

November 11, 2016 Clinical Skills, Critical Care, Pharmacokinetics, Practice No comments , , , , , , , , , , , ,

Plasma drug concentrations are measured in the clinical setting to determine whether a potentially therapeutic or toxic concentration has been produced by a given dosage regimen. This process is based on the assumption that plasma drug concentrations reflect drug concentrations at the receptor and, therefore, can be correlated with pharmacologic response. This assumption is not always valid. When plasma samples are obtained at inappropriate times, or when other factors (such as delayed absorption or altered plasma binding) confound the usual pharmacokinetic behavior of a drug, the interpretation of serum drug concentrations can lead to erroneous pharmacokinetic and pharmacodynamic conclusions and ultimately inappropriate patient care decisions. These factors are discussed below.

Confounding Factors

To properly interpret a plasma concentration, it is essential to know when a plasma sample was obtained in relation to the last dose administered and when the drug regimen was initiated.

  • If a plasma sample is obtained before distribution of the drug into tissue is complete, the plasma concentration will be higher than predicted on the basis of dose and response. (avoidance of distribution)
  • Peak plasma levels are helpful in evaluating the dose of antibiotics used to treat severe, life-threatening infections. Although serum concentrations for many drugs peak 1 to 2 hours after an oral dose is administered, factors such as slow or delayed absorption can significantly delay the time at which peak serum concentrations are attained. Large errors in the estimation of Css max can occur if the plasma sample is obtained at the wrong time. Therefore, with few exceptions, plasma samples should be drawn as troughs, that is, just before the next dose (Css min), when determining routine drug concentrations in plasma. These trough levels are less likely to be influenced by absorption and distribution problems. (slow or delayed absorption)
  • When the full therapeutic response of a given drug dosage regimen is to be assessed, plasma samples should not be obtained until steady-state concentrations of the drug have been achieved. If drug doses are increased or decreased on the basis of drug concentrations that have been measured while the drug is still accumulating, disastrous consequences can occur. Nevertheless, in some clinical situations it is appropriate to measure drug levels before steady state has been achieved. If possible, plasma samples should be drawn after a minimum of two half-lives, because clearance values calculated from drug levels obtained less than one half-life after a regimen has been initiated are very sensitive to small differences in the volume of distribution and minor assay errors. (whether steady state has been attained)
  • The impact of drug plasma protein binding on the interpretation of plasma drug concentrations has been discussed previously in the thread "The Plasma Protein Concentration and The Interpretation of TDM Report".

Revising Pharmacokinetic Parameters

The process of using a patient's plasma drug concentration and dosing history to determine patient-specific pharmacokinetic parameters can be complex and difficult. A single plasma sample obtained at the appropriate time can yield information to revise only one parameter, either the volume of distribution or clearance, but not both. Drug concentrations measured from poorly timed samples may prove to be useless in estimating a patient's V or Cl values. Thus, the goal is to obtain plasma samples at times that are likely to yield data that can be used with confidence to estimate pharmacokinetic parameters. In addition, it is important to evaluate available plasma concentration data to determine whether they can be used to estimate, with some degree of confidence, V and/or Cl. The goal in pharmacokinetic revisions is not only to recognize which pharmacokinetic parameter can be revised, but also the accuracy or confidence one has in the revised or patient-specific pharmacokinetic parameter. In the clinical setting, based on the way drugs are dosed and the recommended times to sample, bioavailability is almost never revised, volume of distribution is sometimes revised, and clearance is most often the pharmacokinetic parameter that can be revised to determine a patient-specific value.

Volume of Distribution

A plasma concentration that has been obtained soon after administration of an initial bolus is primarily determined by the dose administered and the volume of distribution. This assumes that both the absorption and distribution phases have been avoided.

C1 = [(S) (F) (Loading Dose) / V] × e^(-Kt1) (IV Bolus Model)

When e^(-Kt1) approaches 1 (i.e., when t1 is much less than t1/2), the plasma concentration (C1) is primarily a function of the administered dose and the apparent volume of distribution. At this point, very little drug has been eliminated from the body. As a clinical guideline, a patient's volume of distribution can usually be estimated if the absorption and distribution phases are avoided and t1, the interval between administration and sampling, is less than or equal to one-third of the drug's half-life. As t1 exceeds one-third of a half-life, the measured concentration is increasingly influenced by clearance, and as more of the drug is eliminated (i.e., as t1 increases), it becomes difficult to estimate the patient's V with any certainty. The specific application of this clinical guideline depends on the confidence with which one knows clearance. If clearance is extremely variable and uncertain, a time interval of less than one-third of a half-life would be necessary in order to revise volume of distribution. On the other hand, if a patient-specific value for clearance has already been determined, then t1 could exceed one-third of a half-life and a reasonably accurate estimate of volume of distribution could still be obtained. It is important to recognize that the pharmacokinetic parameter that most influences the drug concentration is not determined by the model chosen to represent the drug level. For example, even if the dose is modeled as a short infusion, the volume of distribution can still be the important parameter controlling the plasma concentration. V is not explicitly defined in the equation (see it below); nevertheless, it is incorporated into the elimination rate constant (K).

C2 = [(S) (F) (Dose/tin) / Cl] × (1 − e^(-Ktin)) × (e^(-Kt2))

Although one would not usually select this equation to demonstrate that the drug concentration is primarily a function of volume of distribution, it is important to recognize that the relationship between the observed drug concentration and volume is not altered as long as the total elapsed time (tin + t2) does not exceed one-third of a half-life.

Our assumption in evaluating the volume of distribution is that although we have not sampled beyond one-third of a t1/2, we have waited until the drug absorption and distribution process is complete.
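Rearranging the IV bolus equation for V gives a simple revision formula. The sketch below uses illustrative units and numbers (not from the text) to show that when t1 is short relative to t1/2, the e^(-Kt1) term is close to 1 and the estimate is insensitive to the assumed elimination rate:

```python
import math

def revise_v(dose, c1, k=0.0, t1=0.0, s=1.0, f=1.0):
    """V = (S)(F)(Dose) * e^(-K*t1) / C1, from one early post-bolus level.

    With t1 << t1/2 the exponential term approaches 1, so the estimate
    depends mainly on the dose and the measured concentration.
    """
    return s * f * dose * math.exp(-k * t1) / c1

# Hypothetical example: 1,000-mg IV bolus, level of 25 mg/L drawn 0.5 hr
# later in a patient with an assumed K of 0.1 hr^-1 (t1/2 about 6.9 hr):
v_est = revise_v(dose=1000.0, c1=25.0, k=0.1, t1=0.5)
```

Here v_est is about 38 L; ignoring elimination entirely (K = 0) would give 40 L, a difference of only about 5%, because the sample was drawn well within one-third of the half-life.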


Clearance

A plasma drug concentration that has been obtained at steady state from a patient who is receiving a constant drug infusion is determined by clearance.

Css ave = (S) (F) (Dose / tau) / Cl

So, the average steady-state plasma concentration is not influenced by volume of distribution. Therefore, plasma concentrations that represent the average steady-state level can be used to estimate a patient's clearance value, but they cannot be used to estimate a patient's volume of distribution. Generally, all steady-state plasma concentrations within a dosing interval that is short relative to a drug's half-life (tau =< 1/3 t1/2) approximate the average concentration. Therefore, these concentrations are also primarily a function of clearance and only minimally influenced by V.
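Solving the Css ave relationship for clearance gives the usual revision formula; the numbers below are purely illustrative:

```python
def revise_cl(dose, tau, css_ave, s=1.0, f=1.0):
    """Cl = (S)(F)(Dose/tau) / Css ave, from an average steady-state level."""
    return s * f * (dose / tau) / css_ave

# Hypothetical example: 300 mg every 8 hours producing an average
# steady-state level of 15 mg/L implies a clearance of 2.5 L/hr.
cl_est = revise_cl(dose=300.0, tau=8.0, css_ave=15.0)
```

Note that V appears nowhere in the calculation, which is the point of the paragraph above: an average steady-state level revises clearance only.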

Also, the following equation could be used,

Css1 = [(S)(F)(Dose)/V] / (1 − e^(-Kτ)) × (e^(-Kt1))

the expected volume of distribution should be retained and the elimination rate constant adjusted such that Css1 at t1 equals the observed drug plasma concentration.

Sensitivity Analysis

Whether a measured drug concentration is a function of clearance or volume of distribution is not always apparent. When this is difficult to ascertain, one can examine the sensitivity or responsiveness of the predicted plasma concentration to a parameter by changing one parameter while holding the other constant. For example, for a maintenance infusion, the plasma concentration (C1) at some time interval (t1) after the infusion has been started should be:


C1 = [(S)(F)(Dose/τ) / Cl] × (1 − e^(-Kt1))

When the fraction of steady state that has been reached (1 − e^(-Kt1)) is small, large changes in clearance are frequently required to adjust a predicted plasma concentration to the appropriate value. If a large percentage change in the clearance value results in a disproportionately small change in the predicted drug level, then something other than clearance is controlling (responsible for) the drug concentration. In this case, the volume of distribution and the amount of drug administered are the primary determinants of the observed concentration. Also, in cases where the drug concentration is very low, assay error or sensitivity may be the predominant factor determining the drug concentration, making the ability to revise any pharmacokinetic parameter limited if not impossible.

This type of sensitivity analysis is useful to reinforce the concept that the most reliable revisions in pharmacokinetic parameters are made when the predicted drug concentration changes by approximately the same percentage as the pharmacokinetic parameter undergoing revision.
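A quick numeric sketch (assumed parameters, not from the text) makes the point: early in a constant-rate infusion, doubling clearance changes the predicted level only slightly, so a level drawn then cannot be used to revise Cl with confidence.

```python
import math

def infusion_level(cl, v, rate, t1, s=1.0, f=1.0):
    """C1 = (S)(F)(Rate)/Cl * (1 - e^(-K*t1)), with K = Cl/V,
    at time t1 after starting a constant-rate infusion."""
    k = cl / v
    return (s * f * rate / cl) * (1.0 - math.exp(-k * t1))

# Hypothetical patient: V = 40 L, infusion 50 mg/hr, sampled at 2 hr.
c_lo = infusion_level(cl=2.0, v=40.0, rate=50.0, t1=2.0)   # Cl = 2 L/hr
c_hi = infusion_level(cl=4.0, v=40.0, rate=50.0, t1=2.0)   # Cl doubled
change = abs(c_hi - c_lo) / c_lo
```

Here a 100% change in clearance shifts the predicted level by only about 5%, because the fraction of steady state reached, 1 − e^(-Kt1), is still small; the level is mainly a function of V and the amount infused.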

When a predicted drug concentration changes in direct or inverse proportion to an alteration in only one of the pharmacokinetic parameters, it is likely that a measured drug concentration can be used to estimate that patient-specific parameter. But when both clearance and volume of distribution have a significant influence on the prediction of a measured drug concentration, revision of a patient's pharmacokinetic parameters will be less certain, because there is an infinite number of combinations of clearance and volume of distribution values that could be used to predict the observed drug concentration. When this occurs, the patient's specific pharmacokinetic characteristics can be estimated by adjusting one or both of the pharmacokinetic parameters. Nevertheless, in most cases additional plasma level sampling will be needed to accurately predict the patient's clearance or volume of distribution so that subsequent dosing regimens can be adjusted.

When the dosing interval is much shorter than the drug's half-life, the changes in concentration within a dosing interval are relatively small, and any drug concentration obtained within a dosing interval can be used as an approximation of the average steady-state concentration. Even though Css max and Css min exist,

Css max = [(S)(F)(Dose)/V] / (1 − e^(-Kτ))


Css min = [(S)(F)(Dose)/V] / (1 − e^(-Kτ)) × (e^(-Kτ))

and could be used to predict peak and trough concentrations, a reasonable approximation could also be achieved by using the Css ave, that is

Css ave = (S)(F)(Dose/τ) / Cl

This suggests that even though Css max and Css min do not contain the parameter clearance per se, the elimination rate constant functions in such a way that the clearance derived from Css max or Css min and Css ave would all essentially be the same.
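This can be checked numerically. With assumed illustrative values (Cl = 2 L/hr and V = 40 L, so t1/2 ≈ 13.9 hr) and dosing every 4 hours (τ < 1/3 t1/2), the peak, trough, and average concentrations stay close together, and treating either extreme as if it were Css ave changes the derived clearance only modestly:

```python
import math

cl, v, dose, tau = 2.0, 40.0, 100.0, 4.0   # assumed illustrative values
k = cl / v
css_max = (dose / v) / (1.0 - math.exp(-k * tau))
css_min = css_max * math.exp(-k * tau)
css_ave = (dose / tau) / cl

# Back-calculating clearance from peak or trough as if it were Css ave:
cl_from_max = (dose / tau) / css_max
cl_from_min = (dose / tau) / css_min
```

With these numbers the peak and trough each sit within roughly 10% of the average, so the clearance recovered from any of the three levels is essentially the same, as the paragraph above argues.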

In the situation in which the dosing interval is greater than one-third of a half-life, the use of Css max and Css min is appropriate, as not all drug concentrations within the dosing interval can be considered the Css ave. However, as long as the dosing interval has not been extended beyond one half-life, clearance is still the primary pharmacokinetic parameter responsible for the drug concentrations within the dosing interval. Although the elimination rate constant and volume of distribution might be manipulated in Css max and Css min, it is only the product of those two numbers (i.e., clearance) that can be known with any certainty: Cl = (K) (V).

If a drug is administered at a dosing interval that is much longer than the apparent half-life, peak concentrations may be primarily a function of volume of distribution. Since most of the dose is eliminated within a dosing interval, each dose can be thought of as something approaching a new loading dose. Of course, under steady-state conditions, at some point within the dosing interval the plasma concentration (Css ave) will be determined by clearance. Trough plasma concentrations in this situation are a function of both clearance and volume of distribution. Since clearance and volume of distribution are both critical to the prediction of peak and trough concentrations when the dosing interval is much longer than the drug t1/2, a minimum of two plasma concentrations is needed to accurately establish patient-specific pharmacokinetic parameters and a dosing regimen that will achieve desired peak and trough concentrations.

Choosing A Model to Revise or Estimate A Patient's Clearance at Steady State

As previously discussed, a drug's half-life often determines the pharmacokinetic equation that should be used to make a revised or patient-specific estimate of a pharmacokinetic parameter. A common problem encountered clinically, however, is that the half-life observed in the patient often differs from the expected value. Since a change in clearance, volume of distribution, or both may account for this unexpected value, the appropriate pharmacokinetic model is often unclear. One way to approach this dilemma is to first calculate the expected change in plasma drug concentration associated with each dose:

delta C = (S) (F) (Dose) / V

where delta C is the change in concentration following the administration of each dose into the patient's volume of distribution. This change in concentration can then be compared to the steady-state trough concentration measured in the patient.

(S) (F) (Dose) / V versus Css min


delta C versus Css min

When the dosing interval (tau) is much less than the drug half-life, delta C will be small when compared to Css min. As the dosing interval increases relative to the drug's half-life, delta C will increase relative to Css min. Therefore, a comparison of delta C or (S) (F) (Dose) / V to Css min can serve as a guide to estimating the drug t1/2 and the most appropriate pharmacokinetic model or technique to use for revision. With few exceptions, drugs that undergo plasma level monitoring are most often dosed at intervals less than or equal to their half-lives. Therefore, clearance is the pharmacokinetic parameter most often revised or calculated for the patient in question. The following guidelines can be used to select the pharmacokinetic model that is the least complex and therefore the most appropriate to estimate a patient-specific pharmacokinetic parameter.

Condition 1

When, (S) (F) (Dose) / V =< 1/4 Css min

Then, tau =<1/3 t1/2

Under these conditions, Css min ≈ Css ave

And Cl can be estimated by Cl = (S) (F) (Dose / tau) / Css ave

Rules/Conditions: Must be at steady state.

Condition 2

When, (S) (F) (Dose) / V =< Css min

Then, tau =< t1/2

Under these conditions, Css min + (1/2) (S) (F) (Dose) / V ≈ Css ave

And Cl can be estimated by Cl = (S) (F) (Dose / tau) / Css ave

Rules/Conditions: Must be at steady state; C is Css min; Bolus model for absorption is acceptable (dosage form is not sustained release; short infusion model is not required, that is, tin =<1/6t1/2)

Condition 3

When, (S) (F) (Dose) / V > Css min

Then, tau > t1/2

Under these conditions: Css min + (S) (F) (Dose) / V = Css max

where V is an assumed value from the literature.

K is revised (Krevised):

Krevised = ln {[Css min + (S) (F) (Dose) / V] / Css min} / tau = ln (Css max / Css min) / tau

Rules/Conditions: Must be at steady state; C is Css min; Bolus model for absorption is acceptable (dosage form is not sustained release; short infusion model is not required, that is, tin =< 1/6 t1/2)

Note that the approaches used become more complex as the dosing interval increases relative to the drug half-life. If a drug is administered at a dosing interval less than or equal to one-third of its half-life and the technique in Condition 3 is used to revise clearance, the revised clearance would be correct. The calculation is not wrong, just unnecessarily complex. However, if a drug is administered at a dosing interval that exceeds one half-life and the technique in Condition 1 is used to revise clearance, the revised clearance value would be inaccurate because Css min cannot be assumed to be approximately equal to Css ave. While it could be argued that the technique used in Condition 3 would suffice for all the previous conditions, it is more cumbersome and tends to focus on the intermediate parameters, K and V, rather than Cl. One should also be aware that as the dosing interval increases relative to the drug's half-life, the confidence in a revised clearance diminishes, because the volume of distribution, which is an assumed value from the literature, begins to influence the revised clearance to a greater degree. As a general rule, confidence in Cl is usually good when the dosing interval is < t1/2, steady state has been achieved, and drug concentrations are obtained properly.
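The three conditions can be folded into one helper function. This is a sketch of the guideline above, not a validated clinical tool: V is the assumed literature value, the variable names are illustrative, and the branch thresholds mirror Conditions 1 to 3.

```python
import math

def revise_cl_at_ss(dose, tau, css_min, v, s=1.0, f=1.0):
    """Pick the least-complex clearance revision from Conditions 1-3.

    delta_c = (S)(F)(Dose)/V is compared with the measured steady-state
    trough (Css min) to infer how tau relates to the drug's half-life.
    """
    delta_c = s * f * dose / v
    if delta_c <= css_min / 4.0:
        css_ave = css_min                      # Condition 1: tau <= 1/3 t1/2
    elif delta_c <= css_min:
        css_ave = css_min + delta_c / 2.0      # Condition 2: tau <= t1/2
    else:                                      # Condition 3: tau > t1/2
        css_max = css_min + delta_c
        k_revised = math.log(css_max / css_min) / tau
        return k_revised * v                   # Cl = (K)(V)
    return s * f * (dose / tau) / css_ave
```

For example, with a hypothetical trough of 2 mg/L on 1,000 mg every 24 hours and an assumed V of 40 L, delta C (25 mg/L) far exceeds the trough, so the Condition 3 branch fires and the clearance comes from Krevised × V.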

[Clinical Art][Circulation] Hemodynamic Monitoring – Tissue Oxygenation and Cardiac Output

October 24, 2016 Cardiology, Clinical Skills, Critical Care, Hemodynamics No comments , , , , , , , , , , , , , , , ,

Key Points

  • No hemodynamic monitoring device will improve patient outcome unless coupled to a treatment, which itself improves outcome.
  • Low venous oxygen saturations need not mean circulatory shock but do imply circulatory stress, as they may occur in the setting of hypoxemia, anemia, exercise, as well as circulatory shock.
  • There is no "normal" cardiac output, only one that is adequate or inadequate to meet the metabolic demands of the body. Thus, targeting a specific cardiac output value without reference to metabolic need, or oxygen-carrying capacity of the blood, is dangerous.
  • Cardiac output is estimated, not measured, by all devices routinely used in bedside monitoring (though we shall call it measured in this text).
  • Cardiac output estimates using arterial pulse pressure contour analysis cannot be interchanged among devices, and all suffer to a greater or lesser extent from the changes in peripheral vasomotor tone commonly seen in the critically ill.
  • Since metabolic demands can vary rapidly, continuous or frequent measures of cardiac output are preferred to single or widely spaced individual measures.
  • Integrating several physiologic variables in the assessment of the adequacy of the circulation usually gives a clearer picture than just looking at one variable.
  • Integrating cardiac output with other measures, like venous oxygen saturation, can be very helpful in defining the adequacy of blood flow.

Clinical Judgement of Hypoperfusion

Tissue hypoperfusion is a clinical syndrome, thus the presentation depends on which organ(s)/organ system(s) are being hypoperfused.

Table 1 Common Clinical Presentations of Tissue Hypoperfusion
Brain/CNS altered mental status, confusion
Pulmonary system dyspnea on exertion
Gastrointestinal tract slowed bowel function, abdominal discomfort, nausea/vomiting, anorexia
Renal system decreased urine output, increased serum creatinine
Hepatic system increased transaminases
Extremities/constitutional cool extremities, poor capillary refill, general fatigue/malaise
SvO2 decreased (in the setting of normal CaO2 and VO2)
Serum lactate level elevated serum lactate level

Note that factors other than tissue hypoperfusion can produce presentations similar to those discussed above. For example, in sepsis, elevated lactate levels may be caused by mechanisms other than hypoperfusion. In another case, a patient's low urine output might be caused by the patient's ESRD per se rather than by low renal perfusion. Likewise, a patient with hypoglycemia or severe hyponatremia (which causes neuronal edema) may lose consciousness in the absence of hypoperfusion. These possibilities call for careful differential diagnosis, or differential clinical judgment.

Tissue Oxygenation

Although mean arterial pressure (MAP) is a primary determinant of organ perfusion, normotension can coexist with circulatory shock.

Since the metabolic demand of tissues varies with external (exercise) and internal (basal metabolism, digestion, fever) stresses, there is no "normal" cardiac output that the bedside caregiver can target and be assured of perfusion adequacy. Cardiac output is either adequate or inadequate to meet the metabolic demands of the body. Thus, although measures of cardiac output are important, their absolute values are relevant only at the extremes and when targeting specific clinical conditions, such as preoptimization therapy.

How then does one know whether circulatory sufficiency is present or circulatory shock exists? Since arterial pressure is the primary determinant of organ blood flow, systemic hypotension (i.e., mean arterial pressure <60 mm Hg) must result in tissue hypoperfusion. Organ perfusion pressure can be approximated as mean arterial pressure (MAP) relative to tissue or outflow pressure. But if intracranial pressure or intra-abdominal pressure increases, then estimating cerebral or splanchnic perfusion pressure from MAP alone will grossly overestimate organ perfusion pressure. In addition, baroreceptors in the carotid body and aortic arch increase vasomotor tone to keep cerebral perfusion constant if flow decreases, and the associated increase in systemic sympathetic tone alters local vasomotor tone to redistribute blood flow away from more efficiently O2-extracting tissues, sustaining MAP and global O2 consumption (VO2) in the setting of an inappropriately decreasing DO2. Thus, although systemic hypotension is a medical emergency and reflects severe circulatory shock, the absence of systemic hypotension does not ensure that all tissues are being adequately perfused.
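
The approximation above (organ perfusion pressure as MAP minus the tissue or outflow pressure) can be made concrete with a minimal sketch. All numbers below are illustrative, not treatment targets:

```python
def perfusion_pressure(map_mmhg: float, outflow_mmhg: float) -> float:
    """Approximate organ perfusion pressure as MAP minus tissue/outflow pressure."""
    return map_mmhg - outflow_mmhg

# A "normal" MAP can hide a low cerebral perfusion pressure when
# intracranial pressure (ICP) is elevated (values are illustrative).
cpp_normal_icp = perfusion_pressure(65, 10)  # ICP ~10 mm Hg -> CPP 55 mm Hg
cpp_high_icp = perfusion_pressure(65, 25)    # ICP ~25 mm Hg -> CPP 40 mm Hg
```

The same MAP of 65 mm Hg yields two very different perfusion pressures, which is why MAP alone can grossly overestimate cerebral or splanchnic perfusion when outflow pressure rises.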

  • Arterial pulse oximetry: SaO2
  • Venous oximetry: ScvO2, SvO2
  • Tissue oximetry: StO2
  • StO2 vascular occlusion test

Arterial Pulse Oximetry


Arterial blood O2 saturation (SaO2) can be estimated quite accurately at the bedside using pulse oximetry. Routinely, pulse oximeters are placed on a finger for convenience. However, if no pulse is sensed, then the readings are meaningless. Such finger pulselessness can be seen with peripheral vasoconstriction associated with hypothermia, circulatory shock, or vasospasm. Central pulse oximetry using transmission technology can be applied to the ear or bridge of the nose, and reflectance oximetry can be applied to the forehead, all of which tend to retain pulsatility if central pulsatile flow is present. Similarly, during cardiopulmonary bypass, when arterial flow is constant, pulse oximetry is inaccurate. The primary clinical uses of SaO2 are summarized below.

  • SaO2 is routinely used to identify hypoxemia. Hypoxemia is usually defined as an SaO2 of <90% (PaO2 of <60 mm Hg).
  • SaO2 is also used to identify the causes of hypoxemia. The most common causes of hypoxemia are ventilation-perfusion (V/Q) mismatch (we will discuss this topic in the clinical art of respiratory medicine) and shunt. With V/Q mismatch, alveolar hypoxia occurs in lung regions with increased flow relative to ventilation, such that the high blood flow rapidly depletes alveolar O2 before the next breath can refresh it. Accordingly, this process readily lends itself to improved oxygenation by increasing FiO2, which minimizes regional alveolar hypoxia. Collapsed or flooded lung units will not alter their alveolar O2 levels with this maneuver and are said to be refractory to increases in FiO2. Thus, by measuring the SaO2 response to slight increases in FiO2, one can separate V/Q mismatch (shuntlike states) from shunt (absolute intrapulmonary shunt, anatomical intracardiac shunts, alveolar flooding, atelectasis [collapse]). One merely measures SaO2 while switching from room air (FiO2 0.21) to 2 to 4 L/min by nasal cannula (FiO2 ~0.3). Importantly, atelectatic lung units (collapsed alveoli) should be recruitable by lung expansion, whereas flooded lung units and anatomical shunts should not. Thus, by performing sustained deep inspirations and having the patient sit up and take deep breaths, one should be able to separate easily recruitable atelectasis from shunt caused by anatomy or alveolar flooding. Sitting up and taking deep breaths is a form of exercise that may increase O2 extraction by the tissues, thus decreasing SvO2. The patient with atelectasis will increase alveolar ventilation, raising SaO2 despite the decrease in SvO2, whereas the patient with unrecruitable shunt will see SaO2 fall as the shunted blood carries the lower SvO2 to the arterial side. In summary, SaO2 is used to separate V/Q mismatch from shunt, and recruitable atelectasis from anatomical/flooding shunt.
  • Detection of volume responsiveness. Recent interest in the clinical applications of heart-lung interactions has centered on the effect of positive-pressure ventilation on venous return and subsequently cardiac output. In subjects who are volume responsive, arterial pulse pressure, as a surrogate for left ventricular (LV) stroke volume, decreases during expiration, and the magnitude of this variation is proportional to the degree of volume responsiveness. Since the pulse oximeter's plethysmographic waveform is a manifestation of the arterial pulse pressure, if pulse pressure varies from beat to beat, so will the plethysmographic deflection, which can be quantified. Several groups have documented that the maximal variation in the pulse oximeter's plethysmographic waveform during positive-pressure ventilation covaries with arterial pulse pressure variation and can be used in a similar fashion to identify subjects who are volume responsive.

PS: Intrapulmonary shunt fraction is increased in the following situations:

  • When the small airways are occluded; e.g., asthma
  • When the alveoli are filled with fluid; e.g., pulmonary edema, pneumonia
  • When the alveoli collapse; e.g., atelectasis
  • When capillary flow is excessive; e.g., in nonembolized regions of the lung in pulmonary embolism

Venous Oximetry


SvO2 is the gold standard for assessing circulatory stress. A low SvO2 defines increased circulatory stress, which may or may not be pathological.

To the extent that SaO2 and hemoglobin concentration yield an adequate arterial O2 content (CaO2), ScvO2 and SvO2 levels can be taken to reflect the adequacy of the circulation to meet the metabolic demands of the tissues. That said, one must examine the determinants of DO2, global oxygen consumption (VO2), and the effectiveness of tissue O2 extraction before using ScvO2 or SvO2 as markers of circulatory sufficiency.

Since VO2 must equal cardiac output times the difference between CaO2 and mixed venous O2 content (CvO2), i.e., VO2 = CO x (CaO2 – CvO2), if CaO2 remains relatively constant then CvO2 will vary in proportion to cardiac output (CvO2 = CaO2 – VO2 / CO). Since the amount of O2 dissolved in the plasma is very small, the primary factor determining changes in CvO2 is SvO2. Thus, SvO2 correlates well with the O2 supply-to-demand ratio.
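
Rearranging this Fick relation gives a quick estimate of where SvO2 should sit for a given cardiac output. A minimal sketch with illustrative resting values (the Hb, SaO2, and VO2 figures are assumptions, not measurements; dissolved O2 is ignored):

```python
def svo2_estimate(co_l_min, vo2_ml_min, hb_g_dl, sao2):
    """Estimate SvO2 from the Fick relation: CvO2 = CaO2 - VO2 / (CO x 10).

    Contents are in mL O2/dL; CO is in L/min (x10 converts to dL/min).
    """
    cao2 = 1.34 * hb_g_dl * sao2
    cvo2 = cao2 - vo2_ml_min / (co_l_min * 10)
    return cvo2 / (1.34 * hb_g_dl)

# Resting adult: CO 5 L/min, VO2 250 mL/min, Hb 14 g/dL, SaO2 0.98
svo2_rest = svo2_estimate(5.0, 250, 14.0, 0.98)   # ~0.71
# Halving cardiac output at fixed VO2 drives SvO2 down sharply
svo2_low_co = svo2_estimate(2.5, 250, 14.0, 0.98)
```

This is why, at roughly constant CaO2 and VO2, a falling SvO2 tracks a falling cardiac output.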

Several relevant conditions may limit this simple application of SvO2 in assessing circulatory sufficiency and cardiac output (Table 32-1). If VO2 were to increase (as occurs with exercise), hemoglobin O2-carrying capacity to decrease (as occurs with anemia, hemoglobinopathies, and severe hemorrhage), or SaO2 to decrease (as occurs with hypoxic respiratory failure), then for the same cardiac output, SvO2 would also decrease. Similarly, if more blood flows through nonmetabolically extracting tissues, as occurs with intravascular shunts, or mitochondrial dysfunction limits O2 uptake by tissues, then SvO2 will increase for a constant cardiac output and VO2 even though circulatory stress exists and may cause organ dysfunction.

Table 32-1 Limitations to the Use of SvO2 to Trend Circulatory Sufficiency
Events that decrease SvO2 independent of cardiac output:
  • Exercise: increased VO2
  • Anemia: decreased O2-carrying capacity
  • Hypoxemia: decreased arterial O2 content
Events that increase SvO2 independent of cardiac output:
  • Sepsis: microvascular shunting
  • End-stage hepatic failure: macrovascular shunting
  • Carbon monoxide poisoning: mitochondrial respiratory chain inhibition


SvO2 and ScvO2 covary in the extremes but may change in opposite directions as conditions change.

ScvO2 threshold values to define circulatory stress are only relevant if low (a high ScvO2 is nondiagnostic).

ScvO2 does not sample true mixed venous blood, and most vena caval blood flow is laminar; if the catheter tip lies in one of these laminar flow streams, it will preferentially report the O2 saturation of a single, localized venous drainage site. Clearly, the potential exists for spurious estimates of SvO2. Most central venous catheters are inserted from internal jugular or subclavian venous sites with their distal tip residing in the superior vena cava, usually about 5 cm above the right atrium. Thus, even if a well-mixed venous sample is obtained at that site, ScvO2 reflects upper-body venous blood while ignoring venous drainage from the lower body. Accordingly, ScvO2 is usually higher than SvO2 by 2% to 3% in a sedated resting patient because cerebral O2 consumption is minimal and is always sustained above that of other organs.

Tissue Oximetry

Tissue O2 saturation (StO2) varies little until severe tissue hypoperfusion occurs.

StO2 coupled to a vascular occlusion test (VOT) allows one to diagnose circulatory stress before hypotension develops.

The most widely used technique for measuring peripheral tissue O2 saturation (StO2) is near-infrared spectroscopy (NIRS), a noninvasive method based on the differential absorption properties of oxygenated and deoxygenated hemoglobin that is used to assess muscle oxygenation. Although the absolute StO2 value correlates well with some other cardiovascular indices, the capacity of baseline StO2 values to identify impending cardiovascular insufficiency is limited (sensitivity, 78%; specificity, 39%).

However, the addition of a dynamic vascular occlusion test (VOT), which induces a controlled local ischemic challenge with subsequent release, has been shown to markedly improve and expand the predictive ability of StO2 to identify tissue hypoperfusion. The VOT StO2 response derives from the functional hemodynamic monitoring concept, in which the response of a system to a predetermined stress is the monitored variable. The rate of deoxygenation during occlusion (the DeO2 slope) is a function of local metabolic rate and blood flow distribution. If metabolic rate is increased by muscle contraction, the DeO2 slope steepens, whereas in the setting of altered blood flow distribution the rate of global O2 delivery is decreased. Sepsis decreases the DeO2 slope. The reoxygenation (ReO2) slope after release depends on how low StO2 is at the time of release, being less steep if StO2 is above 40% than if recovery starts at 30%, suggesting that the magnitude of the ischemic signal determines maximal local vasodilation. This dynamic technique has been used to assess circulatory sufficiency in patients with trauma or sepsis and during weaning from mechanical ventilation.
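
In practice the DeO2 and ReO2 slopes are simply linear fits to the occlusion and release phases of the StO2 trace. A minimal sketch over a made-up, idealized trace (the sampling times and StO2 values are assumptions for illustration):

```python
def slope(times_s, values_pct):
    """Least-squares slope (%StO2 per second) of a segment of the trace."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_v = sum(values_pct) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times_s, values_pct))
    den = sum((t - mean_t) ** 2 for t in times_s)
    return num / den

# Idealized VOT trace: cuff occlusion drops StO2 slowly; release recovers fast.
occl_t = [0, 30, 60, 90, 120]
occl_v = [78, 72, 66, 60, 54]       # DeO2 phase (during occlusion)
rel_t = [120, 125, 130, 135, 140]
rel_v = [54, 69, 84, 92, 94]        # ReO2 phase (after release)

deo2 = slope(occl_t, occl_v)  # negative, ~ -0.2 %/s
reo2 = slope(rel_t, rel_v)    # positive, a steep recovery
```

A shallower (less negative) DeO2 slope or a sluggish ReO2 slope, relative to these idealized values, is the kind of response reported with sepsis and impaired microvascular reactivity.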

Cardiac Output

There is no "normal" cardiac output (QT).

QT is either adequate or inadequate.

Other measures besides QT define adequacy.

Shock reflects an inadequate DO2 to meet the body's metabolic demand and cardiac output is a primary determinant of DO2. Indeed, except for extreme hypoxemia and anemia, most of the increase in DO2 that occurs with resuscitation and normal biological adaptation is due to increasing cardiac output. Since cardiac output should vary to match metabolic demands, there is no "normal" cardiac output. Cardiac output is merely adequate or inadequate to meet the metabolic demands of the body. Measures other than cardiac output need to be made to ascertain if the measured cardiac output values are adequate to meet metabolic demands. The two most common catheter-related methods of estimating cardiac output are indicator dilution and arterial pulse contour analysis.
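
The DO2 referred to here is simply cardiac output times arterial O2 content. A minimal sketch with illustrative resting values (Hb, SaO2, and PaO2 are assumptions, not measurements):

```python
def cao2_ml_dl(hb_g_dl, sao2, pao2_mmhg):
    """Arterial O2 content (mL O2/dL): hemoglobin-bound plus dissolved O2."""
    return 1.34 * hb_g_dl * sao2 + 0.003 * pao2_mmhg

def do2_ml_min(co_l_min, cao2):
    """Global O2 delivery: DO2 = CO x CaO2 (x10 converts L/min to dL/min)."""
    return co_l_min * cao2 * 10

cao2 = cao2_ml_dl(14.0, 0.98, 95)
do2_rest = do2_ml_min(5.0, cao2)     # resting delivery, roughly 900-1000 mL O2/min
do2_stress = do2_ml_min(10.0, cao2)  # doubling CO doubles DO2 at fixed CaO2
```

The second call illustrates the passage's point: with CaO2 near its physiologic ceiling, essentially all of the adaptive increase in DO2 must come from cardiac output.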

Indicator Dilution

The principle of indicator dilution cardiac output measurement is that if a small amount of a measurable substance (indicator) is injected upstream of a sampling site, thoroughly mixed with the passing blood, and then measured continuously downstream, the area under the time-concentration curve will be inversely proportional to flow according to the Stewart-Hamilton equation: the greater the area under the indicator curve, the slower the flow, and the smaller the area, the higher the flow. The most commonly used indicator is temperature (hot or cold) because it is readily available and indwelling thermistors can be made highly accurate.
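
For thermodilution, the Stewart-Hamilton relation takes the form CO = V_i x (T_blood – T_injectate) x K / ∫ΔT dt, where K is a device- and catheter-specific computational constant. The sketch below integrates a synthetic temperature-deflection curve with the trapezoidal rule; the curve values and K are assumptions for illustration, not real device constants:

```python
def stewart_hamilton_co(v_inj_ml, t_blood_c, t_inj_c, dt_curve, dt_s, k):
    """Cardiac output (L/min) from a thermodilution curve.

    dt_curve: temperature deflections (deg C) sampled every dt_s seconds.
    The area under the curve is inversely proportional to flow.
    """
    # Trapezoidal area under the deflection curve (deg C * s)
    area = sum((a + b) / 2 * dt_s for a, b in zip(dt_curve, dt_curve[1:]))
    return v_inj_ml * (t_blood_c - t_inj_c) * k / area

# Synthetic curve: a brief cooling transient after a 10-mL cold injectate.
curve = [0.0, 0.3, 0.6, 0.5, 0.3, 0.15, 0.05, 0.0]
co = stewart_hamilton_co(10, 37.0, 2.0, curve, dt_s=1.0, k=0.03)
# A larger area (slower indicator washout) would yield a lower computed output.
```

Note how the curve area sits in the denominator, which is exactly the inverse relation the text describes.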

Arterial Pulse Contour Analysis

The primary determinants of the arterial pulse pressure are LV stroke volume and central arterial compliance (pulse pressure ≈ SV / C). Compliance is a function of size, age, sex, and physiological inputs such as sympathetic tone, hypoglycemia, temperature, and the autonomic responsiveness of the vasculature. Hamilton and Remington explored this interaction over 50 years ago, developing the overall approach used by most of the companies that attempt to report cardiac output from the arterial pulse. The main advantage of these arterial pressure-based cardiac output monitoring systems over indicator dilution measurements is their less invasive nature.

However, since all these devices presume a fixed relation between pressure propagation along the vascular tree and LV stroke volume, if vascular elastance (the reciprocal of compliance) changes, these assumptions may become invalid. Thus, a major weakness of any pulse contour device is the potential for artificial drift in reported values if major changes in arterial compliance occur.
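
The drift risk can be illustrated with the pulse pressure ≈ SV / C relation itself. The sketch below assumes a hypothetical device calibrated at one compliance and shows how its reported stroke volume diverges when true compliance changes (all numbers illustrative):

```python
def pulse_pressure(sv_ml, compliance_ml_mmhg):
    """Pulse pressure ~ stroke volume / arterial compliance."""
    return sv_ml / compliance_ml_mmhg

def reported_sv(pp_mmhg, assumed_compliance_ml_mmhg):
    """A pulse contour device inverts the relation using its assumed compliance."""
    return pp_mmhg * assumed_compliance_ml_mmhg

true_sv = 70.0   # mL
calib_c = 1.4    # mL/mm Hg at calibration
pp = pulse_pressure(true_sv, calib_c)
sv_ok = reported_sv(pp, calib_c)  # matches true SV while compliance is stable

# Vasoconstriction stiffens the arterial tree (compliance falls),
# raising pulse pressure at the same true stroke volume ...
pp_stiff = pulse_pressure(true_sv, 1.0)
# ... so the device, still assuming the old compliance, over-reports SV.
sv_drift = reported_sv(pp_stiff, calib_c)
```

Here the true stroke volume never changed, yet the reported value drifts upward by 40% once compliance falls, which is precisely the recalibration problem the passage describes.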