Month: June 2015

Volume of Distribution and Factors Affecting It

June 28, 2015 Pharmacokinetics

Definition

The volume of distribution for a drug or the "apparent volume of distribution" does not necessarily refer to any physiologic compartment in the body. It is simply the size of a compartment necessary to account for the total amount of drug in the body if it were present throughout the body at the same concentration found in the plasma. The equation for the volume of distribution is expressed as follows:

V = Ab/C

where V is the apparent volume of distribution, Ab the total amount of drug in the body, and C the plasma concentration of drug.

The plasma volume of the average adult is approximately 3 L. Therefore, apparent volumes of distribution larger than the plasma compartment (>3 L) only indicate that the drug is also present in tissues or fluids outside the plasma compartment. The actual sites of distribution cannot be determined from the V value. For example, a drug with a volume of distribution similar to total body water (0.65 L/kg) is not necessarily equilibrated evenly throughout total body water. The drug may or may not be bound in or excluded from certain tissues. However, the average binding results in an apparent volume of distribution that is approximately equal to that of total body water. Without additional specific information, the actual sites of a drug's distribution are only speculative.
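To make the relationship concrete, here is a minimal Python sketch of the V = Ab/C calculation; the drug amount and concentration below are invented purely for illustration.

# Minimal sketch: apparent volume of distribution from the total amount of
# drug in the body and the measured plasma concentration (V = Ab/C).
# The numbers are illustrative, not drug-specific recommendations.

def apparent_volume_of_distribution(amount_in_body_mg, plasma_conc_mg_per_L):
    """Return V (in L) given the amount of drug in the body (mg)
    and the plasma concentration (mg/L)."""
    return amount_in_body_mg / plasma_conc_mg_per_L

# Example: 500 mg in the body with a plasma concentration of 10 mg/L gives
# an apparent volume of 50 L, far larger than the ~3 L plasma compartment,
# implying distribution into tissues or fluids outside the plasma.
print(apparent_volume_of_distribution(500, 10))  # 50.0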

The apparent volume of distribution is a function of the lipid versus water solubilities and of the plasma and tissue protein binding properties of the drug.

Factors That Alter Volume of Distribution

Decreased tissue binding of drugs in uremic patients is a common cause of a reduced apparent volume of distribution for several agents. Decreased tissue binding will increase the C by allowing more of the drug to remain in the plasma.

Decreased plasma protein binding, on the other hand, tends to increase the apparent volume of distribution because more drug that would normally be in plasma is available to equilibrate with the tissue and the tissue binding sites. Decreased plasma protein binding, however, also increases the fraction of free or active drug so that the desired C that produces a given therapeutic response decreases. This is based on the assumption that the majority of drug in the body is actually outside the plasma compartment and that the amount of drug bound to plasma protein comprises only a small percentage of the total amount in the body.

This principle is illustrated by the pharmacokinetic behavior of phenytoin in uremic patients. Plasma phenytoin concentrations in uremic patients are frequently one-half of those observed in non-uremic patients given the same dose. The lower plasma levels, however, produce the same free or pharmacologically active phenytoin concentration as levels twice as high in non-uremic patients, because the free fraction (fu) increases from 0.1 to 0.2 in these individuals. The target plasma concentration (bound + free) in uremic patients should therefore be about half of the usual target concentration. Furthermore, a loading dose of phenytoin that produces a normal therapeutic effect is the same for both uremic and non-uremic patients because the volume of distribution increases approximately twofold (from 0.65 L/kg to 1.44 L/kg) in uremic individuals.
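The arithmetic behind this adjustment can be sketched in Python as follows. The free fractions (0.1 and 0.2) come from the text; the usual total target concentration of 15 mg/L is only an assumed illustrative value.

# Sketch of the phenytoin adjustment described above. The free (unbound)
# concentration is assumed to drive the therapeutic effect:
#   C_free = fu * C_total
# so two total concentrations are "equivalent" when they give the same C_free.
# fu values (0.1 normal, 0.2 uremic) come from the text; the 15 mg/L usual
# target below is only an illustrative number.

fu_normal = 0.1   # free fraction in non-uremic patients
fu_uremic = 0.2   # free fraction in uremic patients

usual_total_target = 15.0  # mg/L, illustrative usual total (bound + free) target

# Total concentration in a uremic patient that yields the same free concentration
equivalent_uremic_target = usual_total_target * fu_normal / fu_uremic
print(equivalent_uremic_target)  # 7.5 mg/L, i.e. about half the usual target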

Flow Resistance of Vessels in Series and Vessels in Parallel

June 21, 2015 Cardiology, Physiology and Pathophysiology

Any network of resistances, however complex, can always be reduced to a single “equivalent” resistor that relates the total flow through the network to the pressure difference across the network. Of course, one way of finding the overall resistance of a network is to perform an experiment to see how much flow goes through it for a given pressure difference between its inlet and outlet. Another approach to finding the overall resistance of a network is to calculate it from knowledge of the resistances of the individual elements in the network and how they are connected. When one looks at the overall design of the body’s vascular system, one sees two patterns: (1) the arterial, arteriolar, capillary, and venous segments are connected in series; and (2) within each segment, there are many vessels arranged in parallel.


Vessels in Series

When vessels with individual resistances R1, R2, …, Rn are connected in series, the overall resistance of the network is simply the sum of the individual resistances, as indicated by the following formula:

Rs = R1 + R2 + … + Rn

Figure 6-3A shows an example of three vessels connected in series between some region where the pressure is Pi and another region with a lower pressure Po, so that the total pressure difference across the network, ΔP, is equal to Pi – Po. By the series resistance equation, the total resistance across this network (Rs) is equal to R1 + R2 + R3. By the basic flow equation, the flow through the network (Q) is equal to ΔP/Rs. It should be intuitively obvious that  Q is also the flow (volume/time) through each of the elements in the series, as indicated in Figure 6-3B. Fluid particles may move with different  velocities (distance/time) in different elements of a series network, but the volume that passes through each element in a minute must be identical.

As shown in Figure 6-3C, a portion of the total pressure drop across the network occurs within each element of the series. The pressure drop across any element in the series can be calculated by applying the basic flow equation to that element, for example, ΔP1 = QR1. Note that the largest portion of the overall pressure drop will occur across the element in the series with the largest resistance to flow (R2 in Figure 6-3).
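A short Python sketch of these series-network calculations, using arbitrary made-up resistances and an arbitrary pressure difference, might look like this.

# Sketch of the series-network calculations in Figure 6-3, with invented
# resistance values (R2 the largest, as in the figure) and an invented
# total pressure difference; units are consistent but arbitrary.

resistances = [1.0, 5.0, 2.0]   # R1, R2, R3
delta_p_total = 80.0            # Pi - Po across the whole network

r_series = sum(resistances)      # Rs = R1 + R2 + R3
q = delta_p_total / r_series     # the same flow passes through every element

# Pressure drop across each element: ΔPi = Q * Ri
pressure_drops = [q * r for r in resistances]
print(r_series, q, pressure_drops)
# Rs = 8.0, Q = 10.0, drops = [10.0, 50.0, 20.0]: the largest drop occurs
# across R2, the highest-resistance element.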

One implication of the series resistance equation is that elements with the highest relative resistance to flow contribute more to the network's overall resistance than do elements with relatively low resistance. Therefore, high-resistance elements are inherently in an advantageous position to control the overall resistance of the network and therefore the flow through it.


Vessels in Parallel

As indicated in Figure 6-4, when several tubes with individual resistances R1, R2, …, Rn are brought together to form a parallel network of vessels, one can calculate a single overall resistance for the parallel network, Rp, according to the following formula:

1/Rp = 1/R1 + 1/R2 + … + 1/Rn

The total flow through a parallel network is determined by ΔP/Rp. As the preceding equation implies, the overall effective resistance of any parallel network will always be less than that of any of the elements in the network. In general, the more parallel elements that occur in the network, the lower the overall resistance of the network. Thus, for example, a capillary bed that consists of many individual capillary vessels in parallel can have a very low overall resistance to flow even though the resistance of a single capillary is relatively high.
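A minimal Python sketch of the parallel-resistance formula, with invented numbers chosen only to illustrate the capillary-bed point, could look like this.

# Sketch of the parallel-resistance formula: 1/Rp = 1/R1 + 1/R2 + ... + 1/Rn.
# The capillary example uses made-up numbers purely to show that many
# high-resistance elements in parallel yield a low overall resistance.

def parallel_resistance(resistances):
    """Overall resistance of elements connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

single_capillary_resistance = 1000.0   # arbitrary units; one capillary is high resistance
number_of_capillaries = 10000

r_bed = parallel_resistance([single_capillary_resistance] * number_of_capillaries)
print(r_bed)  # 0.1 -- far lower than any individual capillary's resistance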

 

Basic Concepts in Statistics

June 16, 2015 Clinical Research, Clinical Trials

Three Kinds of Data

There are three types of data: interval data, where variables are measured on a scale with constant intervals; nominal (categorical) data; and ordinal data. For interval data, the absolute difference between two values can always be determined by subtraction. Interval variables include temperature, blood pressure, height, weight, and so on. Other data, such as gender, state of birth, or whether or not a person has a certain disease, are not measured on an interval scale. These variables are examples of nominal or categorical data, where individuals are classified into two or more mutually exclusive and exhaustive categories. For example, people could be categorised as male or female, dead or alive, or as being born in one of the 50 states, the District of Columbia, or outside the United States. In every case, it is possible to categorise each individual into one and only one category. In addition, there is no arithmetic relationship or even ordering between the categories. Ordinal data fall between interval and nominal data. Like nominal data, ordinal data fall into categories, but there is an inherent ordering (or ranking) of the categories. Level of health (excellent, very good, good, fair, or poor) is a common example of a variable measured on an ordinal scale. The different values have a natural order, but the differences or “distances” between adjoining values on an ordinal scale are not necessarily the same and may not even be strictly comparable. For example, excellent health is better than very good health, but this difference is not necessarily the same as the difference between fair and poor health.

The Normal Distribution

If an observed measurement is the sum of many independent small random factors, the resulting measurements will take on values that follow a normal (Gaussian) distribution. Note that the distribution is completely defined by the population mean μ and the population standard deviation σ. The μ, the σ, and the size of the population are all the information one needs to describe the population fully if the distribution of values follows a normal distribution.


Getting The Data

We could get the data by examining every single member of the population; however, this is usually physically or fiscally impossible, and we are limited to examining a sample of n individuals drawn from the population in the hope that it is representative of the complete population. Without knowledge of the entire population, we can no longer know the population mean μ and the population standard deviation σ. Nevertheless, we can estimate them from the sample. To do so, the sample has to be "representative" of the population from which it is drawn.

  • Random Sample

All statistical methods are built on the assumption that the individuals included in your sample represent a random sample from the underlying (and unobserved) population. In a random sample, every member of the population has an equal probability (chance) of being selected for the sample. The most direct way to create a simple random sample is to obtain a list of every member of the population of interest, number them from 1 to N (where N is the number of population members), and then use a computerised random number generator to select the n individuals for the sample. Every number has the same chance of appearing, and there is no relationship between adjacent numbers. We could create several random samples by selecting only individuals that have not been selected before; the important point is not to reuse any sequence of random numbers already used. In this way, we ensure that every member of the population is equally likely to be selected for observation in the sample.

The list of population members from which we drew the random sample is known as the sampling frame. Sometimes it is possible to obtain such a list (for example, a list of all people hospitalised in a given hospital on a given day), but often no such list exists. When there is no list, investigators use other techniques for creating a random sample, such as dialing telephone numbers at random for public opinion polling or selecting geographic locations at random from maps. How the sampling frame is constructed can be very important in terms of how well, and to whom, the results of a given study generalize beyond the specific individuals in the sample. The procedure of random selection described above is known as a simple random sample, in which we randomly select individuals from the population as a whole. Conversely, investigators sometimes use stratified random samples, in which they first divide the population into different subgroups (perhaps based on gender, race, or geographic location) and then construct simple random samples within each subgroup (stratum), as sketched below. This procedure is used when there are widely varying numbers of people in the different subpopulations, so that obtaining adequate sample sizes in the smaller subgroups would require collecting more data than necessary in the larger subpopulations if the sampling were done with a simple random sample. Stratification reduces data collection costs by reducing the total sample size necessary to obtain the desired precision in the results, but it makes the data analysis more complicated. The basic need to create a random sample in which each member of each subpopulation (stratum) has the same chance of being selected is the same as in a simple random sample.
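As a rough illustration (the sampling frame, strata, and sample sizes below are invented), a simple random sample and a stratified random sample might be drawn in Python like this.

# Minimal sketch of drawing a simple random sample and a stratified random
# sample from a numbered sampling frame, using Python's standard library.
# The frame, strata, and sample sizes are invented for illustration.

import random

# Sampling frame: population members numbered 1..N
N = 1000
frame = list(range(1, N + 1))

# Simple random sample of n members: every member has the same chance of selection
n = 50
simple_sample = random.sample(frame, n)

# Stratified random sample: divide the frame into strata, then take a simple
# random sample within each stratum
strata = {
    "stratum_A": frame[:800],   # e.g. the larger subpopulation
    "stratum_B": frame[800:],   # e.g. a smaller subpopulation
}
stratified_sample = {name: random.sample(members, 25) for name, members in strata.items()}

print(len(simple_sample), {name: len(members) for name, members in stratified_sample.items()})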

  • The Mean and Standard Deviation

Having obtained a random sample from a population of interest, we are ready to use information from that sample to estimate the characteristics of the underlying population.

Sample mean = Sum of values/Number of observations in sample, that is

X̄ = ΣX/n

in which the bar over the X denotes that it is the mean of the n observations of X.

The estimate of the population standard deviation is called the sample standard deviation s or SD and is defined as,

s = SD = √[ Σ(X − X̄)² / (n − 1) ]

The definition of the sample standard deviation, SD, differs from the definition of the population standard deviation σ in two ways: (1) the population mean μ has been replaced by our estimate of it, the sample mean X̄, and (2) we compute the "average" squared deviation of a sample by dividing by n − 1 rather than n. The precise reason for dividing by n − 1 rather than n requires substantial mathematical argument, but we can offer the following intuitive justification: the sample will never show as much variability as the entire population, and dividing by n − 1 instead of n compensates for the resulting tendency of the sample standard deviation to underestimate the population standard deviation. In conclusion, if you are willing to assume that the sample was drawn from a normal distribution, summarise the data with the sample mean and sample standard deviation, the best estimates of the population mean and population standard deviation, because these two parameters completely define the normal distribution. When there is evidence that the population under study does not follow a normal distribution, summarise the data with the median and the upper and lower percentiles discussed later in this thread.
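A short Python sketch of these two estimators, using invented observations, might look like this.

# Sketch of the sample mean and sample standard deviation (dividing by n - 1),
# computed both by hand and with Python's statistics module. The data values
# are invented.

import math
import statistics

observations = [4.1, 5.0, 4.7, 5.3, 4.9, 5.6]
n = len(observations)

sample_mean = sum(observations) / n
sample_sd = math.sqrt(sum((x - sample_mean) ** 2 for x in observations) / (n - 1))

# statistics.mean and statistics.stdev use the same definitions
assert math.isclose(sample_mean, statistics.mean(observations))
assert math.isclose(sample_sd, statistics.stdev(observations))
print(sample_mean, sample_sd)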

  • Standard Error of The Mean/How Good Are These Estimates (sample mean and sample standard deviation)

The mean and standard deviation computed from a random sample are estimates of the mean and standard deviation of the entire population from which the sample was drawn. There is nothing special about the specific random sample used to compute these statistics, and different random samples will yield slightly different estimates of the true population mean and standard deviation. To quantitate how accurate these estimates are likely to be, we can compute their standard errors. It is possible to compute a standard error for any statistic, but here we shall focus on the standard error of the mean. This statistic quantifies the certainty with which the mean computed from a random sample estimates the true mean of the population from which the sample was drawn.

To understand the standard error of the mean, imagine drawing two or more samples, each containing the same number of individuals. For each sample, you could compute the mean of the individuals in that sample; for example, four samples would give four means. The standard deviation of this collection of sample means, computed with the SD equation described above, is the standard error of the mean, which we denote σX̄.

Just as the standard deviation of the original sample of individuals is an estimate of the variability of the whole population, σX̄ is an estimate of the variability of the possible values of the means of samples of that size. Since extreme values tend to balance each other when one computes a mean, there will be less variability in the values of the sample means than in the original population (that is, SD > σX̄). σX̄ is a measure of the precision with which a sample mean estimates the population mean μ.

The standard error of the mean tells not about variability in the original population, as the standard deviation does, but about the certainty with which a sample mean estimates the true population mean.

Since the precision with which we can estimate the mean increases as the sample size increases, the standard error of the mean decreases as the sample size increases. Conversely, the more variability in the original population, the more variability will appear in possible mean values of samples; therefore, the standard error of the mean increases as the population standard deviation increases. The true standard error of the mean of samples of size n drawn from a population with standard deviation σ is,

σX̄ = σ/√n

Mathematicians have shown that the distribution of mean values will always approximately follow a normal distribution, regardless of how the population from which the original samples were drawn is distributed. This is the essence of what statisticians call the Central Limit Theorem. It says:

  • The distribution of sample means will be approximately normal regardless of the distribution of values in the original population from which the samples were drawn.
  • The mean value of the collection of all possible sample means will equal the mean of the original population.
  • The standard deviation of the collection of all possible means of samples of a given size, called the standard error of the mean, depends on both the standard deviation of the original population and the size of the sample.
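A rough simulation sketch of these ideas in Python (the population, sample size, and number of repeated samples are all arbitrary choices) might look like this.

# Sketch illustrating the standard error of the mean and the Central Limit
# Theorem by simulation. A skewed (exponential) population is used on purpose;
# all numbers are arbitrary.

import random
import statistics

random.seed(0)
population_sd = 1.0          # an exponential(1) population has sigma = 1
sample_size = 25
theoretical_sem = population_sd / sample_size ** 0.5   # sigma / sqrt(n) = 0.2

# Draw many samples, compute each sample's mean, then look at the spread of those means
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(sample_size))
    for _ in range(5000)
]
observed_sem = statistics.stdev(sample_means)

print(theoretical_sem, round(observed_sem, 3))
# The spread of the sample means is close to sigma/sqrt(n), and a histogram of
# sample_means would look approximately normal even though the underlying
# population is skewed.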

 

Something You Should Know About The Laboratory Tests

June 14, 2015 Clinical Skills

Laboratory tests might be the most common examinations during an outpatient encounter or in the inpatient setting. However, for the clinician, the laboratory is not everything. Rather, the results of laboratory tests must be integrated into a more comprehensive judgement that includes not only laboratory tests but also the patient history, signs and symptoms, and so on.


Reference Range

The reference range is a statistically-derived numerical range obtained by testing a sample of individuals assumed to be healthy. The upper and lower limits of the range are not absolute (i.e., normal versus abnormal), but rather points beyond which the probability of clinical significance begins to increase. The term reference range is preferred over the term normal range. The reference population is assumed to have a Gaussian distribution with 68% of the values within one standard deviation (SD) above and below the mean, 95% within ±2 SD, and 99.7% within ±3 SD.

The reference range for a given analyte is usually established in the clinical laboratory as the mean or average value plus or minus two SDs. Acceptance of the mean ±2 SD indicates that one in 20 normal individuals will have test results outside the reference range (2.5% have values below the lower limit of the reference range and 2.5% have values above the upper limit of the reference range). Accepting a wider range (e.g., ±3 SD) includes a larger percentage of normal individuals but increases the chance of including individuals with disease whose values fall only slightly outside a narrower range, thus decreasing the sensitivity of the test (increasing false-negative results).
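As a rough sketch (the analyte values below are invented), the mean ± 2 SD reference range could be computed in Python like this.

# Sketch of deriving a reference range as mean ± 2 SD from results in a
# presumed-healthy reference sample. The analyte and values are invented.

import statistics

reference_sample = [92, 88, 95, 101, 97, 90, 94, 99, 96, 93, 89, 98]  # illustrative units

mean = statistics.mean(reference_sample)
sd = statistics.stdev(reference_sample)   # sample SD (n - 1)

lower_limit = mean - 2 * sd
upper_limit = mean + 2 * sd
print(round(lower_limit, 1), round(upper_limit, 1))
# About 1 in 20 healthy individuals would still fall outside this interval
# (2.5% below the lower limit and 2.5% above the upper limit).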

Qualitative laboratory tests are either negative or positive and without a reference range; any positivity is considered abnormal. For example, any amount of serum acetone, porphobilinogen, or alcohol is considered abnormal. The presence of glucose, ketones, blood, bile, or nitrate in urine is abnormal. The results of the Venereal Disease Research Laboratory (VDRL) test, the LE prep test, tests for red blood cell (RBC) sickling, and the malaria smear are either positive or negative.


Factors That Influence the Reference Range

Many factors influence the reference range. Reference ranges may differ between labs depending on analytical technique, reagent, and equipment. The initial assumption that the sample population is normal may be false. For example, the reference range is inaccurate if too many individuals with covert disease are included in the sample population. Failure to control for physiologic variables (e.g., age, gender, ethnicity, body mass, diet, posture, and time of day) introduces many unrelated factors and may result in an inaccurate reference range. Reference ranges calculated from nonrandomly distributed test results or from a small number of samples may not be accurate.

Reference ranges may change as new information relating to disease and treatments becomes available. For example, the National Cholesterol Education Program’s Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (ATP III), released in 2001, includes recommendations to lower and more closely space reference range cutoff points for low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, and triglycerides. The availability of more sensitive thyrotropin (TSH) assays and the recognition that the original reference population data was skewed has led some clinicians to conclude that there is a need to establish a revised reference range for this analyte.


Critical Value

The term critical value refers to a result that is far enough outside the reference range that it indicates impending morbidity (e.g., potassium <2.8 mEq/L). Because laboratory personnel are not in a position to consider mitigating circumstances, a responsible member of the healthcare team is notified immediately on discovery of a critical value test result. Critical values may not always be clinically relevant, however, because the reference range varies for the reasons discussed above.


Semiquantitative Test

A semiquantitative test is a test whose results are reported as either negative or with varying degrees of positivity but without exact quantification. For example, urine glucose and urine ketones are reported as negative or 1+, 2+, 3+; the higher numbers represent a greater amount of the measured substance in the urine, but not a specific concentration.


Sensitivity

The sensitivity of a test refers to the ability of the test to identify positive results in patients who actually have the disease (TP rate). Sensitivity assesses the proportion of TPs disclosed by the test. A test is completely sensitive (100% sensitivity) if it is positive in every patient who actually has the disease. The higher the test sensitivity, the lower the chance of a false-negative result; the lower the test sensitivity, the higher the chance of a false-negative result. However, a highly sensitive test is not necessarily a highly specific test. For the calculation of sensitivity and specificity please refer to the thread of [Lab] Sensitivity, specificity, and predictive value of laboratory tests at http://forum.tomhsiung.com/pharmacy-students-and-residents/pharmacy-student-resident-life/696-lab-sensitivity-specificity-predictive-value-laboratory-tests.html.

Highly sensitive tests are preferred when the consequences of not identifying the disease are serious; less sensitive tests may be acceptable if the consequence of a false negative is less significant or if low sensitivity tests are combined with other tests. For example, inherited phenylalanine hydroxylase deficiency (phenylketonuria or PKU) results in increased phenylalanine concentrations. High phenylalanine concentrations damage the central nervous system and are associated with mental retardation. Mental retardation is preventable if PKU is diagnosed and dietary interventions initiated before 30 days of age. The phenylalanine blood screening test, used to screen newborns for PKU, is a highly sensitive test when testing infants at least 24 hours of age. In contrast, the prostate specific antigen (PSA) test, a test commonly used to screen men for prostate cancer, is highly sensitive at a low PSA cutoff value but highly specific only at high PSA cutoff value. Thus, PSA cannot be relied on as the sole prostate cancer screening method.

Sensitivity also refers to the range over which a quantitative assay can accurately measure the analyte. In this context, a sensitive test is one that can measure low levels of the substance; an insensitive test cannot measure low levels of the substance accurately. For example, a digoxin assay might measure concentrations only as low as 0.7 ng/mL. Concentrations below 0.7 ng/mL would not be measurable and would be reported as "less than 0.7 ng/mL," whether the digoxin concentration was 0.69 ng/mL or 0.1 ng/mL. Thus, this relatively insensitive digoxin assay would not differentiate between medication nonadherence, with an expected digoxin concentration of 0 ng/mL, and low concentrations associated with inadequate dosage regimens.


Specificity

Specificity refers to the percentage of negative results in people without the disease (TN rate). Specificity assesses the proportion of TNs disclosed by the test; the lower the specificity, the higher the chance of a false-positive result. A test with a specificity of 95% for the disease in question indicates that the disease will be "detected" in 5% of people without the disease (5% of disease-free people would be false positives). Tests with high specificity are best for confirming a diagnosis because these tests are rarely positive in the absence of the disease. Several newborn screening tests (e.g., PKU, galactosemia, biotinidase deficiency, congenital hypothyroidism, and congenital adrenal hyperplasia) have specificity levels above 99%. In contrast, the PSA test is an example of a test with low specificity. PSA is specific for the prostate but not specific for prostate carcinoma. Urethral instrumentation, prostatitis, urinary retention, prostatic needle biopsy, and benign prostatic hyperplasia elevate the PSA. The erythrocyte sedimentation rate (ESR) is another nonspecific test; infection, inflammation, and plasma cell dyscrasias increase the ESR.
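A minimal Python sketch of these two definitions, with invented counts, might look like this.

# Sketch of the standard 2 x 2 definitions used above. The counts are invented;
# see the linked thread for a full worked example.

def sensitivity(true_positives, false_negatives):
    """Proportion of diseased patients with a positive test (TP rate)."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Proportion of disease-free patients with a negative test (TN rate)."""
    return true_negatives / (true_negatives + false_positives)

# Example: 90 TP, 10 FN, 950 TN, 50 FP
print(sensitivity(90, 10))   # 0.90 -> 10% false-negative rate
print(specificity(950, 50))  # 0.95 -> 5% of disease-free people test positive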

Specificity as applied to quantitative laboratory tests refers to the degree of cross-reactivity of the analyte with other substances in the sample. For example, vitamin C cross-reacts with glucose in some urine tests, falsely elevating the urine glucose test results. Quinine may cross-react with or be measured as quinidine in some assays, falsely elevating reported quinidine concentration.


Specimen

A specimen is a sample (e.g., whole blood, venous blood, arterial blood, urine, stool, sputum, sweat, gastric secretions, exhaled air, cerebrospinal fluid, or tissue) that is used for laboratory analysis. Plasma is the watery acellular portion of blood. Serum is the liquid that remains after the fibrin clot is removed from plasma. While some laboratory tests are performed only on plasma or serum, other laboratory tests can be performed on either plasma or serum.

Catheter-Related Blood Stream Infections (Pathogenesis and Diagnosis)

June 1, 2015 Critical Care, Infectious Diseases

Because patients in the ICU commonly have indwelling vascular catheters, catheter-related bloodstream infection (CRBI) can occur and increases morbidity, mortality, and healthcare costs in these patients. Today we talk about the pathogenesis and diagnosis of catheter-related bloodstream infections.

Pathogenesis

The sources of catheter-related bloodstream infections are indicated in the figure on the right.

1. Microbes can gain access to the bloodstream via contaminated infusates like blood products, but this occurs rarely.

2. Contamination of the internal lumen of vascular catheters can occur through break points in the infusion system, such as catheter hubs. This may be a prominent route of infection for catheters inserted through subcutaneous tunnels.

3. Microbes on the skin can migrate along the subcutaneous tract of an indwelling catheter and eventually reach (and colonize) the intravascular portion of the catheter. This is considered the principal route of infection for percutaneous (non-tunneled) catheters, which include most of the catheters inserted in the ICU.

4. Microorganisms in circulating blood can attach to the intravascular portion of an indwelling catheter. This is considered a secondary seeding of the catheter from a source of septicemia elsewhere, but proliferation of the microbes on the catheter tip could reach the point where the catheter itself becomes a source of septicemia.

Microbes are not freely moving organisms; they have a tendency to congregate on inert surfaces. When a microbe comes in contact with a surface, it releases adhesive molecules (called adhesins) that firmly attach it to the surface. The microbe then begins to proliferate, and the newly formed cells release polysaccharides that coalesce to form a matrix known as slime (because of its physical properties), which then encases the proliferating microbes. The encasement formed by the polysaccharide matrix is called a biofilm. Biofilms are protective barriers that shield microbes from the surrounding environment, and this protected environment allows microbes to thrive and proliferate.

Biofilms on medical devices are problematic because they show a resistance to host defenses and antibiotic therapy. Phagocytic cells are unable to ingest organisms that are embedded in a biofilm, and antibiotic concentrations that eradicate free-living bacteria must be 100 to 1,000 times higher to eradicate bacteria in biofilms.

Diagnosis

There are three culture-based approaches to the diagnosis of catheter-related bloodstream infection. The evaluation of suspected CRBI requires choosing one of three possible courses of action for the suspect catheter:

Option #1 Remove the catheter and insert a new catheter at a new venipuncture site.

Option #2 Replace the catheter over a guide wire using the same venipuncture site.

Option #3 Leave the catheter in place.

Option #1 is recommended for patients with neutropenia, a prosthetic valve, indwelling pacemaker wires, evidence of severe sepsis or septic shock, or purulent drainage from the catheter insertion site. Otherwise, the catheter can be left in place or replaced over a guidewire. Option #3 is desirable because most evaluations for CRBI do not confirm the diagnosis (so replacing the catheter is not necessary), and because guidewire exchanges can have adverse effects.

Once the decision has been made to evaluate for CRBI, the following culture-based diagnostic procedures can be used. There are three types of culture-based diagnostic procedures for CRBI: semiquantitative culture of the catheter tip, differential quantitative blood cultures, and differential time to positive culture.

Obtaining and Interpreting the Samples

Semiquantitative Culture of Catheter Tip

The standard approach to suspected CRBI is to remove the catheter and culture the tip, as shown below,

1. Before the catheter is removed, swab the skin around the catheter insertion site with an antiseptic solution.

2. Remove the catheter using sterile technique and sever the distal 5 cm (2 inches) of the catheter. Place the severed segment in a sterile culturette tube for transport to the microbiology laboratory, and request a semiquantitative or roll-plate culture. If an antimicrobial-impregnated catheter is removed, inform the lab so they can add the appropriate inhibitors to the culture plate.

3. Draw 10 mL of blood from a peripheral vein for a blood culture.

4. The diagnosis of CRBI is confirmed if the same organism is isolated from the catheter tip and the blood culture, and growth from the catheter tip exceeds 15 colony-forming units (cfu) in 24 hours.

Because the outer surface of the catheter is cultured, this method will not detect colonization of the inner (luminal) surface of the catheter (which is the surface involved if microbes are introduced via the hub of the catheter). Nevertheless, semiquantitative catheter tip cultures are considered the "gold standard" method for the diagnosis of CRBI.

Differential Quantitative Blood Cultures

This method is designed for catheters that are left in place and is based on the expectation that, when the catheter is the source of a bloodstream infection, blood withdrawn through the catheter will have a higher microbial density than blood obtained from a peripheral vein. This requires a quantitative assessment of microbial density in the blood, with the results expressed as the number of colony-forming units per mL.

1. Obtain specialized Isolator culture tubes from the microbiology laboratory. These tubes contain a substance that lyses cells to release intracellular organisms.

2. Decontaminate the hub of the catheter with an antiseptic solution (use the distal lumen in multilumen catheters) and draw 10 mL of blood through the catheter directly into the Isolator culture tube.

3. Draw 10 mL of blood from a peripheral vein using an Isolator culture tube.

4. Send both specimens to the microbiology lab for quantitative cultures. The blood will be processed by lysing the cells to release microorganisms, separating the cell fragments by centrifugation, and adding broth to the supernatant. This mixture is placed on a culture plate and allowed to incubate for 72 hours. Growth is recorded as the number of colony-forming units per milliliter (cfu/mL).

5. The diagnosis of CRBI is confirmed if the same organism is isolated from the catheter blood sample and the peripheral blood sample, and the colony count in the catheter blood is at least 3 times greater than the colony count in peripheral blood.

Because blood is withdrawn through the lumen of the catheter, this method may not detect microbes on the outer surface of the catheter. However, the diagnostic accuracy of this method is 94% when compared with catheter tip cultures (the gold standard).

Differential Time to Positive Culture

This method is also designed for catheters that remain in place, and is based on the expectation that when a catheter is the source of a bloodstream infection, the blood withdrawn through the catheter will show microbial growth earlier than blood obtained from a peripheral vein. This method uses routine (qualitative) blood cultures; and requires 10 mL of blood drawn through the catheter, and 10 mL of blood from a peripheral vein. The diagnosis of CRBI is confirmed if the same organism is isolated from the catheter blood and peripheral blood, and growth is first detected at least 2 hours earlier in the catheter blood.

This approach is technically easier and less costly than comparing quantitative blood cultures, but the diagnostic accuracy is lower.
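As a rough summary of the three confirmation rules described above, the thresholds quoted in the text (15 cfu, a 3-fold colony-count ratio, and 2 hours) could be expressed in Python as simple checks; the function names and example values are invented.

# Sketch of the three confirmation rules described above, written as simple
# checks. The thresholds (15 cfu, 3-fold colony-count ratio, 2 hours) are
# those quoted in the text; the function names and example values are invented.

def crbi_by_tip_culture(same_organism, tip_cfu):
    """Semiquantitative catheter-tip culture: >15 cfu of the same organism."""
    return same_organism and tip_cfu > 15

def crbi_by_quantitative_cultures(same_organism, catheter_cfu_per_ml, peripheral_cfu_per_ml):
    """Differential quantitative cultures: catheter colony count at least
    3 times the peripheral colony count."""
    return same_organism and catheter_cfu_per_ml >= 3 * peripheral_cfu_per_ml

def crbi_by_time_to_positivity(same_organism, catheter_hours, peripheral_hours):
    """Differential time to positivity: catheter culture turns positive at
    least 2 hours before the peripheral culture."""
    return same_organism and (peripheral_hours - catheter_hours) >= 2

print(crbi_by_tip_culture(True, 40))                  # True
print(crbi_by_quantitative_cultures(True, 300, 50))   # True
print(crbi_by_time_to_positivity(True, 14, 18))       # True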