Improving hygiene in endoscopy

The use of flexible endoscopes for endoscopic retrograde cholangiopancreatography (ERCP) is increasing, as it represents a relatively non-invasive method for the diagnosis and treatment of certain conditions of the biliary and pancreatic ductal systems, such as gallstones, indeterminate biliary strictures, bile duct injury or leaks, and cancer. The design of duodenoscopes, however, is complex; they have long narrow channels and a recessed elevator at the distal end that allows accessories to be positioned precisely. All the external surfaces and internal channels come into contact with body fluids, presenting a risk of contamination and transmission of infection from patient to patient as well as from patient to endoscopy personnel. As these flexible devices are heat labile and not suitable for steam sterilization, careful cleaning (reprocessing) is needed to minimize the risk of contamination.

A recent event on Hygiene Solutions in Endoscopy, held at the PENTAX Medical R&D Center (October 2019, Augsburg, Germany), discussed the need for infection control and ways to minimize contamination. The event brought together key opinion leaders in the field of ERCP hygiene (endoscopists, microbiologists and chief nurses), including Paul Caesar, Hygiene and Infection Prevention expert at the Tjongerschans Hospital (Heerenveen, The Netherlands); Dr Hudson Garrett Jr., Global Chief Clinical Officer at PENTAX Medical and Assistant Professor of Medicine (Division of Infectious Diseases) at the University of Louisville School of Medicine, Kentucky, USA; and Wolfgang Mayer, Managing Director of Digital Endoscopy at PENTAX Medical.

Endoscopy-associated infection
The healthcare community is increasingly aware of the risk of hospital-acquired infection associated with endoscopy, following documentation of several outbreaks of patient infections linked to duodenoscopes in the USA and around the world in the last decade, as well as regulatory recalls. However, Paul Caesar made the point that in reality there are very few data on infection rates. One issue is that patients are discharged from hospital ever more quickly after procedures; if an infection subsequently develops, the patient usually attends their local general practitioner and the link to the endoscopy is never made. The point was made that no surveillance of post-endoscopy infection is currently performed, and that such surveillance should be put in place to generate reliable data on infection rates.

Endoscope reprocessing
The role of endoscope reprocessing is crucial in mitigating the risk of infection; reprocessing comprises mechanical cleaning, detergent cleaning, high-level disinfection, and rinsing and drying. However, research shows that key reprocessing steps are skipped in 45% of cases. Additionally, 75% of reprocessing staff reported time pressure and resulting non-compliance with reprocessing guidelines. Paul Caesar emphasized this point, saying: “Manual cleaning is still the most important step in reprocessing. However, in daily practice this stage is often downgraded to just a simple flush and brush. I call upon the field to shift from reprocessing quantity to quality.” Another crucial step is to ensure that the device is thoroughly dried before storage, which reduces the risk of biofilm formation and bacterial growth. However, there are currently no official guidelines on the optimum drying time; even within Europe, different countries use different drying times.

Suggestions for the improvement of reprocessing included:
1. proper explanation to and understanding by staff of the importance of the reprocessing stages to gain their commitment to following the procedure fully;
2. use of short visual pictogram explanations of the reprocessing stages rather than manuals that run to approximately 150 pages and are too complicated to read and absorb thoroughly; and
3. traceability, with tagging of the people performing the various tasks, so that every step can be scanned and shown to have been performed correctly.
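Suggestion 3 amounts to a traceability data structure. A minimal sketch of how such a record might look, assuming a simple in-house logging model (the class, step names and IDs are hypothetical, not from any vendor system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical step names, following the reprocessing stages described above.
STEPS = ["manual_clean", "detergent_clean", "high_level_disinfection", "rinse", "dry"]

@dataclass
class ReprocessingLog:
    """Traceability record for one endoscope reprocessing cycle."""
    endoscope_id: str
    entries: list = field(default_factory=list)

    def record(self, step: str, operator_id: str) -> None:
        # Each scanned step is tagged with the operator and a UTC timestamp.
        if step not in STEPS:
            raise ValueError(f"unknown step: {step}")
        self.entries.append((step, operator_id, datetime.now(timezone.utc)))

    def complete(self) -> bool:
        # A cycle counts as complete only if every step was logged, in order.
        return [entry[0] for entry in self.entries] == STEPS

log = ReprocessingLog("duodenoscope-042")
log.record("manual_clean", "nurse-17")
print(log.complete())  # False: the remaining steps have not been logged
```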

Improving duodenoscope design

According to Calderwood et al., patient-to-patient transmission of infection has been linked to elevator-channel endoscopes (such as duodenoscopes) and attributed to persistent contamination of the elevator mechanism, the elevator cable and the cable channel. One approach to infection control is to use disposable duodenoscopes. However, this is not practical for every endoscopy because of the cost and the environmental impact; one-time use of a disposable device is therefore recommended only for high-risk patients.

Hudson Garrett confirmed the company’s commitment to minimizing infection outbreaks, with careful consideration of advice and requirements from the CDC (Centers for Disease Control and Prevention) and FDA (U.S. Food and Drug Administration) in the USA, and “using integrated feedback from all clinical stakeholders, optimizing reprocessing processes, and innovating products to directly tackle patient safety and infection prevention needs”. This has led to the development of a duodenoscope with a disposable distal cap with integrated elevator, thereby eliminating the part of the device most associated with contamination. Additionally, the company’s dedicated dryer helps ensure the device is fully dry, reducing the risk of microbial growth and the potential contamination that can result from moisture. PENTAX Medical also has a strong commitment to the training of reprocessing staff, which (according to current data) requires a minimum of 8 hours to be done properly.

Elderly women: a neglected but fast-growing demographic

Elderly women account for a large part of the world’s population. The number of women aged 60 and over is on course to pass one billion by 2050, roughly a tripling from 335 million in 2000. Older women outnumber older men, and the imbalance rises with age; indeed, the fastest-growing sub-group among ageing women consists of those over 80. Globally, there are about 125 women for every 100 men in the over-60 age group. Among the over-80s, the gap is much wider, at 190 women for every 100 men.
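The cited figures are internally consistent, as a line or two of arithmetic shows (all numbers taken directly from the text):

```python
# Women aged 60 and over: 335 million in 2000, about 1 billion projected for 2050.
print(1e9 / 335e6)  # ~2.99, i.e. roughly a tripling

# Sex ratios, expressed as women per man.
print(125 / 100)    # 1.25 over age 60
print(190 / 100)    # 1.90 over age 80
```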

Longer but not necessarily healthier lives
The increase in the number of elderly women has been accompanied by growth in their very specific health needs. Although women in Europe outlive men by six years, the difference in healthy life expectancy is only nine months. In effect, their extra years are severely burdened by disease and ill health.
In spite of such facts, there is a remarkable lack of data specifically focused on the health of elderly women. For instance, figures from the European statistical service, Eurostat, show standardized death rates per 100,000 inhabitants for all women and for women under 65. Although it would be possible to derive the figure for women over 65, it is remarkable that this is not provided on the Eurostat site.

Data limitations
In 2005, a group called the Older Women’s Network Europe (OWN-Europe) observed that, though there was an abundance of studies on ageing, there was little gender-based analysis of the potentially major differences in health between ageing women and ageing men.
Ironically enough, OWN-Europe’s own website (www.own-europe.org) has since been taken over by an entity promoting anti-cellulite stockings in Japanese. The organisation itself has been subsumed into AGE Platform Europe, a forum promoting awareness of issues affecting the aged in general rather than of the differing issues and concerns of elderly women and elderly men; as noted, the latter was OWN-Europe’s critique to begin with.
Another organisation, the Dublin-based European Institute of Women’s Health (EIWH), has since sought to fill this gap. Though also concerned with general women’s health issues, it takes an elderly-focused approach to key topics of interest, for example providing data-based position papers on specific risks to elderly women, as compared both to men and to younger women, in areas such as dementia, breast cancer and cardiovascular disease.

Age-related risks for women
Differences in Eurostat cause-of-death rates for women under 65 versus all women yield some interesting conclusions.
Diseases of the cardiovascular system (circulatory disease and heart disease) account for the largest share of deaths in elderly women in Europe, well ahead of cancer. Lung cancer causes about 65 percent more deaths than breast cancer, with colorectal cancer only slightly behind.
There is a steep rise in the age-related risk of dying from cardiovascular disease (CVD); the relative increase is slightly greater still for respiratory disease, albeit from a much smaller death rate. The age-related increase in risk is also marked for diseases of the nervous system. Once again, the rise in the risk of dying from lung cancer in older versus younger women significantly exceeds that for breast cancer, while the age-related growth in risk is also high for colorectal cancer.

Lack of attention: The CVD example
Attention to specific age-related health issues in women has been inadequate.
For example, though it has long been known that CVD is a significant cause of female death, women present with different symptoms than men: a heart attack in a woman often manifests as what feels like indigestion rather than chest pain. Women are also less likely to seek, or to be provided with, medical help, and less likely to be properly diagnosed until late in the disease process. Such factors are believed to explain why women are less likely to survive a heart attack, particularly when treated by a male doctor.

Other scourges
On the other side of the spectrum are conditions such as osteoporosis and osteoarthritis, which do not usually result in death but lead to chronic pain and limit quality of life. They do not get adequate attention, since they are seen as an inevitable part of ageing, or as less serious conditions than heart disease or cancer. Both osteoporosis and osteoarthritis disproportionately affect women.

Osteoporosis: early start for women
Osteoporosis, for example, is four times more common in women aged over 50 than in men. One reason is that women have a lower peak bone mass and an earlier onset of bone loss than men, on average by 10 years.
In women, rapid declines in bone mass occur in the 65-69 age group, as opposed to 74-79 in men. A second factor is the hormonal change that occurs at menopause, which can alter the calcium composition of a woman’s body.
Meanwhile, interventions like hormone replacement therapy (HRT), once widely used in wealthier countries, have become mired in controversy. Recent studies suggest that, rather than preventing heart disease after menopause as was originally believed, HRT is associated with an increased risk of stroke and heart disease in some ageing women.

Osteoarthritis in one in five elderly women, twice the rate in men

Osteoarthritis shows the same patterns. This degenerative joint disease is associated with ageing and principally affects the articular cartilage. It impacts joints that have been stressed over the years, such as the fingers, knees, hips and lower spine. 80% of osteoarthritis patients have limitations in movement, and 25% cannot perform their major activities of daily life.
Globally, an estimated 18 percent of women aged over 60 have symptomatic osteoarthritis, almost twice the 9.6 percent rate reported in men. Moreover, the incidence of osteoarthritis across the 60-90 age group rises 20-fold in women, compared with 10-fold in men.

Osteoarthritis and CVD
Osteoarthritis also has serious implications for another major problem, namely CVD. Several studies have demonstrated a high prevalence of CVD in osteoarthritis patients; one found that 54% of people with knee and hip osteoarthritis had co-existing CVD.

Need for more research on women
The above observations underline the need for research on diseases and health conditions of concern to women in general, and to elderly women in particular.
Although CVD is one of the best-known examples of differences between the sexes in symptomatic and other responses to disease, there are other cases. For instance, among men and women smoking the same number of cigarettes, women are 20 to 70 percent more likely to develop lung cancer.
One of the first areas of attention is to increase the number of clinical trials dedicated to such issues and to encourage the participation of women in trials.

After thalidomide, women discouraged in clinical trials

Low female representation in clinical trials became a structural problem after the US Food and Drug Administration (FDA) issued a guideline in 1977 banning most women of ‘childbearing potential’ from participating in clinical research studies. The ban was a response to drugs like thalidomide, which caused severe birth defects.
Nevertheless, few denied, even then, that new drugs were metabolized differently by men and women due to factors such as body size, fat distribution and the hormonal environment.
It soon also became apparent that even new life-saving drugs might not work as well in women as they did in men. Worse still, a 2001 study reported that female patients have a 1.5- to 1.7-fold greater risk of developing adverse drug reactions than men, owing to gender-related differences in pharmacokinetics as well as immunological and hormonal factors.
Between 1997 and 2000, eight of the 10 drugs for which the FDA withdrew approval had harmful side effects for women.

US changes approach, but gap still large

In the late 1980s, the FDA issued new guidelines to encourage the inclusion of more women in studies, and in 1993 it formally rescinded its policy discouraging women from participating.
Studies published between 2011 and 2013 evaluated the inclusion and analysis of women in federally funded randomized clinical trials. The researchers found that non-sex-specific US studies had an average enrolment of 37% women. However, almost two out of three studies neither broke down their results by sex nor explained why the influence of sex on their findings was ignored.

The European case
The situation is similar in Europe. For instance, despite the role of CVD in female mortality, a EuroHeart report found that women comprised only a third of CVD trial participants, while one in two studies did not report results by gender. Until the 1990s, clinical research in Europe followed the US lead and focused mainly on men. As the US shifted its stance towards encouraging women in trials, Europe followed suit, using the International Conference on Harmonisation (ICH) as a vehicle. ICH guidelines require that Phase I response data be obtained for relevant sub-populations “according to gender.” However, many of the requirements offer opt-outs with wording like “if the size of the study permits,” or recommend merely that demographic subgroups be “examined.”

New Regulation on Clinical Trials

EU rules on clinical trials are due to be overhauled when the new Clinical Trial Regulation (Regulation (EU) No 536/2014) comes into application. The Regulation harmonises clinical trial assessment and supervision via a Clinical Trials Information System (CTIS), which will be maintained by the European Medicines Agency (EMA).
The Regulation was adopted in 2014 but will apply only once the CTIS has been certified through an independent audit, which is still ongoing.
The new Regulation recommends that the “gender and age groups” likely to use a medicinal product should participate in its clinical trials. However, it still leaves an opt-out where exclusion is “otherwise justified in the protocol”, although “non-inclusion has to be justified”.
In other words, the jury is still out.

Point of Care Testing: Complementing the Laboratory

Point-of-care testing (POCT) is typically described as a clinical test performed at, or close to, the physical location of a patient. This could be the patient’s home, a pharmacy, a GP’s office or a hospital bedside. POCT typically relies on portable devices and instruments that return results quickly, permitting immediate intervention or treatment.
POCT can also be usefully defined by what it is not: a POCT is simply a test that is not analysed in a laboratory. It short-circuits the chain of steps that laboratory testing involves, from collecting a specimen and transferring it to the lab to performing the test and transmitting the results back to the provider.
POCT is increasingly used to diagnose and manage a range of diseases, from chronic conditions such as diabetes to acute coronary syndrome (ACS). Recent additions include genetic tests.

Driven by miniaturisation
The POCT era is considered to have begun in the 1970s, with a test to measure blood glucose levels during cardiovascular surgery. In 1977, a rapid pregnancy test called ‘e.p.t.’ became the first POCT for use wholly outside a hospital.
Since the late 1980s, one of the key drivers of POCT has been product miniaturization, with increasingly sophisticated and ever-smaller mechanical and electrical components integrated onto chips that can analyse biological material at the microscale. The pace of miniaturization has accelerated in recent years, yielding mobile handheld and wearable POCT devices. These can be integrated with other applications within a healthcare facility, or aid patients in the monitoring and self-management of chronic conditions.

Wide product range, but handful of tests dominate
The most widely used POCTs include “blood glucose testing, blood gas and electrolytes analysis, rapid coagulation testing, rapid cardiac markers diagnostics, drugs of abuse screening, urine strips testing, pregnancy testing, faecal occult blood analysis, food pathogens screening, haemoglobin diagnostics, infectious disease testing and cholesterol screening.” Nevertheless, just three tests – urinalysis by dipstick, blood glucose and urine pregnancy – are believed to account for the majority of POCT.

Comparisons with the lab
Beyond definitions, the relationship of POCT to the laboratory is close for a very good reason: most clinical cases for POCT use lab testing as a comparator. In other words, the first question asked of POCT is whether its results match those of a laboratory. Although evidently quicker to obtain, is POCT as reliable? Another point of comparison is the cost of POCT versus lab tests.

Costs: a vexed question

Even in the heady early days of POCT, there was awareness of potential cost downsides. One of the first efforts to address the question was a US study published in 1994 in Clinical Therapeutics [1]. The study, by the Office of Health Policy and Clinical Outcomes at Thomas Jefferson University Hospital in Philadelphia, sought to determine time and labour costs for POCT versus central laboratory testing in a cohort of 210 patients presenting to the emergency department.
The patients had blood drawn for a Chem-7 profile (sodium, potassium, chloride, carbon dioxide, blood urea nitrogen, glucose and creatinine) or for a complete blood count (CBC). Largely because of its much quicker turnaround time (TAT), physicians reported that POCT would have resulted in earlier therapeutic action for 40 of the 210 patients, or 19 percent. Costs for POCT were, however, over 50 percent higher, and varied significantly with test volume. The authors speculated that increasing volumes of POCT would reduce costs “substantially.”

Volumes lower cost
The perception that POCT is much more expensive than a centralized laboratory persists, for several reasons. Consumables generally cost more than tests run on automated laboratory instruments, and POCT simply cannot achieve the economies of scale associated with the latter. It also ties up more staff time.
However, right from the early stages of POCT use, it seemed likely that unit costs could be reduced by increasing test volumes, as anticipated in the 1994 Jefferson University Hospital study.
POCT also quickly demonstrated enhanced utility for certain kinds of tests. In 1997, a study at an Indiana hospital reported a near-halving in the unit costs of panels, from USD 15.33 to USD 8.03, following POCT implementation for blood gases and electrolytes [2].
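The volume effect at work here can be illustrated with a simple unit-cost model, in which a fixed annual cost (device, quality control, training) is spread over a growing number of tests while the per-test consumable cost stays constant. A minimal sketch; the dollar figures are illustrative placeholders, not data from the studies cited:

```python
def unit_cost(fixed_per_year: float, consumable: float, tests_per_year: int) -> float:
    """Unit cost = share of fixed costs per test + per-test consumables."""
    return fixed_per_year / tests_per_year + consumable

# Hypothetical inputs: 5,000/year in fixed costs, 4.00 per test in consumables.
for volume in (500, 2_000, 10_000):
    print(volume, round(unit_cost(5_000, 4.0, volume), 2))
# 500 -> 14.0, 2000 -> 6.5, 10000 -> 4.5: unit cost falls steeply with volume,
# consistent with the near-halving reported after POCT implementation.
```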

Levelling the field of play
One of the biggest hurdles in comparing the costs of POCT and lab tests is the difficulty of levelling the playing field. It is also difficult to draw generalised conclusions from such an exercise, since key conditions often vary significantly from one care facility to another. POCT is, moreover, complex to manage, and maintaining regulatory compliance is challenging, particularly in large institutions.
Though the cost of consumables is straightforward to determine, this is hardly so for labour.
Labour costs for a lab test are not limited to staff in the laboratory. They also include the costs of staff in the pre-analysis phase, for phlebotomy, nursing and other services, many of which entail administrative overheads. Typically, these consist of formalities in the collection of phlebotomy supplies, the completion and submission of a test request, the labelling of tubes, and specimen packaging and despatch.
In contrast, POCT eliminates most pre-analytic steps, along with the associated staff costs and overheads, and can be undertaken by personnel who are not trained in clinical laboratory sciences.
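A minimal sketch of how this pre-analytic labour difference might be tallied, summing per-step staff minutes at a blended wage rate. The steps paraphrase the text above; the minutes and the wage are hypothetical:

```python
# Hypothetical per-step staff minutes for a lab-routed test versus POCT.
LAB_STEPS = {
    "collect phlebotomy supplies": 3,
    "complete and submit test request": 4,
    "label tubes": 2,
    "package and despatch specimen": 5,
    "perform test in laboratory": 6,
    "transmit results to provider": 2,
}
POCT_STEPS = {
    "collect sample at point of care": 2,
    "run POCT device and read result": 3,
}

WAGE_PER_MINUTE = 0.60  # hypothetical blended staff wage

def labour_cost(steps: dict) -> float:
    """Total labour cost: summed staff minutes times the blended wage."""
    return sum(steps.values()) * WAGE_PER_MINUTE

print(f"Lab:  {labour_cost(LAB_STEPS):.2f}")   # 13.20
print(f"POCT: {labour_cost(POCT_STEPS):.2f}")  # 3.00
```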

Cost versus value
Although it seems common sense that POCT labour costs are significantly lower than those of a laboratory test, calculating this precisely requires a complex time-and-motion study that accounts for differences in wages and other costs for phlebotomists, nurses, administrative staff and medical technologists.
Unit product cost therefore reflects only part of the overall equation when it comes to justifying the case for a test. Indeed, many experts now urge assessments based on unit value rather than unit cost.

The role of TAT
With POCT, faster TAT promises better treatment, reduced patient stays, superior workflow and improved clinical outcomes. POCT is, however, less about reducing TAT than about making results available in an optimal and clinically relevant time frame. This, in turn, is frequently dictated by the conditions being treated and the setting in which care is delivered.
Delayed test results also affect costs in indirect ways. For instance, radiology departments use creatinine POCT before administering contrast agents, since patients with impaired renal function can develop contrast-induced kidney injury. The bedside result allows quick decisions about patients and efficient use of costly CT scanners; if physicians had to wait for results from a laboratory, the scanner would risk sitting idle.
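The workflow just described, a POCT creatinine result gating contrast administration, amounts to a simple pre-scan check. A minimal sketch; the threshold is an illustrative placeholder only, since real cut-offs are set by local radiology protocols and patient factors:

```python
def contrast_cleared(creatinine_mg_dl: float, threshold_mg_dl: float = 1.5) -> bool:
    """Gate contrast administration on a bedside creatinine result.

    The 1.5 mg/dL cut-off is a hypothetical placeholder, not a clinical
    recommendation; local protocols define the actual threshold.
    """
    return creatinine_mg_dl <= threshold_mg_dl

# A bedside result of 1.2 mg/dL clears the patient without waiting for the
# central lab, keeping the CT scanner in use.
print(contrast_cleared(1.2))  # True
```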

POCT can sometimes be only choice
Some tests have to be performed at the point of care because there is simply no time to transport a sample to a lab.
One good example is an activated clotting-time test, used to monitor cardiac patients undergoing high-dose heparin therapy, whose blood starts to clot immediately after a sample is collected. Another is a POCT glucose test, where a quick result is crucial in determining insulin dosage for diabetic patients.
Elsewhere, whole blood cardiac-marker POCT tests in an A&E facility allow physicians to make rapid decisions on patients with acute coronary syndromes in terms of triage and disposition for observation, catheterization or transfer to a cardiac ICU.
Yet another example is a rapid flu test, used to identify patients who could benefit from antiviral therapy requiring administration as soon as possible after infection, in order to reduce symptomatic intervals. None of the above permit the wait times required for a lab test.

The grey zones
Still, there are grey zones where lab tests have advantages that are non-negotiable under certain conditions.
One example is routine monitoring of international normalized ratios (INR) for patients on warfarin, which is used for prophylaxis against stroke and systemic embolism in patients with atrial fibrillation or mechanical heart valves. The goal of testing is to ensure that anticoagulation stays within the appropriate range: above a certain threshold there is a risk of bleeding, while below it there is a danger of clotting.
While warfarin toxicity can result in life-threatening risk of bleeding, inappropriate warfarin dose reduction can lead to inadequate protection from a stroke or systemic embolism.
Lab-based testing entails the patient travelling to a GP, or having a caregiver come to take blood at the patient’s home, and doing this regularly. However, even a one-day TAT for the lab test can be a major problem in terms of warfarin dosage. The utility of POCT here seems clear. The GP can know the results and adjust the medication dosage immediately. In addition, POCTs can enable certain categories of patient to self-test and manage warfarin therapy.

Lab tests as gold standard
However, POCT results can deviate significantly from those of laboratory analysers. In the case of warfarin monitoring, the deviation grows as INR values rise. Correction factors are typically device- and institution-specific and cannot be uniformly applied across institutions. Many clinicians therefore require POCT INRs greater than 5.0 to be confirmed with a venipuncture sample and a lab test.
Lab tests therefore remain the gold standard. Laboratory instrumentation provides robust analytics during a test and includes a host of quality controls, from test strengths and timings to testing accuracy. These are incorporated into a laboratory information system (LIS) and stored in the patient’s case file. POCT simply cannot provide such depth of information.
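The confirmation rule described above is easy to express as a check. A minimal sketch; the 5.0 cut-off comes from the text, while the function name and structure are ours:

```python
def needs_lab_confirmation(poct_inr: float, cutoff: float = 5.0) -> bool:
    """Flag POCT INR readings that clinicians require a lab test to confirm.

    POCT and lab analysers diverge as INR rises, so high bedside readings
    are treated as provisional until confirmed on a venipuncture sample.
    """
    return poct_inr > cutoff

print(needs_lab_confirmation(3.2))  # False: the POCT result can stand
print(needs_lab_confirmation(5.6))  # True: confirm with a lab test
```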

Gaps being closed
In brief, both POCT and laboratory testing have pluses and minuses. POCT provides definite advantages and reduces risk in some situations.
However, laboratory testing is more advanced, follows scientific process more closely, and is fully integrated with the kinds of technical redundancies needed to ensure greater accuracy and validated records.
Nevertheless, gaps between the two are being closed, especially through software technology.
Some hospitals now have dedicated satellite labs in emergency rooms and outpatient facilities equipped with POCT.

[1] Clinical Therapeutics, 1994. https://www.ncbi.nlm.nih.gov/pubmed/7859247
[2] Bailey TM, Topham TM, Wantz S, et al. Laboratory process improvement through point-of-care testing. Jt Comm J Qual Improv 1997;23(7):362–80

Contrast Enhancement: Expanding Frontiers of Ultrasound

Contrast agents have been key to enhancing the diagnostic capability of computed tomography (CT), magnetic resonance imaging (MRI) and clinical radiography. Since the turn of the millennium, contrast-enhanced ultrasound (CEUS) has also emerged as an imaging tool. Along with developments in scanning hardware, new contrast agents have expanded the application envelope of ultrasound. During CEUS, liquid suspensions of tiny biodegradable gas-filled microspheres (known as ‘microbubbles’) are injected as tracers for ultrasound imaging examinations. The microbubbles are metabolized and expelled from the body within minutes. Clinical applications for ultrasound contrast agents potentially extend to any organ or physiological system that is evaluated with conventional ultrasound, with the singular exception of the fetus. As of now, the major applications are in cardiac and hepatic imaging. Other applications are being explored, including paediatric CEUS.

From imaging complement to alternative
There is growing evidence that CEUS is valuable, accurate and cost-effective. It often complements CT and MRI and, in several instances, has become an important alternative to either. This especially concerns patients with renal failure, those who wish to avoid the radiation risk of CT, and those who cannot cope with being shut inside a scanner.
Interest in CEUS has grown sharply since 2016, when the Food and Drug Administration (FDA) approved a microbubble contrast agent for liver CEUS, paving the way for much faster growth in the US market.

The microvascular challenge
Clinically, one of the key drivers of CEUS has been the limits to the performance of ultrasound imaging and Doppler techniques. While B-mode provides anatomical information, Doppler allows visualization of the larger vessels of the macrovascular system, based on the velocity of blood flow in the intravascular lumen. However, both spatial resolution and Doppler sensitivity have limits.
The utility of conventional ultrasound declines rapidly when a clinician needs to visualise smaller vessels and capillaries lying within deeper structures of the body’s microvascular system.
To achieve this, and more specifically to determine differences in arrival, dwell and washout times within specific regions of parenchymal tissue, direct imaging via tracers is needed. It is in this capacity that contrast agents play a useful role: they improve the sensitivity and specificity of ultrasound and greatly expand its scope of application.

The advantages of CEUS

CEUS has certain intrinsic advantages when compared to other imaging modalities.
It permits ultra-high temporal resolution imaging of contrast enhancement profiles, at between 20 and 50 images per second, for a duration of about 5-8 minutes. This makes continuous visualization possible in all phases, from the early arterial to the late phase, helping to ensure that no enhancement pattern is missed. CEUS also allows follow-up examinations at short intervals and, given its lack of ionising radiation, repeated examinations over a long period of time, a common requirement in chronic disease. CEUS is also convenient: it can be used at multiple bedside locations, from intensive care units (ICUs) and operating rooms to recovery rooms and ambulatory units.
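The frame rates and durations quoted above translate into a substantial image volume per examination, which is part of what makes continuous all-phase visualization possible. A quick check using only the figures from the text:

```python
# 20-50 images per second over roughly 5-8 minutes of scanning.
for fps, minutes in ((20, 5), (50, 8)):
    print(fps * minutes * 60)  # 6000 to 24000 frames per examination
```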
Contrast agents for ultrasound have been found to be safe with no cardio-, hepato-, or nephro-toxic effects. Laboratory checks to assess liver, renal or thyroid function before administration are therefore not required.

Evaluating liver lesions
In the liver, CEUS has proven its utility when clinicians encounter focal lesions during cross-sectional imaging of an asymptomatic patient. Though most such incidental findings are benign, dedicated imaging characterization and diagnosis must follow in order to exclude malignancy. This is especially true when the lesions are large or otherwise atypical, and when the patient is from a high-risk group.
Traditionally, the evaluation of lesions was undertaken with MRI or multiphase CT. However, the former was generally limited in availability, while multiphase CT raised concerns about radiation. CEUS, by contrast, is safe, non-invasive and available.
When CEUS is used in the liver, microbubble delivery occurs via two routes, the hepatic artery and the portal vein. Blood flowing through the latter must first transit the gastrointestinal circulation, and therefore arrives at a later time point. This permits differentiation between the two wash-in phases.
CEUS enhances the display of vascularity in liver lesions, and is both accurate and reproducible. The vascular supply of a focal liver lesion is characteristic of the lesion type and differs from normal liver tissue. While the abnormal vascularity of hepatocellular carcinoma can be demonstrated early, during the contrast inflow phase, metastases are characterised in the late phase. In addition, the timing and intensity of washout can differentiate hepatocellular malignancies from non-hepatocellular ones: the former demonstrate delayed, weak washout, while non-hepatocellular tumours show strong, early washout.
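The washout logic just described, with timing and intensity separating hepatocellular from non-hepatocellular malignancies, can be sketched as a simple classifier. The category labels follow the text; the 60-second boundary is an illustrative placeholder, not a clinical value:

```python
def washout_pattern(onset_seconds: float, intensity: str) -> str:
    """Classify a malignant liver lesion by washout behaviour (per the text:
    hepatocellular malignancies wash out late and weakly; non-hepatocellular
    tumours wash out early and strongly). The cut-off is hypothetical."""
    early = onset_seconds < 60  # illustrative boundary only
    if early and intensity == "strong":
        return "suggests non-hepatocellular malignancy"
    if not early and intensity == "weak":
        return "suggests hepatocellular malignancy"
    return "indeterminate: consider further imaging"

print(washout_pattern(45, "strong"))  # early, strong washout
print(washout_pattern(180, "weak"))   # delayed, weak washout
```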

The need for right dosing
Using the optimal dose is important. Too high a contrast agent dose produces artefacts, particularly in the early phases of enhancement: acoustic shadowing, over-enhancement of small structures and signal saturation, which is also detrimental to quantification.
Too low a dose, on the other hand, leaves the concentration of microbubbles sub-diagnostic in the late phase, making washout harder to detect.
If washout appears early, the dose was probably too low. Here, it can be important to evaluate whether the liver is healthy or diseased. In difficult cases, a second (higher) dose may be administered, with no or only limited scanning in the early phases to reduce bubble destruction. The exact dose depends on the contrast agent, the ultrasound equipment (software version, transducer), the type of examination, the organ and target lesion, and the size and age of the patient.

Other challenges for CEUS in the liver
Apart from dosing, there are other limitations to the use of CEUS in the liver. Very small lesions may be overlooked; the smallest detectable lesions are considered to be 3-5 mm in diameter.
There are also some specific pitfalls, such as the fat layers surrounding the falciform ligament, which can cause enhancement defects that might be confused with a lesion.
Given the limits to penetration, deep-seated lesions may not always be accessible. However, some clinicians suggest that bringing the liver closer to the transducer, by positioning the patient in left lateral decubitus, can overcome this limitation.

CEUS and cardiology
CEUS has also shown remarkable utility in cardiology. After tracer injection, microbubbles follow the flow and distribution of red blood cells, opacify the cardiac chambers and enhance delineation of the left ventricular border. The microbubbles are then ejected into the arterial circulation, allowing visualization of blood flow into the parenchymal organs.
Assessment of cardiac function depends on proper delineation of the endocardial border and wall motion patterns, and this is where conventional ultrasound faces serious limits. Intracardiac echo reflections are coupled with weak signals from structures lying parallel to the echo beam; the resulting delineation of the endocardial border can be unclear, leading to an inaccurate left ventricular assessment.
What contrast agents achieve here is to completely fill the ventricular cavity and thereby delineate it in a similar fashion to cardiac MRI.
Proper assessment of cardiac function is especially important in stress echo tests performed to demonstrate inducible ischaemia, where the risk of a stress examination means that inadequate image quality is unacceptable. Precise delineation of the cardiac chamber is also required to assess cardiac insufficiency and decide whether an automatic implantable cardioverter defibrillator (AICD) is indicated, and with cancer chemotherapy patients, in order to assess cardiotoxicity.

New contrast agents
First-generation ultrasound contrast agents were based on air; despite its solubility in blood, this was adequate for the equipment of the time. Second-generation agents contain an inert lipophilic gas with very low solubility, thus avoiding early leakage of the gas and providing more stability to the microbubbles.
Modern contrast agents have a shell made out of a thin and flexible phospholipid membrane. One side, which faces the surrounding blood, has hydrophilic properties. On the other, lipophilic chains make contact with the encapsulated gas.
Over recent years, technology development has focused on ultrasound contrast agents that reduce microbubble size and increase persistence in the circulation to 10 minutes or more. Researchers are also seeking new materials and gases for the encapsulating shell or surface of the microbubble, in order to inhibit dissolution and diffusion.

Constraints faced by microbubbles
In spite of the above developments, there are some constraints with microbubbles.
They do not last long in circulation, as they are taken up by immune-system cells and by the liver and spleen. They also have low adhesion efficiency, meaning only a small fraction binds to an area of interest. Microbubbles can also burst at low ultrasound frequencies and high mechanical indices, which in turn can cause local microvasculature ruptures and haemolysis.

Guidelines on CEUS
The use of CEUS varies widely from one country to another, and even between different healthcare facilities in the same country.
Guidelines for the use of CEUS in liver applications were first issued in 2004 and updated in 2008, reflecting growth in the availability of contrast agents. CEUS has also been recommended in guidelines for several non-liver applications, under the auspices of the European Federation of Societies for Ultrasound in Medicine and Biology (EFSUMB).
The latest liver guidelines date to 2012, published under the joint auspices of the World Federation for Ultrasound in Medicine and Biology (WFUMB) and EFSUMB. The aim is to create standard protocols for CEUS in liver applications across the world.
According to the guidelines, CEUS is indicated for liver lesion characterization in the following clinical situations:
• Incidental findings on routine ultrasound
• Lesion(s) or suspected lesion(s) detected with US in patients with a known history of a malignancy, as an alternative to CT or MRI
• Need for a contrast study when CT and MRI contrast are contraindicated
• Inconclusive MRI/CT
• Inconclusive cytology/histology results

Paediatric applications
One new frontier for CEUS applications is children.
Currently, sulphur hexafluoride gas microbubbles are FDA-approved in the US for characterising focal liver lesions in children and for evaluating vesico-ureteral reflux. In Europe, CEUS in children is indicated for vesico-ureteral reflux, although there is significant off-label use too.
