Imaging: the new frontier for clinical decision support

Clinical decision support (CDS) is one of the most exciting areas of healthcare IT. It leverages state-of-the-art IT tools, ranging from data-mining algorithms to complex neural networks, and seeks to address one of healthcare IT’s biggest challenges: Big Data. For its proponents, CDS is a means to standardize clinical practice within a framework of evidence-based clinical rules.

Information overload and CDS
In a recent publication, Ken Ong, Chief Medical Informatics Officer of New York’s Queens Hospital, discusses the importance of CDS tools and processes to modern medical practice. He cites the quadrupling of medical journal articles, from 200,000 in 1970 to over 800,000 in 2010, and calculates that, given the current rate of publication in the medical literature, a medical school graduate reading two articles every day ‘would be 1,225 years behind at the end of the first year.’ Another interesting figure concerns national clinical care guidelines for preventive services and chronic disease management. Ong writes that were physicians to follow all of these, alongside their routine tasks for a typical patient panel, they would need a workday of 21.7 hours. His conclusion is simple: ‘Information overload coupled with a paucity of time suggest the value of CDS and greater team-based care.’

Reduction of inappropriate imaging
In its radiology incarnation, a CDS platform provides evidence-based information and patient-tailored tools for making imaging decisions at the point of care. The system is embedded within the clinical workflow and allows a physician to quickly determine what type of imaging exam is needed for a patient with specific symptoms, effectively steering choices away from low-yield exams. This ensures the appropriate use of radiation while avoiding unnecessary exposure. It also evidently saves costs.
In practical terms, radiology CDS is provided as an interface to a computerized physician order entry (CPOE) system. In February 2012, the Journal of the American College of Radiology published the results of a pilot study at Boston’s Brigham and Women’s Hospital on a web-enabled CPOE system with embedded imaging decision support. The project was run between 2000 and 2010 across the hospital’s outpatient, emergency and inpatient departments and established significant increases in meaningful use for electronically created studies (from 0.4 percent to 61.9 percent) and for electronically signed studies (from 0.4 percent to 92.2 percent).
Also in 2012, the American College of Cardiology announced the results of a two-year-old initiative known as ‘Imaging in FOCUS’, which aimed at reducing inappropriate use through CDS software. The initiative had considerable success, with participating practices reporting a sharp reduction in inappropriate ordering, by close to 50 percent in one year (from 12 to 7 percent).

Laggard in healthcare IT

In spite of this, CDS has until recently been limited to prescriptions, laboratory tests and treatment protocols, with imaging described as ‘a laggard on the health IT technology adoption curve.’
In the US, health IT investments of higher priority to hospitals (certified electronic health record technology, or CEHRT, needed to comply with the federal meaningful use (MU) programme; better security systems; and ICD-10 conversion software) have superseded investments in radiology CDS.

A boost from PAMA

However, radiological CDS systems received a boost in the US after passage of the Protecting Access to Medicare Act (PAMA) in April 2014. Although much of its focus is on physician reimbursement, PAMA also provides incentives to change physician behaviour with regard to imaging. The key clause in PAMA is Section 218, which encourages the development and use of clinical practice guidelines for ordering imaging tests. These guidelines, in turn, form the core of radiology decision support tools.
PAMA closes a gap in the meaningful use clauses of the EHR Incentive Reimbursement Program, which has been targeted at the electronic health record.
EHR design does not accommodate radiology workflow and processes – and has therefore had little relevance for radiologists so far. This is what PAMA seeks to address.
The impact of PAMA on CDS is likely to be major once it takes effect. The deadline was originally set for January 1, 2017, but has since been shifted to ‘approximately the summer of 2017’, in order to give healthcare providers more time to adapt.
Once PAMA is in force, physicians in office, hospital outpatient or emergency department settings will have to consult appropriate use criteria (AUC) when ordering CT, MRI and nuclear medicine-based imaging such as PET (X-ray, fluoroscopy and ultrasound exams are excluded). PAMA explicitly states that physicians offering diagnostic interpretation will be reimbursed by Medicare only for claims which confirm that a certified CDS system was used.

ACR Select: appropriate use for imaging
Although there are several initiatives, the radiological CDS system which seems most likely to become a global reference is ACR Select. The system, which debuted at the Radiological Society of North America (RSNA) Annual Meeting in 2012, was developed jointly by the American College of Radiology (ACR) and the National Decision Support Company (NDSC). ACR Select is designed to ‘reduce inappropriate use of diagnostic imaging’ by using CDS software to track AUC.
ACR Select offers a database with more than 130 topics and 614 variant conditions that provide evidence-based guidance for the appropriate use of all imaging procedures. More than 300 volunteer physicians, representing more than 20 radiology and non-radiology specialty organizations, participate on the ACR expert panels to continuously update these guidelines.
An ACR Select interface is provided for computerized physician order entry (CPOE) applications. The interface pops up when a physician requests an imaging exam for a patient. The physician is required to input information on the patient’s clinical condition, along with the imaging exam sought. ACR Select then gives an appropriateness score, accompanied by a colour code (green, yellow or red) which indicates whether a study is clinically indicated based on the ACR’s appropriateness criteria.
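As a rough sketch of this final scoring step, the snippet below maps a numerical appropriateness rating to the traffic-light code described above. It assumes the ACR’s 1-9 appropriateness rating scale; the thresholds, function name and return values are illustrative only, not NDSC’s actual interface.

```python
# Hypothetical sketch of the colour-coding step described in the text.
# Assumes the ACR's 1-9 appropriateness rating scale; thresholds and
# names are illustrative, not NDSC's actual API.
def colour_code(appropriateness_score: int) -> str:
    """Map a 1-9 appropriateness rating to a traffic-light code."""
    if not 1 <= appropriateness_score <= 9:
        raise ValueError("score must be between 1 and 9")
    if appropriateness_score >= 7:
        return "green"   # usually appropriate
    if appropriateness_score >= 4:
        return "yellow"  # may be appropriate
    return "red"         # usually not appropriate

print(colour_code(8))  # green: the requested study is clinically indicated
```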

Europe sees no need to reinvent the wheel
Developments in the US have spilled over into Europe.
In autumn 2013, Hospital Clinic of Barcelona started to test ACR Select, with the aim of adapting its appropriateness criteria to European standards of practice. Shortly afterwards, a team of senior radiologists began work developing Europe-specific and evidence-based imaging referral guidelines. These were based not only on translating the US criteria into Spanish, but also adapting them to local clinical situations, diagnostic codes, and country-specific practices. The target was ‘to cover around 80 percent of requests in daily practice by reviewing the clinical scenarios, indications and recommendations’ for a large range of topic groups.
The embryonic system was subsequently tested with 80 general practitioners in Hospital Clinic of Barcelona’s network. The GPs were given feedback on how their requests for imaging exams matched the appropriateness criteria. The tests were then rolled out to other specialists, including emergency physicians.

At the European Congress of Radiology (ECR) in Vienna in March 2014, Dr. Lluis Donoso Bach, director of the diagnostic imaging centre at Hospital Clinic of Barcelona, pointed out that the economic crisis had left radiologists looking for innovative ways ‘to do more with less.’ Europe, he said, could benefit by adapting ACR Select to its needs, and avoid going through an exhaustive process of creating its own criteria for appropriate imaging.
In the months that followed, some ten pilot projects to adapt ACR Select to Europe were launched in European countries including the United Kingdom, Germany, Italy, Spain, Portugal and Sweden.

Conflicts in European models, global ambitions
In retrospect, one of the most persuasive arguments swinging the choice of radiology CDS towards ACR Select was the conflict between emerging European CDS models. The European Society of Radiology (ESR) had first sought to develop a CDS system based on guidelines from the French and British radiological societies. However, preliminary work soon identified ‘considerable discrepancies’ between the two sets of rules, and this led the ESR to turn to ACR Select.
Yet another advantage of a joint Euro-American approach is acknowledged by the ESR: it gives ‘a global dimension for the ACR and ESR’s common vision of establishing a global set of imaging referral guidelines in the future.’ As Pharma Times noted, the collaboration is ‘a decisive first step towards harmonizing AUC for imaging at a global level’. It added that interest in the system from Australia and Asia suggests ‘that the radiology field is indeed headed towards a globalization of ordering guidelines.’
In March 2016, National Decision Support Company (NDSC) established a European subsidiary in Vienna, home of the ESR. Outside Europe, one of its first targets is the Middle East.

ESR launches Europeanised prototype
In March 2015, the ESR formally launched a prototype of the adapted US CDS system, which it called iGuide. The launch took place at the ECR in Vienna. On the same occasion, Dr. Lluis Donoso Bach took over as ESR President, with his term lasting until 2016.
During the launch, Erika Denton, National Clinical Director for Diagnostics with NHS England, presented some figures on the localization and adaptation of ACR Select into the ESR iGuide. These included 16% rating changes (changes in the ratings attributed to an orderable imaging exam) and 9% category changes (changes in the imaging modality recommended in a given clinical scenario).

iGuide
iGuide makes evidence-based imaging referral guidelines available and easy to use across Europe. It is designed as a user-friendly system available at the point of care, and can be stand-alone or integrated with ordering systems and linked to electronic health records. As with ACR Select, it aims to ensure ‘a simpler, faster and reliable clinical workflow.’
iGuide also retains an element of flexibility. Users can localize recommendations according to their needs, starting from the evidence-based core. In addition, the ESR iGuide can be adapted to users’ needs and institutional settings, for example by taking into account the availability of certain types of imaging equipment. This is relevant not only for Europe but also for other heterogeneous global markets, and will be crucial to eventually making the Euro-American effort an international success.
The ESR plans to continuously update iGuide to provide users with the latest evidence, instead of publishing a complete overhaul every few years.

Best practices in patient ventilation

Accelerating demand from ICUs has been driving the use of mechanical ventilation (MV). This is due to demographic changes triggering growth in elderly patient numbers, as well as advances in the ability to delay or prevent mortality. Nevertheless, there are also significant differences in the management of ventilated patients, with no consistent correlation to outcomes. Given the relatively high costs of mechanical ventilation, experts are seeking ways to develop and share best practices.

Growth in ICU drives demand
The Society of Critical Care Medicine (SCCM) estimates that 20-30% of patients admitted to an intensive care unit (ICU) require MV. The scale of the challenge is underlined by the fact that about one-fifth of all acute care admissions in the US and 58% of emergency department admissions are made to an ICU.

The above facts are somewhat ironic. The mechanical ventilator is one of the most powerful symbols of modern medical technology, and progress in intensive care technologies has allowed more patients to survive acute critical illness than ever before. However, the very same advances have created what one study describes as ‘a large and growing population of patients with prolonged dependence on mechanical ventilation and other intensive care therapies.’
The roots of such developments go back decades. In 1985, two North American clinicians coined the term ‘chronically critically ill’ in an article about the ICU titled ‘To Save or Let Die?’ It is estimated that between 5 and 10% of patients who require mechanical ventilation for acute conditions develop chronic critical illness. Many of these cases end in death.
Other sources endorse these findings. In 2004, a study on patients with tracheostomy for respiratory failure found that the mortality of ventilator-dependent patients was as high as 57%.

Europe and the US
The situation is challenging in Europe, too, in spite of differences vis-à-vis the US. For instance, although the UK has a seven-fold lower level of ICU beds per capita than the US, 68% of UK patients are mechanically ventilated within 24 hours of ICU admission, well over twice the 20-30% level estimated by the SCCM for the US. In spite of this, there are no differences in mortality for mechanically ventilated patients admitted from the ER.
The impact of these differences spills over into other areas. Although strictly comparable figures are not available, differences in the ICU environment between one European country and another would clearly have an impact. The per capita density of adult ICU beds varies seven-fold, from 3.3 per 100,000 population in the United Kingdom to 24.0 per 100,000 in Germany.

Prolonged mechanical ventilation
One of the most pressing challenges, with respect to divergent practices, is the duration of ventilation.
Prolonged mechanical ventilation (PMV) is now generally accepted to be ventilation that lasts for 21 or more days. There are few studies of PMV incidence, and even these are accompanied by variations in definitions.
Nevertheless, a Canadian workshop cites two studies to estimate that, on an international basis, patients requiring PMV account for up to 10% of all mechanically ventilated patients, 40% of ICU bed days and 50% of ICU costs. These figures may be slightly over-estimated. One US study, for example, finds PMV accounting for 7.7% of ventilated ICU admissions.
In Europe, the proportion of PMV is clearly lower than 10% of ventilated patients. In Scotland, for example, the University of Edinburgh’s Old Medical School reports the incidence of PMV to be 4.4% of ICU admissions and 6.3% of ventilated ICU admissions.

The challenges of PMV growth
The rate of PMV has been growing rapidly, due both to an ageing population and to technological advances which allow mortality in the ICU to be delayed or prevented. In the US, data show the number of patients requiring prolonged mechanical ventilation to be rising steadily. One study covering the period 1993 to 2002 found the incidence of tracheostomy for prolonged mechanical ventilation growing by about 200%, surpassing the change in the overall incidence of respiratory failure by a factor of three.
The resource load imposed by PMV patients is clearly higher. Up to 40% of ICU resources may be spent on them, even though they represent only 10-15% of the ICU population. The University of Edinburgh study mentioned above found that PMV patients used 29.1% of all ICU bed days. In spite of this, the majority of PMV patients die within six months.
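A back-of-envelope calculation, sketched below, shows what the resource shares quoted above imply per patient: if PMV patients are 10% of the ICU population but consume 40% of resources, the average PMV patient uses roughly six times the resources of a non-PMV patient. The figures are the ones quoted above; the flat division is illustrative arithmetic only.

```python
# Illustrative arithmetic on the resource shares quoted in the text.
pmv_pop, pmv_res = 0.10, 0.40                # PMV share of patients / of resources

per_pmv = pmv_res / pmv_pop                  # resource units per PMV patient: 4.0
per_non_pmv = (1 - pmv_res) / (1 - pmv_pop)  # per non-PMV patient: ~0.67

print(f"A PMV patient uses ~{per_pmv / per_non_pmv:.0f}x the ICU resources")
# -> ~6x, under these idealized shares
```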

The costs of ventilation
Overall, the sharp growth in demand for mechanical ventilation and the frequent lack of correlation with outcomes is a major strain on financial and human resources, making it necessary to optimize ventilator use by developing best practices.
The cost of mechanical ventilation has been estimated at 1,522 US dollars per day (about 1,345 euros) in the US, and at 2,110 euros per day in a recent European evaluation. The US figures are adjusted for patient and hospital characteristics, while the European figures are unadjusted. In both cases, intensive care unit costs appear highest during the first two days of admission, stabilizing at a lower level thereafter. Still, the burden of PMV is clearly enormous. In the US, estimated costs per one-year survivor are as high as 423,596 US dollars (371,500 euros).
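To give a sense of scale, the sketch below applies the daily figures quoted above to a minimal 21-day PMV episode. A flat per-day rate is a deliberate simplification, since, as noted, ICU costs are front-loaded in the first two days of admission.

```python
# Illustrative episode-cost arithmetic using the figures cited in the text.
US_COST_PER_DAY_EUR = 1_345   # approximate euro equivalent of the US estimate
EU_COST_PER_DAY_EUR = 2_110   # unadjusted European estimate
PMV_THRESHOLD_DAYS = 21       # PMV is defined as 21 or more days of ventilation

# Direct ventilation cost of a minimal PMV episode under a flat daily rate
print(f"US:     ~{PMV_THRESHOLD_DAYS * US_COST_PER_DAY_EUR:,} euros")  # ~28,245
print(f"Europe: ~{PMV_THRESHOLD_DAYS * EU_COST_PER_DAY_EUR:,} euros")  # ~44,310
```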

Costs are also non-financial. These include long-term physical and psychological consequences which impact upon quality of life and often impose a substantial symptom burden. One study of 23 hospitals in the US pointed to the risks of ‘prolonged ventilator dependence, reduced mobility, as well as anxiety and depression.’ The study also called for an interdisciplinary, rehabilitative approach in the ICU. This trend correlates with wider lessons acquired over half a century of ICU care.
Future innovations in ventilation are likely to be focused ‘on reducing the need for user input, automating multi-element protocols, and carefully monitoring the patient for progress and complications.’

Delivery models: the role of home ventilation
Differences between the US and Europe in delivery models also influence the development of best practices.
The preferred models of care in the US include ‘delivery of protocolized rehabilitation-based care either within the acute ICU or specialized post-ICU venues.’ Patients are generally transferred to respiratory units within an acute hospital or to a long-term acute care hospital, physically located within the former or set up as free-standing institutions.
One crucial factor in the US is the lack of home ventilation, due to current funding models. In Europe, home ventilation is generally available or gaining an increasing profile. Nevertheless, there is still significant variability in practices across countries. The prevalence of home ventilation per 100,000 population averages 6.6 in Europe, but ranges from 17 in France to 0.1 in Poland.

Divergence in care practices and cognitive bias
Heterogeneity of care is probably one of the strongest indicators of the need for best practices. In the context of MV, the need for the latter is underlined by a finding that ICU clinicians are prone to cognitive biases and this may lead to systematic and predictable errors.
The most prominent divergences in practice seem to lie in sedation management and weaning.

Sedation management
Sedation management has been the subject of interest for decades, but is still marked by a lack of consensus.
In 2000, The New England Journal of Medicine published the results of a University of Chicago study comparing the administration of sedatives to MV patients by continuous infusion against daily interruption, which allowed patients to ‘wake up’ and be assessed by clinicians. The latter practice was found to reduce the duration of mechanical ventilation, the length of stay in the ICU, and sedative dosage.
In 2008, a study in The Lancet by the Vanderbilt School of Medicine in the US proposed that a protocol pairing daily interruption of sedatives (spontaneous awakening) with daily spontaneous breathing trials resulted in better outcomes for MV patients and should become routine practice.
In 2010, a team at Odense Hospital in Denmark compared interrupted sedation of MV patients with patients who received no sedation at all. Their findings, also published in The Lancet, indicated that patients receiving no sedation had significantly more days without ventilation and a shorter ICU stay, with no difference in accidental extubations, need for CT or MRI brain scans, or ventilator-associated pneumonia. The researchers called for a study ‘to establish whether this effect can be reproduced in other facilities.’

One ambitious recent effort to study differences in sedation management involved a multicentre study of 40 ICUs in France and Switzerland. The researchers found that a quarter of the participating units did not even have a sedation-management protocol in place. This, they speculated, might be due to a lack of awareness of protocols, or to limited resources. Another possibility was that physicians tend to resist ‘cookbook recipes’ and limitations to their autonomy. In other words, they observed, the presence of a written procedure ‘does not mean that physicians will follow it.’ Even in ICUs with sedation-management protocols, ‘approximately 20% of the physicians were unaware’ of their existence.

Weaning
Another priority for protocols concerns weaning MV patients in the ICU. Studies have shown that 20% of MV patients fail to wean in the ICU and become dependent on mechanical ventilation.
In 2005, as a first step, an international consensus panel proposed classifying weaning into three types, based on difficulty and duration. These consisted of ‘simple’ weaning (successful extubation on a first attempt), ‘difficult’ weaning (patients who require up to three spontaneous breathing trials (SBTs), or up to seven days) and ‘prolonged’ weaning (patients failing at least three SBT attempts or requiring more than seven days after the first attempt).
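Expressed as decision logic, the panel’s three-way scheme might look like the sketch below. The function and parameter names are illustrative, and the rule necessarily flattens clinical nuances that the consensus definitions spell out in full.

```python
# A minimal sketch of the 2005 consensus weaning classification.
def classify_weaning(failed_sbts: int, days_since_first_sbt: int) -> str:
    """Classify weaning as simple, difficult or prolonged."""
    if failed_sbts == 0 and days_since_first_sbt == 0:
        return "simple"     # successful extubation on the first attempt
    if failed_sbts >= 3 or days_since_first_sbt > 7:
        return "prolonged"  # at least 3 failed SBTs, or >7 days after the first
    return "difficult"      # up to three SBTs, or up to seven days

print(classify_weaning(failed_sbts=0, days_since_first_sbt=0))   # simple
print(classify_weaning(failed_sbts=2, days_since_first_sbt=5))   # difficult
print(classify_weaning(failed_sbts=4, days_since_first_sbt=10))  # prolonged
```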
The classification was, however, the subject of a major attack in 2011 by Dean Hess, the Assistant Director of Respiratory Care at Massachusetts General Hospital, and Neil MacIntyre, a Professor of Pulmonary Medicine at Duke University Medical Center. Writing in the American Journal of Respiratory and Critical Care Medicine, the two took the international panel to task for using the term ‘weaning’ interchangeably with ‘discontinuation’ of mechanical ventilation. They also attacked the very concept of weaning, suggesting that little evidence supported a gradual reduction of respiratory support. They urged clinicians to focus on treatment of the underlying disease process rather than on manipulating ventilator settings.

Indeed, the linkage between sedation management and weaning, and the lack of hard data and conclusions on either, was highlighted in a 2014 commentary by Italian, French and German ICU clinicians titled ‘Sedation and weaning from mechanical ventilation: time for “best practice” to catch up with new realities?’ The article, published in Multidisciplinary Respiratory Medicine, argues that ‘delivery of sedation in anticipation of weaning of adult patients from prolonged mechanical ventilation is an arena of critical care medicine where opinion-based practice is currently hard to avoid because robust evidence is lacking.’

The gamma knife – a new tool against epilepsy?

The gamma knife is the best known system for radiosurgery (RS). It allows non-invasive brain surgery to be performed in one session, with extreme precision. Based on preoperative radiological examinations, such as CT or MR scans and angiography, the gamma knife provides highly accurate irradiation of deep-seated targets in the brain, using a multitude of collimated beams of ionizing radiation with scalpel-like precision.

No surgical incision, no anesthesia
The uniqueness of the gamma knife (and radiosurgery in general) is that no surgical incision is required. This serves to minimize risk to adjoining tissue and reduce the risk of surgical complications. It also eliminates the side effects and dangers of general anesthesia, which would otherwise be indispensable for the type of medical conditions it is used to target.
A gamma knife typically contains 201 cobalt-60 sources, each mounted in a circular array within a shielded system. The device aims gamma rays at a target point in the brain via a specialized helmet surgically fixed to the patient’s skull. The ‘blades’ of the gamma knife are the beams of gamma radiation programmed to target the lesion at the point where they intersect. In a single treatment session, beams of gamma radiation focus precisely on the lesion. Over time, most lesions slowly decrease in size and dissolve. The exposure is brief, and only the tissue being treated receives a significant radiation dose, while the surrounding tissue remains unharmed.
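The geometry is what spares healthy tissue: each individual beam is weak, but all of them intersect at the target, where their doses sum. The sketch below makes that point with an idealized, uniform-beam model; real dose planning accounts for attenuation, collimator size and much more.

```python
# Idealized illustration of focused-beam dose summation.
N_BEAMS = 201        # cobalt-60 sources in a typical gamma knife
DOSE_PER_BEAM = 1.0  # arbitrary unit dose deposited by a single beam

dose_at_target = N_BEAMS * DOSE_PER_BEAM  # all beams intersect at the focus
dose_in_tissue = 1 * DOSE_PER_BEAM        # off-focus tissue lies on ~1 beam path

print(f"Target-to-tissue dose ratio ~ {dose_at_target / dose_in_tissue:.0f}:1")
# -> ~201:1 in this idealized geometry
```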

Revolution for brain surgery
The gamma knife has revolutionized brain surgery. Over the last three decades, it has changed the landscape of neurosurgery – treating a range of conditions from brain tumours to vascular malformations with an unmatched level of accuracy. The gamma knife enables patients to undergo a non-invasive form of brain surgery without surgical risks, a long hospital stay or subsequent rehabilitation.
The gamma knife was officially named the Leksell gamma knife, after its lead inventor Lars Leksell, who developed the system in 1967 at the Karolinska Institute in Stockholm. Other key team members included Ladislau Steiner, a Romanian-born neurosurgeon and Borje Larsson, a radiobiologist from Sweden’s Uppsala University.

The CyberKnife
1990 saw the launch of another form of radiosurgical system, based on linear accelerators. The best known of these is the CyberKnife, invented in the US by John R. Adler, a Stanford University Professor of Neurosurgery and Radiation Oncology. Unlike the gamma knife, the CyberKnife does not use radioisotopes. Instead, it uses a linear accelerator mounted on a moving arm to deliver X-rays, once again to a very precise area. The CyberKnife does not use a frame to secure the patient. Instead, a computer monitors the patient’s position during treatment, using fluoroscopy. In other words, the CyberKnife tracks the tumour, rather than fixing the patient. As it does away with a frame, its targets go beyond the brain.

Gamma knife and CyberKnife: Indications
Typically, a gamma knife is used to treat cancer that has metastasized to the brain from another part of the body, acoustic neuroma (a slow-growing tumour of the nerve connecting the ear and brain), pituitary tumours and non-cancerous brain tumours. Its application has also been extended to include certain blood vessel malformations and fistulas, neuralgia, and tremors due to Parkinson’s disease.

For its part, the different design of the CyberKnife allows it also to treat a host of other cancers (breast, kidney, liver, lung, pancreas, prostate and certain skin cancers). The CyberKnife is, however, generally not used to treat non-cancerous brain tumours such as chordoma and meningioma.

Gamma knife and epilepsy: a European initiative

In recent years, the gamma knife has drawn attention due to its showing ‘some promise’ for treating certain types of epilepsy.
Attention to such possibilities, however, dates back to 1993, when the first gamma knife treatment for temporal lobe epilepsy was performed at the Hopital Timone in Marseille, France. Just over five years later, Na Homolce Hospital in Prague followed with a four-year evaluation of the use of the gamma knife in 14 mesial temporal lobe epilepsy (MTLE) patients.

Encouraging results from first study
A pioneering study on the gamma knife and epilepsy at France’s Hopital Timone was published in 2000. It covered 25 patients with drug-resistant MTLE, of whom 16 were followed up for a period of over 24 months. Thirteen (81%) were seizure free, with two improved. The median latent interval from the gamma knife intervention to seizure cessation was 10.5 months (varying from 6 to 21 months), with two patients immediately becoming seizure free. No permanent neurological deficits (except three cases of non-symptomatic visual field deficit), morbidity, or mortality were observed.
Although the authors concluded that the ‘optimal parameters for treatment’ remained to be defined, as did studies on ‘dose-related efficacy, effectiveness over longer follow-up periods, and neuropsychological effects’, they judged that gamma knife interventions could be ‘a reasonable option,’ and that their introduction into epilepsy treatment ‘can reduce the invasiveness and morbidity.’

First and second follow-ups to French study

The first five-year follow-up to the above study released its findings in 2004. It found a reduction in median seizure frequency, from 6.16 seizures in the month before treatment to 0.33 at two years after treatment. At two years, as many as 65% of patients (13 of 20) were seizure free. Five patients reported transient depression, headache, nausea, vomiting, and imbalance. There was ‘no permanent neurological deficit reported except nine visual field deficits.’ Finally, no neuropsychological deterioration was observed two years after treatment, and ‘quality of life was significantly better than that before surgery.’
A second follow-up, in 2008, concluded that the gamma knife was ‘an effective and safe treatment for mesial temporal lobe epilepsy.’ Results, it found, were ‘maintained over time with no additional side effects. Long-term results compare well with those of conventional surgery.’ The findings remained encouraging, with the mean delay to the appearance of the first neuroradiological changes at 12 months. However, all patients who had initially been seizure free experienced a relapse of isolated aura or complex partial seizures during the crucial tapering of their antiepileptic drugs. Restoration of medication resulted in good control of seizures.

Efforts in the US: focus on caution
In 2009, one of the first major multicentre US studies on the gamma knife and epilepsy, led by a team from the University of California, San Francisco, reported three-year outcomes using radiosurgery (RS) for unilateral MTLE.
The authors found seizure remission rates comparable with those reported for open surgery. There were also ‘no major safety concerns with high-dose RS compared with low-dose RS.’ However, they called for additional research to determine whether RS ‘may be a treatment option for some patients with mesial temporal lobe epilepsy.’
Caution was again urged the next year, when the US research group noted that RS was a promising treatment for intractable MTLE but observed ‘that the basis of its efficacy is not well understood’. The researchers nevertheless minced no words in their observation that ‘Temporal lobe stereotactic radiosurgery resulted in significant seizure reduction in a delayed fashion which appeared to be well-correlated with structural and biochemical alterations observed on neuroimaging. Early detected changes may offer prognostic information for guiding management.’

Growing interest and availability in US
Nevertheless, there is growing interest across the US in using the gamma knife for epilepsy.
Its potential is highlighted (albeit to varying degrees) by top facilities such as the Mayo Clinic and other leading hospitals like the University of California at San Francisco. For its part, the University of Pittsburgh Medical Center explicitly specifies the gamma knife for treatment-resistant epilepsy. An active programme of use is also announced by St. Louis Children’s Hospital, for ‘certain epileptogenic lesions,’ corpus callosotomies, as well as hypothalamic hamartomas – a benign, tumour-like malformation that causes a syndrome characterized by treatment-resistant epilepsy.
Some smaller centres in the US are also describing the Gamma Knife as ‘giving patients with epilepsy another option for treatment.’

Europe seemingly lags US
Although France pioneered studies into the use of the gamma knife in epilepsy, interest in Europe still lags that shown in the US. One reason may be that other efforts in Europe have evidently been unsuccessful. For example, a four-year study in the late 1990s in the Czech Republic on using the gamma knife in epileptic patients concluded: ‘Radiosurgery with 25, 20, or 18-Gy marginal dose levels did not lead to seizure control in our patient series, although subsequent epilepsy surgery could stop seizures.’ On the other hand, higher doses were associated with the risk of brain edema, intracranial hypertension, and a temporary increase in seizure frequency.

The ROSE study
Both in the US and in Europe, the outlook on using the gamma knife in MTLE is clearly one of cautious optimism.
Trials conducted to date seem to show mixed results, or have yet to give researchers enough conviction.
For the moment, attention remains focused on an ongoing multicentre trial called ROSE (Radiosurgery or Open Surgery for Epilepsy). The randomized, double-blind trial is funded by the US National Institutes of Health, and is being conducted at 13 centres in the US and at the prestigious All India Institute of Medical Sciences in New Delhi.

The trial takes up the hypothesis ‘that radiosurgery is as safe and effective as temporal lobectomy in treating patients with seizures arising from the medial temporal lobe.’ It randomizes patients to either technique and is due to compare seizure remission, cognitive outcomes, and cost. The trial will not only measure outcomes (determined during the final year of a three-year follow-up period); it will also track interim measures of patient safety and quality of life, and compare these between the two groups. The eventual aim is to guide physicians in directing patients to traditional or RS techniques matched to patient characteristics.

Mortara’s WiFi-based telemetry monitoring: one more success story

Mortara Instrument’s new family of Surveyor WiFi telemetry solutions is designed to offer diagnostic-quality ECG acquisition and to work on the existing WiFi network, with no need for a dedicated network infrastructure. These outstanding features were the key factors that led Policlinico San Donato (Milan, Italy), one of the top-ranking centers for the study and treatment of cardiovascular diseases, to select Mortara’s telemetry system.

Mortara designed the Surveyor S4 solution based on three main criteria: cost saving, coverage and clinical excellence.

Cost is a major priority for today’s healthcare professionals, and also one of Mortara’s top concerns. The Surveyor S4, thanks to its advanced design, can operate on the existing WiFi infrastructure to transmit physiological signals. It eliminates the cost of a proprietary antenna network, which is required by traditional telemetry systems. Removable, rechargeable batteries allow a lower ecological footprint than disposable batteries, while also reducing running costs.

Coverage (i.e. the areas where the patients can be monitored) is also revolutionized with the Surveyor S4; the use of WiFi technology allows patients to be monitored virtually wherever the WiFi signal is available throughout the facility. This means more freedom for the patient, but also extends patient monitoring to more departments; the ability to clinically monitor and evaluate patients is enhanced without additional beds being added to the traditional telemetry area.

Mortara takes pride in delivering clinical excellence. VERITAS™ is the suite of algorithms created by Mortara to analyze ECG signals. The Surveyor S4 family includes the latest algorithms, providing clinicians with highly reliable data. From basic to lethal arrhythmias, VERITAS is the ideal companion for clinicians. In addition, all Surveyor S4 mobile monitors offer diagnostic-quality acquisition; combined with the true 12-lead ECG amplifier, they offer best-in-class 12-lead ST segment analysis. True 12-lead ECG monitoring allows physicians to detect early ST segment changes and obtain a complete evaluation of the patient’s cardiac profile, without additional tests.

Founded in 1969, IRCCS Policlinico San Donato is part of an 18-hospital network that provides over 5,000 beds, and also hosts the School of Medicine of the University of Milan. The clinical arrhythmology and electrophysiology ward, run by Professor Carlo Pappone, is one of the international centers of excellence for the treatment of all types of cardiac arrhythmias.

Atrial fibrillation, Brugada syndrome, Wolff-Parkinson-White (WPW) syndrome, and cardiac electro-stimulation are among the main research fields. In particular, the research on, and treatment of, supraventricular arrhythmias is a primary focus and area of expertise for this group of clinicians, as attested by their number of publications in top-ranking international journals, and directly witnessed by the large population of patients who have already successfully undergone trans-catheter ablation procedures.

Given the outstanding reputation of his center, Professor Pappone has chosen Mortara as the best-in-class partner in order to deliver excellent diagnosis and treatment.

Policlinico San Donato is one of the many centers where Mortara monitoring solutions have been adopted and which, every day, help to improve healthcare throughout the world.

Mortara Instrument, Inc. Announces Expansion of ECG Warehouse Contract Award with the U.S. FDA

Mortara has been awarded a multi-year contract for ongoing maintenance and support of the FDA ECG Warehouse, including continuous ECG studies analyzed by VERITAS™.

Mortara collaborated with the FDA to develop the ECG Warehouse which was initially deployed in 2005. The ECG Warehouse acts as a repository for annotated electrocardiograph (‘ECG’) studies provided to the FDA in support of new drug applications. With the ECG Warehouse, the FDA uses Mortara’s VERITAS ECG algorithms and viewing technologies to review ECG data submitted as part of new drug applications.

Since inception of the ECG Warehouse, more than 9 million resting ECGs have been analyzed with Mortara’s VERITAS algorithms, making this one of the largest cloud-based clinical data repositories in the world. The ECG Warehouse has subsequently been expanded to also include continuous 12-lead recordings, which now number nearly 800 in total. The warehouse tools include web-based upload, navigation of continuous data, arrhythmia identification and waveform morphology comparison.

Under this expanded ECG Warehouse contract, Mortara will continue to support Sponsor and ECG Central Laboratory upload of ECG studies, provide support to FDA personnel, and provide ongoing development enhancements to the ECG Warehouse, including advances in the VERITAS ECG algorithms.

‘Mortara is pleased to continue its longstanding relationship with the FDA in providing the ECG Warehouse solution,’ said Dr. Justin Mortara, CEO of Mortara. ‘This award is testimony to our leadership role in ECG acquisition and algorithm technologies. We are honored to be chosen by the FDA and to play our part in the cardiac safety evaluation of new drugs.’

About Mortara
For over 30 years, Mortara Instrument, Inc. has served as a leading designer, developer, and manufacturer of diagnostic cardiology and, most recently, patient monitoring technologies. Mortara is focused on delivering world-class medical devices, as evidenced by its innovative portfolio of solutions designed to serve throughout the continuum of clinical care. The company’s comprehensive range of products spans modalities including resting ECG, cardiac stress exercise, Holter monitoring, cardiac and pulmonary rehabilitation, and ambulatory blood pressure and multi-parameter patient monitoring. Mortara’s global headquarters is located in Milwaukee, Wisconsin with direct operations in Australia, Germany, Italy, the Netherlands, and the United Kingdom. While Mortara distributes its products and technologies globally, it remains dedicated to manufacturing in the United States in order to consistently deliver the quality products for which it is known.
Mortara’s approach to innovation has a global reach that impacts both mature and emerging healthcare systems. To learn more about Mortara and its expanding product portfolio, including the Burdick and Quinton brands, visit www.mortara.com.

Breast cancer screening with tomosynthesis (3D mammography) with acquired or synthetic 2D mammography compared with 2D mammography alone (STORM-2): a population-based prospective study

Bernardi D. et al. The Lancet Oncology. 2016 Aug;17(8):1105-1113

Background
Breast tomosynthesis (pseudo-3D mammography) improves breast cancer detection when added to 2D mammography. In this study, we examined whether integrating 3D mammography with either standard 2D mammography acquisitions or with synthetic 2D images (reconstructed from 3D mammography) would detect more cases of breast cancer than 2D mammography alone, to potentially reduce the radiation burden from the combination of 2D plus 3D acquisitions.

Findings
Between May 31, 2013, and May 29, 2015, 10,255 women were invited to participate, of whom 9672 agreed and were screened. In these 9672 participants (median age 58 years [IQR 53-63]), screening detected 90 cases of breast cancer, including 74 invasive breast cancers, in 85 women (five women had bilateral breast cancer). To account for these bilateral cancers in cancer detection rate estimates, the number of screens used for analysis was 9677. Both 2D-3D mammography (cancer detection rate 8.5 per 1000 screens [82 cancers detected in 9677 screens]; 95% CI 6.7-10.5) and 2D synthetic-3D mammography (8.8 per 1000 [85 in 9677]; 7.0-10.8) had significantly higher rates of breast cancer detection than 2D mammography alone (6.3 per 1000 [61 in 9677], 4.8-8.1; p<0.0001 for both comparisons). The cancer detection rate did not differ significantly between 2D-3D mammography and 2D synthetic-3D mammography (p=0.58). Compared with 2D mammography alone, the incremental cancer detection rate from 2D-3D mammography was 2.2 per 1000 screens (95% CI 1.2-3.3) and that from 2D synthetic-3D mammography was 2.5 per 1000 (1.4-3.8). Compared with the proportion of false-positive recalls from 2D mammography alone (328 of 9587 participants not found to have cancer at assessment [3.42%; 95% CI 3.07-3.80]), false-positive recall was significantly higher for 2D-3D mammography (381 of 9587 [3.97%; 3.59-4.38], p=0.00063) and for 2D synthetic-3D mammography (427 of 9587 [4.45%; 4.05-4.89], p<0.0001).

Interpretation
Integration of 3D mammography (2D-3D or 2D synthetic-3D) detected more cases of breast cancer than 2D mammography alone, but increased the percentage of false-positive recalls in sequential screen-reading. These results should be considered in the context of the trade-off between benefits and harms inherent in population breast cancer screening, including that significantly increased breast cancer detection from integrating 3D mammography into screening has the potential to augment screening benefit and also possibly contribute to overdiagnosis.
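The headline rates follow directly from the counts reported in the findings; the sketch below re-derives them as a simple sanity check (illustrative arithmetic, not the trial’s analysis code).

```python
# Re-deriving the STORM-2 cancer detection rates from the reported counts.
SCREENS = 9677  # screens used for analysis (bilateral cancers counted twice)

def rate_per_1000(cancers: int) -> float:
    return 1000 * cancers / SCREENS

r_2d      = rate_per_1000(61)  # 2D mammography alone
r_2d3d    = rate_per_1000(82)  # 2D-3D mammography
r_2dsyn3d = rate_per_1000(85)  # 2D synthetic-3D mammography

print(f"2D alone:        {r_2d:.1f} per 1000")       # ~6.3
print(f"2D-3D:           {r_2d3d:.1f} per 1000")     # ~8.5
print(f"2D synthetic-3D: {r_2dsyn3d:.1f} per 1000")  # ~8.8
print(f"Incremental, 2D-3D:           {r_2d3d - r_2d:.1f} per 1000")     # ~2.2
print(f"Incremental, 2D synthetic-3D: {r_2dsyn3d - r_2d:.1f} per 1000")  # ~2.5
```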

Should TAVI be extended to lower risk patients?

The relatively new procedure for aortic valve replacement, Transcatheter Aortic Valve Implantation (TAVI), first performed in 2002, is considered an appropriate approach when conventional surgical aortic valve replacement (SAVR) for severe aortic stenosis is contraindicated because patients have left ventricular dysfunction or are very elderly with comorbidities. During the procedure, a catheter with a balloon at its tip, loaded with a new tissue valve, is inserted into a femoral artery and passed to the opening of the aortic valve, where inflation of the balloon allows the new valve to be positioned and expanded prior to the removal of the catheter and deflated balloon. Trials with two-year follow-up comparing TAVI with conservative treatment in high-risk, inoperable patients all show that the procedure is associated with longer survival. However, recent results also suggest that TAVI may be superior to SAVR in intermediate-risk patients. So should TAVI be extended to intermediate- and even low-risk, younger patients, or is this inadvisable?
Earlier data showed that significantly more patients suffered strokes after TAVI than after SAVR, as the former procedure tended to produce debris from the degenerated aortic valve and aorta. Paravalvular leaks have also been reported more frequently after TAVI, impacting on patient survival. There is also a reported higher incidence of conduction abnormalities after the procedure, often occurring because the new valve is implanted too deep; in such cases it becomes necessary to implant a pacemaker. Less common complications have included arterial dissection and perforation, myocardial ischemia and cardiogenic shock.

However, during the decade since TAVI became the standard of care for inoperable patients with severe aortic stenosis, three major factors have contributed to a substantially lowered risk of complications following the procedure. Firstly, preoperative assessment has benefitted from the many recent advances in cardiac diagnostic imaging. Secondly, both valve delivery systems and valves have evolved, with better-controlled positioning of more compact, newer-generation valves, preceded by pre-implantation site preparation, all allowing superior annular sealing and appropriate valve expansion without significant tissue trauma. Last but not least, surgical teams have now acquired a wealth of experience in performing the procedure. The results of randomized trials could well demonstrate that TAVI has become a prudent therapy choice even for younger patients with a low perioperative risk.

Cardiac imaging – strengthening case for real-time MR

4D cardiac imaging, which generates a three-dimensional motion picture of a beating heart, offers cardiologists a revolutionary new tool. Indeed, the ability to acquire images across all phases of a heartbeat cycle is the only way to meaningfully visualize morphological anomalies and make an authentic assessment of cardiac function.
Traditionally, ultrasound has been the preferred modality for 4D cardiac imaging. However, 4D cardiac MRI (known formally as cardiovascular MRI) has been gaining ground. Coupled with MRA (magnetic resonance angiography), it enables cardiologists to view images of the heart, major blood vessels and blood flow.

The novelty of 4D
4D cardiac imaging is a recent technique. Its novelty is best illustrated by an editorial in the Journal of the American College of Cardiology. The editorial, published as recently as 2009, noted the role of ‘2- and 3-dimensional coronary mapping’ in high-resolution digital imaging.
Major imaging vendors now offer real-time 3D/4D imaging products across all modalities: PET/CT, MRI and ultrasound. However, the bulk of 4D applications so far have involved ultrasound, especially for cardiac imaging. This may be changing, with increased attention, above all, to MRI.

Ultrasound’s longer legacy
One reason for ultrasound’s pole position in 4D is its longer legacy. In the early 1980s, researchers from Duke University in the US reported that, though MRI was faster, ultrasound offered the closest achievement of ‘3D real-time acquisition,’ or what is now called 4D.
Technical standardization bodies also moved quickly to endorse and drive the take-up of 4D ultrasound. In 2008, the DICOM (Digital Imaging and Communications in Medicine) initiative approved Supplement 43, which addressed the exchange of real-time 3D ultrasound datasets between different vendors. In 2011, IHE (Integrating the Healthcare Enterprise) published a White Paper on 3D/4D imaging workflow.

Early adoption of 4D ultrasound by cardiologists
For their part, cardiologists were enthusiastic early adopters of 3D (and later 4D) ultrasound. The IHE White Paper mentioned above was written by its Cardiology Technical Committee. Another factor strongly favouring ultrasound was mobility, since small ultrasound devices could be transported to the patient.
During this period, competing imaging modalities seemed to stand little chance as far as cardiology was concerned.
Computerized tomography (CT) was dismissed since it required cardiologists to use complex post-processing techniques in order to visualize the beating heart. Cardiac magnetic resonance imaging (MRI) was considered relatively expensive, with limited availability, and requiring specialized training.

GE’s cSound: industry seizes the ultrasound opportunity
Industry was quick to seize the ultrasound opportunity. In 2015, healthcare technology giant GE released new software for its ultrasound machines called cSound. cSound-equipped machines intelligently process the data returned by an ultrasound signal, analysing almost 5 gigabytes of data every second and filtering it on a pixel-by-pixel basis via algorithms which produce real-time 4D views. This allows cardiologists to observe how blood swirls around clots in arteries, measure blood leakage around the valves and assess damage. cSound consolidated GE’s presence at the cutting edge of ultrasound, building on a technique patented by the company in the early 2000s and known as Spatio-Temporal Image Correlation (STIC). STIC allowed for the quick capture of a full fetal heart cycle beating in real time.

4D PET/CT and MRI turn to diagnostic oncology
Proponents of 4D PET (positron emission tomography)/CT and MRI were, however, not sitting idly by. Rather than cardiology, they turned their attention to other specialities, above all oncology, where 4D offered huge potential in diagnostics.
4D PET, for example, seemed unmatched in characterizing solitary pulmonary nodules, while 4D CT offered a revolutionary approach in oncology – for example, in gating tumours and determining treatment margins. For its part, 4D MRI demonstrated superiority over CT in soft-tissue imaging and in cases where radiation exposure was a concern.

From 4D to 5D imaging
As of now, the focus in diagnostics is to combine anatomical with functional or molecular imaging, in order to make precise assessments of biological and metabolic pathways. Key modalities include PET with radio-labelled tracers for molecular imaging, and MRI using molecular markers for functional imaging. The molecular/functional enhancement is often referred to as 5D and, to its proponents, offers hope of increasing the specificity and sensitivity of diagnostics.
At some stage in the future, it is inevitable that cardiologists will see the virtues of 5D imaging for diagnostics.

The challenge from multi-detector CT scanners
Meanwhile, cardiac ultrasound faces competition in certain applications from other imaging modalities.
In recent years, multi-detector CT scanners seem to offer considerable promise, particularly for non-invasive detection of coronary artery disease and higher flexibility for analysis and visualization of individual vessels. These images, nevertheless, continue to require special processing and rendering tools for assessment of segmental narrowing or occlusions.

The growing promise of 4D cardiac MRI
Rather than CT, however, it is 4D cardiac (or cardiovascular) MRI that seems to have rapidly become the principal challenger to ultrasound.
Cardiac MRI scanners do not use ‘open’ magnets, which face serious limitations in the case of moving objects – such as a beating heart. The magnet strengths most widely used for cardiac MRI are 1.5T and 3T – although the latter, in some conditions, requires software to cancel artifacts. Higher-strength magnets are, however, the technology of choice for studying conditions such as aortic constriction.
Key advantages of cardiac MRI over CT are its lack of ionizing radiation, its high spatial resolution, and its ability to provide a functional cardiac assessment in one scan.

The technique of 4D cardiac MRI is closely based on traditional MRI. However, it is optimized for use in the cardiovascular system in real time, principally via ECG gating and rapid imaging sequences. This results in the acquisition of images at each stage of a sequence of cardiac cycles, and in a functional assessment of the heart. Blood in such sequences (technically known as balanced steady-state free precession, or bSSFP) appears bright, with the result that 4D cardiac MRI makes it relatively easy to discriminate between the myocardium and blood.
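To make the gating idea concrete, the sketch below shows one common approach, retrospective binning: each acquisition timestamp is assigned to a phase of the cardiac cycle using the surrounding ECG R-peaks. The names and fixed-bin scheme are illustrative; clinical reconstruction pipelines are considerably more involved.

```python
# A minimal sketch of retrospective ECG gating for 4D cardiac MRI.
from bisect import bisect_right

def cardiac_phase(t: float, r_peaks: list, n_phases: int = 20) -> int:
    """Assign acquisition time t (seconds) to one of n_phases cycle bins,
    given the sorted times of ECG R-wave peaks."""
    i = bisect_right(r_peaks, t) - 1
    if i < 0 or i + 1 >= len(r_peaks):
        raise ValueError("timestamp falls outside the recorded R-R intervals")
    rr = r_peaks[i + 1] - r_peaks[i]  # length of this cardiac cycle
    frac = (t - r_peaks[i]) / rr      # fraction of the cycle elapsed
    return min(int(frac * n_phases), n_phases - 1)

# Example: R-peaks ~0.9 s apart; a sample at t = 1.35 s lands mid-cycle.
print(cardiac_phase(1.35, [0.0, 0.9, 1.8]))  # -> phase 10 of 20
```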

With and without contrast agents
Cardiac MRI typically uses several approaches to make a comprehensive assessment of the heart and cardiovascular system. Some of the most promising applications include the ability to visualize heart muscle fat or scar in high resolution without the need for a contrast agent. This is based on a technique called ‘spin echo’, which shows blood as black and identifies myocardial abnormalities through differences in intrinsic contrast.

On the other hand, contrast agents like gadolinium-DTPA can be used for applications such as infarct imaging, where healthy heart muscle appears dark and infarction areas show in bright white. Contrast agents in cardiac MRI have also proven their worth in the assessment of coronary artery narrowing, which starves the heart muscle of oxygen. The contrast agent reveals any transient perfusion defects from artery constriction. Knowing about the presence of such a defect assists in guiding interventional procedures.

Image quality, superior access to anatomical structures
Cardiac MRI provides images of superior quality, accuracy and versatility, alongside access to anatomical structures that is difficult to achieve with ultrasound. Examples include congenital heart anomalies as well as anatomical changes after surgical interventions.
The latest generation of MRI scanners allow for acquiring high-resolution isotropic data with detailed anatomical information and identical resolution in all three dimensions. Frontier areas of research for 4D MRI include qualitative and quantitative flow pattern analysis in mice with aortic constriction.

Detecting hemodynamic alterations with 4D MRI
At present, one of the most promising cardiac applications for 4D MRI consists of the detection of haemodynamic alterations. The incorporation of pharmacological stress procedures allows for enhanced detection of alterations in heart function during stress-induced ischemia.
In April 2014, a team at Northwestern University reported that 4D flow MRI could help clinicians better understand altered hemodynamics in patients with cardiovascular diseases and improve patient management and the monitoring of therapeutic response. Their study, published in Cardiovascular Diagnosis and Therapy, noted that these hemodynamic insights could also lead to new risk-stratification metrics and influence individualized treatment decisions in order to optimize patient outcomes.

Diagnostics and prognosis of heart events
Cardiac MRI is also being seen as a diagnostic tool to predict heart events. In May 2016, a study led by John P. Greenwood from the University of Leeds in Britain noted that it was ‘a better prognosticator of risk for serious cardiovascular events than SPECT, regardless of a person’s risk factors, angiography results, or initial treatment,’ and that it would be a powerful tool for ‘the diagnosis and management of patients with suspected coronary heart disease.’ The serious events, assessed over a five-year period, included death, myocardial infarction/acute coronary syndrome, unscheduled coronary revascularization, or hospitalization for stroke, transient ischemic attack, heart failure, or arrhythmia.
The study, formally known as the Clinical Evaluation of Magnetic Resonance Imaging in Coronary Heart Disease (CE-MARC), was based on a multi-parametric cardiovascular MRI protocol performed on a 1.5T scanner, and was published in the Annals of Internal Medicine. It was billed as ‘the largest prospective comparison of cardiovascular MRI and nuclear myocardial perfusion imaging (MPI) with SPECT’, with X-ray angiography used as the reference standard.

Genotoxicity prompts calls for caution
There have, nevertheless, been some calls for caution due to possible genotoxic effects of cardiac MRI scanning.
In October 2011, a study by researchers at Seoul National University in South Korea assessed high-field 3T clinical MRI scans in cultured human lymphocytes in vitro and ‘observed a significant increase in the frequency of single-strand DNA breaks following exposure to a 3T MRI.’
In June 2013, another study on cardiac MRI, in the European Heart Journal, reported similar conclusions, this time in vivo. The study, by researchers from University Hospital Zurich, prospectively enrolled 20 patients and found a ‘significant increase in median numbers of DNA DSBs in lymphocytes induced by routine 1.5T’ MR scanners. The study also made a recommendation, urging that cardiac MRI ‘be used with caution and that similar restrictions may apply as for X-ray-based and nuclear imaging techniques in order to avoid unnecessary damage of DNA integrity with potential carcinogenic effect.’

Finns call for further studies
Nevertheless, there has so far been no study of the genotoxic effects of MRI compared with those of CT scans. In addition, cardiac MRI risk research has been based entirely on cell-level experiments, with no conclusive and definitive evidence of actual cancer risk. This is in direct contrast to the established link between ionizing radiation and cancer risk.
MRI is therefore still considered by its proponents as the safest alternative.
Indeed, weeks after the University Hospital Zurich study, Finnish researchers published a riposte, again in the ‘European Heart Journal’, arguing that the ‘cellular mechanism’ of how cardiac MRI induced DNA damage was unknown ‘and may be different from that of radiation.’ They concluded that it was ‘obvious that further larger studies are warranted before any restrictions’ were imposed on the use of cardiac MRI.

Implantable cardioverter defibrillators – driven by MR compatibility, subcutaneous devices

In spite of a relatively short history, the use of implantable cardioverter defibrillators (ICDs) has been growing by leaps and bounds. For clinicians, an ICD offers a direct means to avoid sudden cardiac death. Other reasons for the popularity of ICDs include advances in technology, above all miniaturization. More recently, new implantation methodologies such as the subcutaneous ICD promise a further boost to their use. The working of an ICD is also easy to explain to patients. There is, nevertheless, one major challenge which ICDs still have to address: limited battery life.

Primary and secondary prevention
The principle behind an ICD is relatively straightforward, and covers two broad types of prevention: primary and secondary.
Primary prevention, which accounts for the bulk of ICD implants, refers to patients who have not yet suffered life-threatening arrhythmia.
Secondary prevention concerns survivors of cardiac arrest secondary to ventricular fibrillation or sustained ventricular tachycardia (together known as tachyarrhythmias). Although the user group is smaller, secondary prevention makes the strongest case for an ICD.

Differentiating ventricular tachycardia and ventricular fibrillation
After implantation, the ICD continuously monitors cardiac rhythm and detects abnormalities. ICDs are programmed to recognize and differentiate between ventricular tachycardia (VT) and ventricular fibrillation (VF). They then deliver therapy in the form of a low- or high-energy electric shock, or programmable overdrive pacing, to restore sinus rhythm – in the case of VT, to break the tachycardia before it progresses to fibrillation. Overdrive or anti-tachycardia pacing (ATP) is effective only against VT, not VF.
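To illustrate the tiered logic in the simplest possible terms, the sketch below expresses rate-zone therapy selection in Python. It is purely illustrative and not any manufacturer’s algorithm: real devices combine rate with onset, stability and morphology discriminators, and the zone thresholds and names used here are hypothetical.

```python
# Purely illustrative sketch of ICD rate-zone therapy selection.
# Real devices use several discriminators (onset, stability, morphology);
# the thresholds below are hypothetical, not a clinical specification.

from dataclasses import dataclass

@dataclass
class RateZones:
    vt_bpm: int = 150   # hypothetical VT detection threshold
    vf_bpm: int = 200   # hypothetical VF detection threshold

def select_therapy(ventricular_rate_bpm: int,
                   zones: RateZones = RateZones()) -> str:
    """Map a detected ventricular rate to a therapy tier."""
    if ventricular_rate_bpm >= zones.vf_bpm:
        # VF zone: ATP is ineffective against fibrillation,
        # so a high-energy defibrillation shock is delivered.
        return "high-energy defibrillation shock"
    if ventricular_rate_bpm >= zones.vt_bpm:
        # VT zone: painless overdrive pacing is attempted first,
        # escalating to cardioversion if the tachycardia persists.
        return "anti-tachycardia pacing, escalate to shock if VT persists"
    return "monitor only"

if __name__ == "__main__":
    for rate in (75, 170, 220):
        print(rate, "bpm ->", select_therapy(rate))
```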

Defibrillation now almost 70 years old

The first defibrillation of a human heart dates to 1947, when Claude Beck, an American surgeon at Western Reserve University in Ohio, sought to revive a 14-year-old boy whose pulse had stopped during wound closure following cardiothoracic surgery. Cardiac massage was attempted for 45 minutes but failed to restart the heart. Ventricular fibrillation was confirmed by ECG. Beck saw no other choice but to deliver a single electric shock. This did not work. However, along with intracardiac administration of procaine hydrochloride, a second shock restored sinus rhythm. Beck’s success led to worldwide acceptance of defibrillation. However, his alternating current (AC) device (subsequently commercialised by the Rand Development Corporation) was capable of defibrillating only exposed hearts.

Merging defibrillation and cardioversion
For its part, the pioneering of cardioversion (and the coining of the term) is credited to Bernard Lown, a physician at the Peter Bent Brigham Hospital in Boston, who merged defibrillation and cardioversion and coupled both to portability. In 1959, he successfully applied transthoracic AC shock via a defibrillator to a patient with recurrent bouts of ventricular tachycardia (VT), who had failed to respond to intravenous procainamide. This was the first termination of an arrhythmia other than VF.
Two years later, Lown joined a young electrical engineer called Barouh Berkovits, who had been researching a relatively safer direct current (DC) defibrillator – based on earlier work in the Soviet Union and Czechoslovakia.
Together, Lown and Berkovits pioneered the concept of synchronizing delivery of an electric shock with the QRS complex sensed by ECG, and a monophasic waveform for shock delivery during a rhythm other than VF. Their work led to the launch of the first DC cardioverter-defibrillator for use in patients.
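Conceptually, synchronization means timing the shock to a sensed R-wave so it never lands on the vulnerable T-wave period. The toy Python sketch below illustrates the idea only; the threshold and signal values are hypothetical, and real devices use far more robust QRS detection.

```python
# Toy sketch of QRS-synchronized shock timing: fire the cardioversion
# pulse on a sensed R-wave, never blindly. Threshold is hypothetical.

from typing import List, Optional

def r_wave_indices(ecg: List[float], threshold: float = 0.8) -> List[int]:
    """Return sample indices where the signal crosses the R-wave
    threshold upwards (a crude stand-in for QRS detection)."""
    return [i for i in range(1, len(ecg))
            if ecg[i - 1] < threshold <= ecg[i]]

def schedule_shock(ecg: List[float]) -> Optional[int]:
    """Return the sample index at which a synchronized shock would be
    delivered, or None if no R-wave marker was sensed."""
    peaks = r_wave_indices(ecg)
    return peaks[0] if peaks else None

if __name__ == "__main__":
    sample_ecg = [0.1, 0.2, 0.9, 0.3, 0.1, 0.85, 0.2]  # hypothetical trace
    print("Shock synchronized to sample:", schedule_shock(sample_ecg))
```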

The implantable ICD device: parallel pathways
The Lown-Berkovits effort was confined to external devices. The concept of an implantable, automated cardiac defibrillator dates to work by Michel Mirowski at Israel’s Tel Hashomer Hospital in the mid-1960s. Mirowski moved to the US in 1968, where he joined forces with Morton Mower, a cardiologist at Sinai Hospital in Baltimore. The two tested a prototype automated defibrillator on dogs.
As often happens in science, another researcher had been approaching the challenge on a parallel path. In 1970, Dr. John Schuder from the University of Missouri successfully tested an implanted cardiac defibrillator, again in a dog. Schuder also developed the low-energy, high-voltage biphasic waveforms which paved the way for current ICD therapy.
The first human ICD, however, was credited to Mirowski and Mower, along with Dr. Stephen Heilman, owner of a medical technology business called Medrad. In 1980, a defibrillator based on their design was implanted in a patient at Johns Hopkins University, followed shortly afterwards by a model incorporating a cardioverter. The ICD obtained approval from the US Food and Drug Administration (FDA) in 1985.

From thoracotomy to transvenous implantation
The first generation of ICDs was implanted via thoracotomy, using defibrillator patches applied to the pericardium or epicardium and connected by transvenous and subcutaneous leads to the device, which was contained in a pocket in the abdominal wall.
ICDs have since become smaller and lighter (thicknesses below 13 mm and weights of 70-75 grams). They are typically implanted transvenously with the device placed, like a pacemaker, in the left pectoral region. Defibrillation is achieved via intravascular coil or spring electrodes.

ICDs versus pharmacotherapy
Over the past two decades, clinical trials have demonstrated the benefits of ICDs compared to antiarrhythmic drugs (AADs). Three randomized trials, known as AVID (Antiarrhythmics Versus Implantable Defibrillators), the Canadian Implantable Defibrillator Study (CIDS) and the Cardiac Arrest Study Hamburg (CASH), were initiated between the late 1980s and early 1990s in the US, Canada and Europe, respectively.
In 2000, a meta-analysis of the three studies was published in the ‘European Heart Journal’. This found that ICDs reduced the relative risk of recurrent sudden cardiac death by 50% and death from any cause by 28%.

Use after myocardial infarction, quality of life issues
Follow-on initiatives looked at other issues. The Multicenter Automatic Defibrillator Implantation Trial (MADIT) found that ICDs benefited patients with reduced left ventricular function after myocardial infarction (MI). In 2005, the Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT) established that ICDs reduced the risk of all-cause death in heart failure patients by 23% compared to placebo, and absolute mortality by 7.2% after five years.
Quality-of-life (QoL) issues have also assisted acceptance of ICDs. In 2009, psychologists and cardiologists at universities in North Carolina and Florida concluded that QoL in ICD patients was at least equal to, or better than, that of AAD users.

Guidelines on ICD use – differences between US and Europe
Professional bodies have established guidelines on the use of ICDs and routinely provide updates. In the US, these originate from the American College of Cardiology, American Heart Association and the Heart Failure Society of America, and in Europe from the European Society of Cardiology.
Although there are many areas of agreement, some differences exist between the US and European guidelines. One difference is that the US guideline recommends cardiac resynchronization therapy (CRT) in New York Heart Association (NYHA) class I patients who have LVEF ≤30%, have ischemic heart disease, are in sinus rhythm, and have a left bundle branch block (LBBB) with a QRS duration ≥150 ms. There is no similar recommendation in the European Society of Cardiology document.

The European Society of Cardiology recommendations include patients with QRS duration <120 ms. The US does not recommend CRT for any functional class or ejection fraction with QRS durations <120 ms.

ICD and magnetic resonance
The biggest driver of ICD use in recent years, however, may be compatibility with magnetic resonance (MR) imaging. Like other implanted metallic devices, ICDs have traditionally been contraindicated for MR exams. This has begun to change, after the first MR-compatible ICD (Medtronic’s Evera MRI SureScan) received FDA approval in September 2015.
The relevance of MR was researched in depth by a team at Pittsburgh’s Allegheny General Hospital, led by Dr. Robert Biederman, medical director of its Cardiovascular MRI Center. The study covered patients with implantable cardiac devices in three case groups: cardiovascular, musculoskeletal and neurology.
The findings were conclusive. In 92-100% of cardiac and musculoskeletal cases, and 88% of neurology cases, the MR exam provided value for the final diagnosis. In 18% of neurology cases, the MR exam altered the diagnosis entirely. In the bulk of cases, said Dr. Biederman, the information could not have been obtained with cardiac catheterization, echocardiography or nuclear imaging. In addition, patients were spared a biopsy of the heart muscle, with all its attendant risks.

The launch of leadless, subcutaneous ICDs
Meanwhile, other factors too are driving the development of ICDs. One of the biggest shortcomings of conventional ICDs is the need to run an electric lead through blood vessels to the heart; these leads are susceptible to breakage.
In 2012, Boston Scientific received FDA approval for the world’s first leadless, subcutaneous ICD (S-ICD). Rather than transvenous leads, the device uses a pulse generator implanted under the skin below the left arm, with a shocking electrode tunnelled subcutaneously along the breastbone. A second-generation S-ICD system, branded Emblem, was approved in 2015.
Nevertheless, S-ICDs have drawbacks. Lacking a lead in direct contact with the heart, they cannot pace patients out of dangerous rhythms. Nor are they MR-compatible.

The challenge of battery life

Many experts believe that the principal challenge facing ICDs is battery life. According to the Mayo Clinic, batteries in an ICD ‘can last up to seven years.’ It recommends monitoring battery status every 3-6 months during routine checkups, and states that when the battery is ‘nearly out of power,’ the old shock generator needs to be ‘replaced with a new one during a minor outpatient procedure.’
Nevertheless, the risks of such replacement procedures have recently attracted attention. In 2014, a research team led by Daniel B. Kramer of Harvard Medical School studied 111,826 patients in the US National Cardiovascular Data Registry (NCDR) who had end-of-battery-life ICD generator replacements. They found that more than 40% of patients died within five years of generator replacement, and almost 10% within a year. The authors, however, emphasized that atrial fibrillation, heart failure and left ventricular ejection fraction were independently associated with poorer survival, as were noncardiac co-morbidities (chronic lung disease, cerebrovascular disease, diabetes and kidney conditions). What was needed, they concluded, was a non-ICD control group.
A recent article in the ‘British Medical Journal’ (BMJ) suggests that battery life needs to be extended to 25 years or more to avoid the risks associated with replacement. The author, Dr. John Dean, a cardiologist at the Royal Devon and Exeter Hospital in the UK, points out that battery replacement also carries an infection risk of 1-5% for patients.

The future: patient needs and superior waveforms
Ultimately, it is patient needs which will drive the next wave of ICD development. While the medical devices industry has focused on device miniaturization, longer battery life is also clearly a priority. Indeed, a 2004 study in ‘Pacing and Clinical Electrophysiology’ found 90% of ICD patients saying they would trade smaller devices for longer-lasting models.
ICD manufacturers are also looking at more sophisticated cardioversion/defibrillation waveforms to reduce the defibrillation threshold, and thereby reduce pain and discomfort.

Hospital security in the 21st century – from cybertheft to bio-terror

Hospitals straddle a unique crossroads in terms of cybersecurity, crime and, potentially, terror. In spite of a rapid shift to computerized prescriptions and electronic records, the hospital business is inherently complex, marked by privacy constraints as well as legacy IT infrastructure. In an era of cost cuts, hospital managers have also been tempted more by imaging scanners and surgical robots than by (invisible) firewalls and encryption systems.

by Ashutosh Sheshabalaya and Antonio Bras Monteiro

UCLA 2014: six years after Britney Spears, access still unhindered
As recently as 2014, after a massive hack, one of the world’s most prestigious hospitals, at the University of California Los Angeles (UCLA), acknowledged that its patient data was not encrypted.
At stake was data on 4.5 million patients, some dating to 1990. Six years previously, UCLA had paid out $865,000 (€778,000) after an employee stole medical data on celebrities, including singer Britney Spears and actress Farrah Fawcett, and put it up for sale.

Situation challenging in both US and Europe
Many hospitals are accepting that they have a serious cybersecurity problem on their hands. This follows mounting public concern – especially in the US – about the growth in hospital data theft.
Although American politicians have called for emulating some of Europe’s medical data security practices, the European situation hardly justifies complacence, as we shall see.

Data on 80 million patients hacked, 9.3 million offered for sale
In broad terms, healthcare lags other economic sectors in information security. In the US, healthcare accounted for three of the top seven security breaches in 2015. During the year, just one hacking incident, at insurer Anthem Inc., potentially compromised medical data on 80 million Americans.
The situation has since worsened. In June 2016, Baltimore-based privacy monitor Protenus reported a staggering 11 million patient records stolen in 29 incidents (24 at hospitals).
During the month, one hacker made two back-to-back online sale offers – for 655,000 medical records, followed a few weeks later by 9.3 million records. The numbers are of course impressive. However, as the hacker underlined to DarkNet news aggregator DeepDotWeb, this was only a start. ‘A lot more,’ he said, was still ‘to come.’

Identity theft – from drugs, explosives and insurance claims to duplicated you-and-me
One of the biggest risks is identity theft. Data on patients, including names, birth dates, social security and insurance policy numbers, diagnostic, treatment and credit card information, can be misused in several ways. Criminals also have an easy choice. If a target refuses to pay ransom, hackers can still sell the data.
Stolen IDs are used to buy drugs and equipment for resale, or to make insurance claims. Certain prescription medicines can be converted into synthetic addictive drugs, or especially potent explosives.
A basic identity kit sells for $1,500 (€1,350), though certain medical data can raise the price dramatically. This compares to the couple of dollars sought for basic credit card information. Identity kit data can be used to professionally forge follow-on credentials such as new credit cards and lines of credit, insurance and social security subscriptions, driving licenses, marriage certificates (for illegal immigrants) and passports.
Unlike credit card fraud, medical identity theft is seldom noticed quickly, giving criminals the luxury of time. Personal medical information, moreover, can be exploited for follow-up blackmail and other kinds of demands.

A fast-growing and expensive problem
A February 2015 study by the Ponemon Institute, a think-tank on data protection, shows US medical identity theft rising at about 20% annually since 2012. An estimated 2.3 million adults were affected by medical identity theft in 2014, up from 1.4 million in 2009.
The cost to patients is substantial. Ponemon found medical identity theft costing an average of $13,500 (€12,150) in out-of-pocket legal expenses and financial losses.

Endangering patients
Beyond costs lie other dangers, often exacerbated by delays in hospitals informing patients about medical data theft. As we shall see, such lapses are hardly rare, and victims can end up with a thief’s health data incorporated into their own records. A record may show a diabetic as diabetes-free, while misinformation about allergies or blood type can be fatal.
Reversing this is not always straightforward.
In summer 2015, the ‘Wall Street Journal’ reported an identity theft at Centerpoint Medical in Independence, Missouri, which led to erroneous billing for a non-existent injury. Although the error was pointed out to the hospital in January 2014, the hospital and a collections agency remained in hot pursuit of payments – and interest – until the year end.
The intervention by the influential US newspaper led to Centerpoint dropping the bills and charges. However, when the (real) patient’s record was found to contain wrong information about an allergy, a review was not permitted, in order to protect the thief’s health information – covered by the privacy provisions of HIPAA (Health Insurance Portability and Accountability Act).

USBs, laptops – physical theft remains a major problem
In spite of such growing threat awareness, the risk management spectrum remains immature. Most hospitals lack protocols to prevent data transfer to small, high-capacity USB sticks and CD-ROMs, or to control access from laptops. Indeed, Department of Health and Human Services (HHS) data show that over 40% of US medical data breaches involve portable media devices.
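By way of illustration, the sketch below shows one simple technical control of this kind: write-protecting USB mass storage on a Windows workstation via the documented StorageDevicePolicies registry key. It is a minimal sketch only; a real deployment would push such settings centrally via Group Policy or dedicated device-control software.

```python
# Minimal sketch: make USB mass-storage devices read-only on a
# Windows workstation via the StorageDevicePolicies registry key.
# Requires administrator rights; shown for illustration only.

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\StorageDevicePolicies"

# Create (or open) the policy key under HKEY_LOCAL_MACHINE.
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)

# WriteProtect = 1 leaves USB sticks readable but blocks copying
# files onto them; set it to 0 to restore write access.
winreg.SetValueEx(key, "WriteProtect", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
```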
The scale of the problem is illustrated by Chicago’s Advocate Medical Group, where a laptop theft from an ‘unmonitored’ room in 2013 led to the loss of data, including social security numbers, on 4 million people. Advocate Medical took one month to notify patients, although many faced a clear risk of identity theft.

No encryption, not even passwords
One year previously, Howard University Hospital had notified 35,000 patients that their medical data had been compromised, after a contractor at the hospital downloaded files onto a personal laptop, which was then stolen. The data included names, addresses, Social Security numbers and medical information. It was password-protected but unencrypted.
Several non-technical hospital staff, unfortunately, remain unaware of this crucial difference: a password can simply be bypassed by reading the disk directly, whereas encrypted data is unreadable without the key.
For example, at the end of 2013, Kaiser Permanente’s Anaheim Medical Center reported a breach of 49,000 records from an unencrypted, missing USB drive. A similar situation occurred in May 2016, when 29,000 emergency room patient records were compromised at Indiana University Health’s Arnett Hospital after being ‘accidentally’ downloaded to a USB drive. This time the data was neither encrypted nor password-protected.
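To make the difference concrete, the sketch below encrypts a data export before it is copied to portable media, using the open-source Python ‘cryptography’ package. This is an illustration only, not a tool named in any of the incidents above, and the file names are hypothetical.

```python
# Minimal sketch of encrypting a file at rest with the third-party
# 'cryptography' package (pip install cryptography). Unlike a mere
# password, a stolen drive then yields only unreadable ciphertext.

from cryptography.fernet import Fernet

# Generate the key once and keep it off the portable media,
# e.g. in the hospital's key-management system.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("patient_export.csv", "rb") as f:   # hypothetical file name
    plaintext = f.read()

# Fernet provides authenticated symmetric encryption
# (AES-128-CBC plus an HMAC integrity check under the hood).
with open("patient_export.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# An authorized machine holding the key can restore the data:
# Fernet(key).decrypt(open("patient_export.csv.enc", "rb").read())
```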

Europe has similar problems as US
The situation in Europe, too, is hardly encouraging. As far back as 2007, Britain’s Nottingham University Hospitals Trust faced the theft of a USB stick containing patient data from a doctor. The theft came to light after a whistle-blower wrote to the ‘British Medical Journal’, noting that it was common for doctors to carry patient data around on USB sticks to permit patient hand-overs. Although the Trust’s policy required confidential data on USB sticks to be protected with 128-bit encryption and used solely on hospital computers, only the naive (continue to) believe that enforcing such a policy is possible.
One year later, a manager at Colchester Hospital in Essex was sacked after his laptop containing medical data was stolen by thieves who broke into his car while he holidayed in Edinburgh. At the time, the hospital’s CEO said the sacking was a clear endorsement of ‘how seriously’ he took ‘security and patient confidentiality.’ However, there was no explanation of why private medical data was present, in unencrypted form at that, on the laptop of a holidaying executive, when it could well have been accessed via a secure online network.

Theft of laptop with 8.3 million (unencrypted) UK records
The quantity of physical data theft from UK hospitals also continues to grow, even as security practices remain stuck. In 2011, an (unencrypted) laptop was stolen from an (unlocked) office in the headquarters of Central London NHS (National Health Service). The laptop contained hospital records of 8.3 million identifiable patients.
Overall, according to an investigation by ‘Pulse’ magazine, 55 UK hospitals have reported breaches, including records dumped in public places or provided to the wrong patients.
The lack of a risk management policy was demonstrated emphatically in April 2014. In spite of claims that the (massive) UK national records database ‘has never been compromised,’ Freedom of Information disclosures showed four serious medical data security breaches since 2009.

French hospitals: laconic about cybercrime

France is in a similar quandary. It is implementing a single national medical database with information on 66 million residents. This complements an electronic medical record (known as DMP 2) with an open architecture designed to make it easier to share data among hospitals and healthcare professionals.
In May 2016, the journal ‘Le Nouvel Observateur’ noted that, though several French hospitals had been targeted by cybercriminals, there was a deafening silence about the issue. In addition, it said, there was little clarity about whether patients would be informed in case of a data breach. Especially alarming was the fact that only 50 experts were responsible for computer security across 1,000 French hospitals.

US Senate tightens the screws at end of 2012
In the US, meanwhile, the privacy of medical health data is codified by HIPAA, and reporting rules from 2009 require hospitals to notify both the authorities and the media if a data breach affects 500 or more patients. There are, however, no requirements for criminal prosecution.
Until November 2012, in spite of more than 22,000 complaints about HIPAA privacy violations, the US government had imposed just one fine. That month, after a particularly feverish spell of attacks, the US Senate took HHS to task in a public hearing. By June 2013, HHS had imposed fines totalling over $1.5 million (€1.35 million).

Howard University hospital attacked twice in 2012
2012, the year of the Senate hearings, was clearly a turning point in US attention to medical data safety.
In May, prosecutors charged Laurie Napper, a technician at Howard University Hospital, with using her position at the hospital to gain access to patients’ names, addresses and Medicare numbers, and selling this information. This came barely a few months after the same hospital had notified 35,000 patients that their medical data had been compromised.

US military medical records compromised
In November 2012, TRICARE, the health insurer for the US military, announced the theft of backup computer tapes with 5 million names, Social Security numbers, and, in some cases, clinical notes and lab test results. The fact that these records also contained the home addresses of military personnel added another category of security risk to the theft.

Whether due to larger fines for medical privacy violations and/or a fast-growing number of cybercriminals, the Ponemon Institute found that 40% of US healthcare organizations reported a criminal cyberattack in 2013, twice the 20% level of 2009.

After Chinese attack, FBI heightens attention to hospital cybersecurity
One key development has been the FBI’s entry in 2014 into hospital cybersecurity. One of the trigger events was a theft by Chinese hackers of data on 4.5 million patients held by one of the US’ largest hospital operators, Community Health Systems Inc.
Soon after, as noted previously, US health insurance giant Anthem Inc. reported what may be the biggest medical record hack in the world. Anthem holds data on 80 million Americans, including names, dates of birth, Social Security numbers, Medicare and health plan identification numbers, as well as diagnostic and medical/surgical procedural data. Ironically, only a few weeks before, Anthem’s CEO had announced that his company and the health insurance industry ranked at the bottom of the list in customer service.
The risk of attacks by hostile foreign interests was, however, not new. Indeed, in the tipping point year of 2012, Utah’s Department of Health reported that hackers from eastern Europe had stolen medical information on 800,000 people, or almost 25% of the State’s residents.

Shutting down a hospital: the problem of ransomware

Beyond medical identity theft lies ransomware, which may be the fastest growing security risk. Rather than stealing data, ransomware locks down systems and encrypts files. Typically, a pop-up screen then demands ransom in exchange for a key to decrypt files and return access to a user.
Ransomware offers one of the best risk-reward portfolios for criminals who target hospitals. The technology is relatively unsophisticated and versatile, and hackers can make money quickly via extortion rather than seeking to sell data on the black market.
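The defensive flip side can also be sketched briefly. One common detection heuristic is a ‘canary’ file that no legitimate user ever edits; if it suddenly changes, bulk encryption is probably under way. The Python sketch below is hypothetical in its paths and polling interval, and real endpoint tools combine many more signals.

```python
# Hypothetical 'canary file' monitor: decoy documents that no one
# legitimately edits. Any change to them suggests ransomware is
# encrypting the share. Paths and interval are illustrative.

import hashlib
import time
from pathlib import Path

CANARIES = [Path(r"\\fileserver\shares\.canary1.docx"),
            Path(r"\\fileserver\shares\.canary2.xlsx")]

def fingerprint(path: Path) -> str:
    """Hash the file so any modification is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def watch(poll_seconds: int = 10) -> None:
    baseline = {p: fingerprint(p) for p in CANARIES}
    while True:
        time.sleep(poll_seconds)
        for path, digest in baseline.items():
            # Deletion or modification of a canary is never legitimate.
            if not path.exists() or fingerprint(path) != digest:
                print(f"ALERT: canary {path} changed - possible ransomware;"
                      " isolate this share and its clients")

if __name__ == "__main__":
    watch()
```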
In February 2016, Hollywood Presbyterian Medical Center called in the FBI after ransomware forced its IT systems offline. Physicians could not access electronic records or communicate via email. Some emergency patients were diverted to other hospitals while outpatients missed treatments. Although initial reports of a $3.6 million (€3.24 million) ransom demand were later revised down to $17,000 (€15,300), the fact that ransom money was paid is likely to encourage copycat cybercriminals. The FBI recommends that organizations do not pay ransom.
At the end of March, MedStar Health, a ten-hospital group in Maryland with over 100 outpatient facilities and 30,000 staff, became the largest medical entity to be successfully attacked by ransomware. Though MedStar stated there was ‘no evidence of compromised information,’ the bulk of its electronic operations was shut down. This time, too, the FBI was called in.
By June 2016, at least a dozen US hospitals had been targeted by ransomware. The number is likely to grow.

Ransomware forces German hospital to use pen and paper, postpone surgeries
The threat of ransomware is also serious in Europe.
In February 2016, the respected German broadcaster Deutsche Welle (DW) reported that a number of hospitals in the country had fallen prey to ransomware, disrupting core healthcare services and internal systems. DW named several leading hospitals, including the Lukas Hospital in Neuss and the Klinikum Arnsberg, both in North Rhine-Westphalia.
The Lukas Hospital was forced to revert to phone calls, fax and pen-and-paper records for several weeks, with high-risk surgeries postponed until handwritten notes had been filed.
On the other hand, Klinikum Arnsberg fared far better. A quick response saved it after the ransomware, entering via email, was detected on one server. All other servers, some 200 in total, were switched off to prevent contagion.

From IP to terror: other cyber-risks associated with healthcare
The healthcare threat spectrum extends beyond hospitals.
In October 2013, the US Food and Drug Administration (FDA) reported an alarming security breach at its Center for Biologics Evaluation and Research. The hack compromised 14,000 accounts, including proprietary pharmaceutical company data.
Issues of intellectual property (drug formulae, manufacturing processes etc.) and trade secrets are of evident interest to competitors, both at home and abroad. This is no trifling matter, given the billions of dollars spent in developing and marketing a drug, and the billions more expected from its sale.
The interest in biologics in particular, shown by the hack at the FDA, is of concern since several biologic products have recently begun to come off patent, with many more expected to follow.
Last but not least, biological products include vaccines – with all their attendant implications for terrorist attacks. At the end of May 2016, one of France’s biggest hospitals, the Pitie-Salpetriere in Paris, was subject to a break-in at a laboratory storing bacteria. In November 2015, just after the Paris terrorist attacks, another city hospital, Necker, had reported the theft of Hazmat suits – which can be used to protect against bacteria/biowarfare agents. Whether there is a connection between the two is something one can only speculate about.
There will no doubt be other risks. For example, we know of one case of theft of a hospital’s fire safety plans, which identified storage areas for radioactive substances and hazardous waste. Here again, the authorities seem to be at a loss.
Until hospitals and other actors in the healthcare industry develop and implement security best practices, the threat of disruptions, caused by petty criminals and ranging through to foreign corporate spies and terrorists, will clearly persist.

The authors
Ashutosh Sheshabalaya and Antonio Bras Monteiro
SolvX Solutions
Email: office@solvx.com

SolvX provides security and risk consulting services out of offices in Europe, the Middle East and Asia.