DBT makes major stride with Hologic’s launch of innovative mammography system

At the European Society for Breast Imaging (EUSOBI) meeting last September in Berlin, Hologic officially launched the 3Dimensions™ mammography system, which offers a variety of groundbreaking features designed to provide higher-quality 3D™ images for radiologists, enhanced workflow for technologists, and a more comfortable mammography experience, with low-dose options, for patients (see featured item).
On this occasion, International Hospital talked to Lori Fontaine, Vice President of Clinical Affairs for Hologic.

Is the launch at EUSOBI only for Europe or is it global?
The 3Dimensions™ mammography system received CE Mark in July 2017, making it commercially available in EMEA, followed shortly thereafter by the U.S. launch in August 2017.

Can you give some details and figures on dose reduction for the new system?
We know that dose is a common concern across Europe, and the 3Dimensions system helps address this by providing low-dose options for patients, among many other benefits. The 3Dimensions system delivers a 45 percent dose reduction with a generated 2D image, compared to 2D FFDM alone.

Is the improvement in image clarity regardless of breast density likely to reduce the need for a secondary ultrasound in the screening of high density breasts?
We already know the 3Dimensions system’s Clarity HD high-resolution 3D™ imaging reduces recalls by up to 40 percent compared to 2D alone. Given that Clarity HD delivers exceptional 3D™ images regardless of breast size or density, the 3Dimensions system is an ideal option for women with dense breasts. This is especially true since the 3Dimensions system operates in tandem with Hologic’s 3D Mammography™ exam, the only mammogram approved by the U.S. Food and Drug Administration as superior for women with dense breasts compared to 2D alone. This further demonstrates that tomosynthesis should become the standard of care for breast cancer screening for women across the globe.

Do you have any information and figures on the adoption rate of DBT by radiologists in the various European countries? Are there significant variations between countries, or regionally between the US, Europe and Asia?
Digital Breast Tomosynthesis (DBT) adoption rates vary by country. While DBT has been approved in EMEA since 2009, the majority of EMEA countries limit the use of DBT to diagnostic imaging, as they have concerns regarding dose and reading time. Hologic remains at the forefront of technology innovation and is working to overcome these barriers, so that all women can be screened with DBT.

Hologic was the first company to receive FDA approval for DBT use in both the screening and diagnostic setting in the U.S., in 2011. Today, DBT is used in approximately 40 percent of all U.S. screening mammography exams and is covered by the majority of insurance companies. The evidence of the benefit of Hologic’s 3D Mammography exam as a better mammogram continues to expand, and it led to the addition of DBT to the National Comprehensive Cancer Network (NCCN) Guidelines in 2016. NCCN is recognized globally as an alliance of 27 U.S. cancer centers that develops recommendations designed to help healthcare professionals diagnose, treat and manage cancer care.

Medical errors: hospitals and doctors can and must do better

Primum non nocere (first, do no harm) remains a basic tenet of medical practice. Unfortunately, the complexities of modern medicine, the large pool of available medications and the multiplication of technical procedures, combined with the frequent difficulty of reaching definite diagnoses and the high number of medical professionals taking care of a single patient, have resulted in a growing number of medical errors, a significant part of which prove fatal for the patient.

Data on the number of deaths caused by medical errors are not readily obtainable; nevertheless, a number of recent studies in the US have reported figures greater than 200,000 deaths per year. For example, a patient safety expert team from Johns Hopkins University has calculated that over 250,000 deaths are caused by medical error in the US, based on an analysis of medical death rate data over an eight-year period. This figure was published last May in the BMJ and places medical error as the third highest cause of death, accounting for 10% of all US deaths. For the healthcare industry, this translates into about six potentially preventable deaths per year per US hospital – definitely not good statistics.

The situation is somewhat similar in Europe, even if there are no official figures at the EU level: Eurostat does not list medical error as a possible cause of death, since its statistics – like those of the US CDC – rely on the medical information contained on death certificates and on the coding of causes of death according to the WHO International Classification of Diseases (ICD). Results from German studies on patient safety show that close to 20,000 deaths are caused by preventable adverse events in the country’s hospitals. These deaths cover a wide range of preventable causes, including hospital-acquired infections, embolisms, surgical errors, delay in diagnosis (especially for pediatric patients) and misdiagnosis – the latter probably ranking quite high, even if it is very difficult to detect in research.

Apart from deaths, there is a much larger number of cases, up to 20-fold higher, where people suffer serious adverse effects, sometimes for the rest of their lives. In addition to the individual harm incurred, there is also a high cost for society, including additional healthcare expenditure, social costs and loss of economic capacity. Evidence shows that up to 70% of the harm caused by medical errors can be prevented through comprehensive, systematic approaches to patient safety. At the hospital level, there is an urgent need for action, not least by physicians – they should be the first to recognize that every single death caused by a preventable adverse effect is one too many.

Enterprise imaging – radiology positioned to drive new wave

Informed, data-driven decision-making is crucial for delivering quality healthcare efficiently. However, data in a health facility is often scattered across multiple sites and may not be available when clinicians need it. Many facilities, for example, run separate departmental systems, such as the Radiology Information System (RIS), Picture Archive and Communication System (PACS), Cardiovascular Information System (CVIS), Laboratory Information System (LIS), etc. Data in such systems is dispersed and often inaccessible, due to the presence of multiple IT silos. However, to permit the best use of healthcare resources and deliver the highest quality of care, all of these systems will need to interact with one another, and with the electronic medical record, too. This remains a major challenge: healthcare data remains fragmented and heterogeneous. Given the sheer volume of imaging data in a hospital, enterprise imaging is seen as the way to begin addressing such challenges.

Historical advantages of radiology
Experts generally consider radiology to be one of the best-placed clinical specialities to drive the integration of healthcare data at an enterprise level.
Radiology has some historical advantages for this mission. First, it has been an early adopter of advances in imaging technology and workflow.
Secondly, most radiology facilities have long since been digital and standards-compliant via protocols such as DICOM (digital imaging and communications in medicine). Given that radiology data is a critical component of a patient’s medical file, RIS and PACS can be appropriate launch pads for reconciling patient information, synchronizing order data and exchanging diagnostic results.
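As a simple illustration of what this standards compliance means in practice, the minimal Python sketch below (assuming the open-source pydicom library; the file path is a hypothetical placeholder) reads from a DICOM object the identifiers an enterprise platform would typically use to reconcile patient and order information:

```python
# Minimal sketch: extracting reconciliation keys from a DICOM object.
# Assumes the open-source pydicom library; the file path is illustrative.
from pydicom import dcmread

ds = dcmread("example_study/ct_slice_001.dcm")  # hypothetical path

# Standard DICOM attributes an enterprise platform can match on:
record = {
    "patient_id": ds.PatientID,        # links to the EMR master patient index
    "accession": ds.AccessionNumber,   # links to the RIS order
    "study_uid": ds.StudyInstanceUID,  # globally unique study key
    "modality": ds.Modality,
    "study_date": ds.StudyDate,
}
print(record)
```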
Many radiologists see themselves at the forefront of enterprise imaging by driving the agenda at the hospital management level and engaging with other image producers in their facility.

Need to cope with new demands

However, several issues have to also be taken into consideration.
The current generation of departmental PACS, in terms of core architecture and workflow components, dates to the mid-2000s, when systems were designed for use within a single imaging department. As a result, healthcare organizations now run multiple PACS, each aiming to provide a common standard of care. As care benchmarks evolve, these PACS must cope with ever-new demands.
Given the presence of disparate systems across different departments, an enterprise imaging strategy seeks to harmonize medical imaging across an entire network or organization. Until recently, many healthcare facilities used a ‘forklift’ approach to implement an enterprise PACS design across all departments. However, its limits soon became apparent – especially in terms of lengthy implementation procedures, mission creep and change management, as well as price. Most experts now propose an architectural design which accounts for multi-vendor integration. This is not only cost-effective but also minimizes the impact of radical change management across multiple sites.

Vendor neutral archives

In recent years, vendor neutral archive (VNA) technology has emerged to address challenges posed by proprietary systems. A VNA is an enterprise data storage and workflow solution whose goal is to manage and share large flows of information and address workflow challenges.
Data in a VNA is stored in non-proprietary formats, which allow open interchange. As a result, a VNA permits sharing of both DICOM and non-DICOM data.
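To make the open-interchange idea concrete, the hedged sketch below shows how a downstream system might list a patient’s studies from an archive that exposes the standard DICOMweb query interface (QIDO-RS); the endpoint URL and patient identifier are illustrative placeholders, not any specific vendor’s API:

```python
# Sketch: querying a DICOMweb-capable archive for a patient's studies (QIDO-RS).
# The endpoint URL and identifiers are hypothetical placeholders.
import requests

BASE = "https://vna.example-hospital.org/dicomweb"  # hypothetical VNA endpoint

resp = requests.get(
    f"{BASE}/studies",
    params={"PatientID": "HOSP-0012345"},          # placeholder identifier
    headers={"Accept": "application/dicom+json"},  # standard DICOM JSON model
    timeout=10,
)
resp.raise_for_status()

for study in resp.json():
    # Tag 0020000D is StudyInstanceUID in the DICOM JSON model
    print(study["0020000D"]["Value"][0])
```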
Coupled with the closely related concept of a universal viewer, a VNA allows healthcare facilities to store, distribute and view any electronically stored image without restriction.

IHE frameworks
Making such developments even more pertinent is the availability of best practice- and standards-based frameworks from Integrating the Healthcare Enterprise (IHE). These draw heavily on existing best practices and processes – for example, the very specific needs of a cardiology or orthopedic department, or the time-critical processes of an emergency department (ED).
IHE frameworks merge customization with a standards-based approach, to allow for rapid integration of systems and sub-systems and accelerate the adoption of information sharing across what were previously silos.
By avoiding information duplication and workflow disruption, IHE also achieves its goals without extra overhead and cost.  Indeed, one of the biggest barriers to system integration has consisted of disruption to an established care workflow.
At the same time, integration of the imaging workflow, from ordering through acquisition to reporting and billing, is considered a key factor in ensuring that those viewing an image remotely are fully cognizant of both its context and presentation.

PACS 3.0
Next-generation PACS systems (or PACS 3.0) are likely to incorporate enterprise workflow/worklist applications, based on VNA, according to a noted US imaging technology expert, Michael Gray.
Plug-ins to the VNA, in Mr. Gray’s view, will feature diagnostic display applications used by different imaging departments, whether or not the images are in DICOM. Non-DICOM applications would deploy a front-end application to create the study from individual images and associate the proper patient and study metadata to the study.
In PACS 3.0, individual physician worklists present a list of specific studies to be consulted, while the underlying workflow launches the most appropriate display application, based on a physician’s pre-defined choices and the study selected from the list. In effect, the enterprise workflow/worklist application becomes the shared entry point for all interpreting physicians in every imaging department while the VNA is the data repository.
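A minimal sketch of such routing logic is shown below; the physician names, study types and viewer applications are hypothetical illustrations of the concept, not any vendor’s actual worklist API:

```python
# Sketch of PACS 3.0-style worklist routing: pick a display application
# based on the physician's pre-defined preferences and the selected study.
# All names below are hypothetical illustrations, not a vendor API.

DEFAULT_VIEWER = "universal_viewer"

# Hypothetical per-physician preferences, keyed by study modality.
PREFERENCES = {
    "dr_rossi": {"MG": "mammo_workstation", "CT": "volumetric_3d_viewer"},
    "dr_kim":   {"US": "cardio_echo_viewer"},
}

def launch_viewer(physician: str, study: dict) -> str:
    """Return the display application for this physician and study."""
    prefs = PREFERENCES.get(physician, {})
    return prefs.get(study["modality"], DEFAULT_VIEWER)

# Example: a mammography study selected from dr_rossi's worklist.
study = {"study_uid": "1.2.840...", "modality": "MG"}
print(launch_viewer("dr_rossi", study))  # -> mammo_workstation
```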

Beyond radiology
Enterprise imaging is nevertheless targeted well beyond the requirements of the radiology department alone.  A large (and increasing) number of images are generated and interpreted in other departments – for example, from an orthopedic procedure.
There are also certain kinds of images which are not formally considered imaging studies – for example, those captured during a dermatology consultation or in the course of wound care treatment. Such data is, however, becoming clinically significant in areas such as personalized medicine, where it forms an important part of a patient’s medical record.
Such images need to be shared, sometimes immediately. For example, pre-surgical imaging of a complicated ankle fracture in the emergency department could require transmission not only to an orthopedic surgeon but also to a vascular surgeon – with regard to blood flow in the poorly-vascularized talus. In such a case, instant access to previous images of ankle fractures would clearly enable an emergency department to best interpret new images.
Such circumstances are also apparent in the case of patients who present at different providers, since a second provider is at a disadvantage without access to earlier images.

Enterprise imaging and patient care
Acquiring data from a range of systems in different departments demands buy-in from the top echelons of management and a commitment by all concerned members of the healthcare facility.
One argument for such alignment is the role of physicians – to provide the best-available patient care. Good enterprise imaging ensures that this is made possible by providing physicians with the most efficient tools and resources.
Indeed, it is not rare for the patient experience to get lost amid technology paradigm shifts or major process overhauls such as enterprise image/data integration. To avoid this and ensure maximum effectiveness, healthcare organizations need to focus closely on both the individual patient and the complete continuum of care.

From EMRs to image lifecycle management
Drivers of enterprise imaging also come from the side of the electronic medical record. Hospitals have been seeking to stretch the frontiers of the latter by enhancing communication of both data as well as images.  Enterprise platforms, once looked at as no more than a storage medium, are now being geared up to give a comprehensive view of a patient’s medical history.
One challenge here is the rapid growth in the volume of imaging data. This is compounded by fragmentation and an ad-hoc approach to image management.  As storage requirements have grown, data has also become more distributed in terms of multi-site PACS as well as storage tiers, based on clinical urgency or relevance as well as legal and regulatory requirements.
Enterprise imaging solutions aim at better control of the lifecycle of a medical image – not least by providing better control over storage capacities and aligning storage costs with operational priorities.
Hospital managers who are renewing or upgrading to a newer PACS system usually seek some degree of future proofing, in the form of scalable solutions and methods to manage a growing corpus of images, many of which are dated. Identifying older images which can be compressed or deleted saves on storage space.

The Enterprise Imaging Program at Cleveland Clinic
The prestigious Cleveland Clinic in Ohio provides a good definition of enterprise imaging strategy as a means to address the overarching need “for standardization of clinical image acquisition, management, storage and access.”
The Cleveland Clinic enterprise imaging program incorporates all producers into its clinical image library, which is connected to electronic medical records. In total, this includes images from 11 different healthcare service lines, in addition to radiology images. By the end of 2016, according to one report, 440 different image-generating devices residing outside the radiology service had been integrated.

Commercial solutions
Today’s marketplace already offers a range of enterprise imaging solutions for healthcare enterprises.
Typical examples include diagnostic-quality images provided to clinicians on demand, as well as interfaces with third-party applications to enhance programmes. Some focus on providing a comprehensive view of the healthcare workflow. Others improve image routing and support telemedicine services.

Emergence of artificial intelligence
One of the latest additions in the enterprise imaging arsenal is artificial intelligence (AI).
In recent years, as radiologists have been forced to cope with the explosion in medical imaging procedures and data volumes, AI has begun to show early promise. AI is also being used to directly help the care delivery process.
Some medical technology vendors have showcased AI applications integrated with their enterprise imaging platforms. These typically consist of imaging analytics software that assists radiologists in diagnosing diseases before symptoms occur and in interpreting findings more accurately. For example, machine vision AI algorithms pinpoint anomalies within images in real time, alerting radiologists to incidental findings. Physicians can then screen patients further for what may still be asymptomatic conditions but could develop into major disease.

No one doubts that radiologists will increasingly work with AI in the future, both to improve the technology itself and to reduce routine, repetitive tasks such as confirming line placements and scanning images for nodules. For its part, AI is also likely to become increasingly smart, improving efficiency, for example, by prioritizing cases, putting thresholds on data acquisition, escalating cases with critical findings to a radiologist’s worklist and providing automatic alerts to both radiologists and other concerned clinicians. Such steps would not only free up resources for additional testing but also improve patient care, thereby making radiologists even more integral to the care management process. These perspectives are, of course, central to a robust enterprise imaging strategy.

Investments in healthcare: a quest for improvement

Dr. Gianfranco Scaperrotta, the head of SS Senology Radiology at Fondazione IRCCS Istituto Nazionale dei Tumori (INT) in Milan, offers his perspective on what advisable investments healthcare executives should consider, pointing to inefficiencies in workflow and patient satisfaction in the stereotactic breast biopsy procedure to help illustrate his position.

by Dr. Gianfranco Scaperrotta

Healthcare executives – who are responsible for investment decisions – are constantly working to justify how a particular asset or purchase is beneficial to their facility. With multiple priorities to consider, from doctors’ and patients’ needs, to a facility’s financial goals and beyond – combined with budget limitations – the need to find and rationalize the right investment options can be particularly complex. This is largely caused by the demands being placed on facilities and doctors to work more quickly and efficiently. In an era marked by the concept of constantly doing more, faster and better, the search for the right investment essentially comes back to the same basic, and yet truly powerful idea: in the healthcare field, we are always on a quest for improvement.
 
One of the best ways to warrant an investment is to become immersed in the field’s overall functionality from a clinical, financial and patient perspective to unearth any weaknesses. There are certainly processes and procedures in each part of the healthcare industry that can and should be improved, and that, if effectively handled, could have a positive, widespread ripple effect across facilities. 
 
Breast biopsy procedure
The radiology sector, for example, is one of many in healthcare that has room for improvement. As the head of SS Senology Radiology at Fondazione IRCCS Istituto Nazionale dei Tumori (INT) in Milan, I feel this is particularly apparent when it comes to the current state of the stereotactic breast biopsy procedure. Throughout my 25-year career, I have performed many breast biopsy procedures, and although no two procedures are exactly alike, there are a few consistent aspects worth noting that help showcase the need for change. This becomes evident when considering the overall procedural experience, from start to finish.
 
More often than not, when patients come in for a breast biopsy, they’re already feeling anxious and uncertain about the procedure before they even enter the room. In addition to fearing a needle in the breast, they are likely contemplating the unsettling idea that they may be diagnosed with breast cancer. Their level of discomfort may grow while waiting for the clinicians to enter the room and begin to prepare for the procedure. To begin, the technologist will help the patient get into the appropriate position to ensure the biopsy needle is targeting the proper area of the breast, where the suspicious tissue was noted on the mammography exam. Depending on where the calcification is in the breast, in some cases, the patient must be placed in a particularly awkward position in order for the needle to reach the correct target area, and she must hold her body in that same position until the procedure is complete.  
 
At this point, the radiologist collects the tissue samples, which then require verification. This process varies depending on the facility. Whereas I have the resources to verify my patients’ samples in the same room where the biopsy is taking place, there are many cases in which the clinician must prepare the samples for transport, and then leave the procedure room to image and verify the samples on another piece of imaging equipment, which may already be in use for another patient and therefore cause scheduling delays. During this time the patient must remain in compression, which may increase her anxiety. In some cases, the clinician will determine the need to take more samples from the patient, making the procedure time lengthier than anticipated. After the tissues are verified, the breast biopsy procedure can conclude, yet the patient must first await her results, which will come later, after the samples have been sent to and evaluated by pathology. 
 
This one scenario in the radiology field demonstrates a few issues that must be tackled. First and foremost, patients are extremely anxious, and radiologists need to help ease their concerns. Perhaps they could be helped by enhancing the ambiance of the procedure room with more calming visuals or music to reduce tension. Additionally, positioning patients when their calcifications are in unusual areas can add to their discomfort. Similarly, lengthier procedure times  only add to patient apprehension, while also slowing radiologists down, which can affect their subsequent appointments. Lastly, patients must still wait for the samples to go through the pathology process before receiving a diagnosis.  

Patient satisfaction
It’s clear that today’s stereotactic breast biopsy could benefit from better workflow efficiency, yet this deep dive into the procedure also reveals a need for improved patient comfort. Together, time-savings and comfort contribute to overall patient satisfaction, and the fact that the stereotactic breast biopsy falls short in this area presents an opportunity for improvements to be made. For any doctor and facility, providing a positive patient experience and increasing satisfaction is crucial for success. Not only is it important to deliver high quality, swift care for patients for their health and happiness, but it’s also worth recognizing the business logistics associated with patient satisfaction – positive experiences can result in future referrals. Additionally, fast and efficient procedures mean that radiologists can get more work done in a day, furthering the overall productivity and financial success of a facility. 
 
In my opinion, when healthcare executives are thinking about their next investments, they should not only consider a sector’s inefficiencies, but also take special note of those shortcomings that have the widest impact across the facility, like workflow and patient experience. Even beyond investments, it is human nature to constantly seek improvements. For example, I envision that one day radiologists will take an entirely new approach to the biopsy procedure, perhaps removing calcifications in their entirety at the outset so that potential cancer treatment can start early, instead of taking smaller samples to first test the tissue. I encourage clinicians to similarly identify inefficiencies in their respective fields and search for new, better ways of working. Let us challenge what we know and never tire in our quest to keep improving.

The chief information security officer – new challenges, new responsibilities

Hospitals depend on information to effectively manage and deliver health services. Given the unremitting escalation in cyber-attacks and patient data breaches at hospitals today, the role of the CISO (Chief Information Security Officer) has moved to centre stage.  
As their own responsibilities have expanded, hospital CISOs have also faced the need to understand perspectives of other boardroom leaders. These range from business practices to risk management, the economics and cost-benefit of security as well as legislation about privacy and liability. Indeed, some American hospitals refer to the CISO as Chief Information Privacy and Security Officer.

Data breaches and ransomware threats escalate
The frequency of reported data breaches at hospitals has grown especially sharply in the US. Over just two days in the middle of September this year, Children’s Hospital Colorado, Morehead Memorial in North Carolina and Georgia’s Augusta University Hospital reported security breaches which potentially affected personal health data of several thousand patients.
Europe has also seen its share of attacks. In May 2017, the National Health Service in Britain was hit by a ransomware attack which crippled the ability of some 16 units to access patient data.  In July, an insider breach at health insurance giant Bupa exposed data of 108,000 customers.
In France, over 1,300 attacks on hospitals and healthcare facilities were voluntarily reported to the Ministry of Health in 2016.

Scale of threat grows, so do delays in response
Nevertheless, a data breach scandal in another business sector illustrates the sheer scale and impact of the phenomenon. In September, Equifax, a major US credit reporting agency, announced its IT systems had been compromised, potentially exposing credit card details, Social Security numbers, and other personal information for up to 143 million Americans.
Although critics of Equifax complained about the delay, the longest gap in discovery of a breach concerns Tewksbury Hospital in Massachusetts, which took 14 years to discover that a clerk had been inappropriately accessing patient records since 2003.

The role of the CISO
Such events have propelled CISOs to the frontlines of information security, strengthening a trend that dates to the late-2000s.
In 2011, a PricewaterhouseCoopers (PwC) survey found that 80% of businesses had a CISO or equivalent, compared to less than half in 2005. Almost two-thirds reported to the Chief Executive or the Board of Directors, and the rest to a Chief Information Officer (CIO). 

60 percent of US healthcare facilities have CISO role

The situation in the healthcare sector has mirrored, if slightly lagged, this trajectory. In 2017, 71 percent of respondents to a US cybersecurity survey by HIMSS (the Healthcare Information and Management Systems Society) stated their organizations allocated a specific budget for cybersecurity.
Almost half said this was over 3 percent of the budget, while one in ten said the share was more than 10 percent. Another interesting finding from the HIMSS survey was that 60 percent of respondents said their organizations employed a CISO or senior information security leader.

The CISO in Europe
The above figures refer to the US. Europe is likely to be some way behind; nevertheless, it too is catching up. In France, for example, the Association for the Security of Health Information Systems (APSSIS) made specific recommendations at a recent annual conference on the role of the CISO (known in French as ‘responsable de la sécurité des systèmes d’information’, or RSSI) and the need for close coordination with the CEO.
In the UK, HCA Healthcare, London’s largest private hospital group (including top facilities such as The Harley Street Clinic, Princess Grace Hospital and The Wellington Hospital) announced an opening for a CISO at the end of August 2017. The HCA described the CISO job as being “responsible for providing strategic leadership and operational oversight for the security of information technology and systems and Information Governance…” Specific tasks which were identified include risk assessment and management, patient privacy, development of policies, standards, procedures, and guidelines, as well as threat/incident response and corporate communications on security.

The CISO and compliance: ISO standards
CISOs are in fact responsible for information-related compliance in all business sectors. Compliance principally involves two information security frameworks published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

ISO/IEC 27001:2013
The first, ISO/IEC 27001:2013, specifies requirements “for establishing, implementing, maintaining and continually improving information security management.” The second, ISO/IEC 27002:2013, is a code of practice that provides implementation guidance. It focuses on the confidentiality, integrity and availability of information, and offers best-practice recommendations. Both are applied internationally.

ISO 27799:2016 focuses on healthcare

Hospitals, nevertheless, face very specific information security challenges. These are embodied in a further standard, ISO 27799:2016, which seeks to protect the confidentiality, integrity and availability of personal health information. This ISO standard provides implementation guidance for the controls in ISO/IEC 27002:2013 and supplements them where necessary, to make them relevant to health-specific information security requirements.

ISO 27799:2016 applies to information in the form of words and numbers, sound recordings, drawings, video and medical images, whether stored in print, in writing on paper or electronically. Also covered are the means used to transmit the information – by hand, by fax, over computer networks or by post.
It is important to note that although ISO 27799:2016 and ISO/IEC 27002:2013 jointly define information security requirements for healthcare, they do not specify how these should be met. In other words, they are technology-neutral.

Differences between ISO and HIPAA
ISO 27799:2016 is, however, not a legal requirement, unlike HIPAA (the Health Insurance Portability and Accountability Act), which regulates the security and privacy of health information in the US, though the two have much in common. Nevertheless, for hospital CISOs, the difference is a major factor.
The latest Data Breach Litigation Report from St. Louis law firm Bryan Cave reports 76 class action data breach lawsuits in 2016, up by 7 percent from the previous year.
However, these actions are potentially only the tip of an iceberg, with only 3.3 percent of publicly reported data breaches leading to litigation. What is more pertinent to hospital CISOs is the fact that 70 percent of publicly reported breaches related to the medical industry, with negligence accounting for 95 percent of all cases.

The Common Security Framework
In the late 2000s, an initiative known as the Common Security Framework (CSF) sought to become the overarching framework to comprehensively map different security standards and practices and provide a one-stop solution for hospitals and the healthcare sector. It was established by the Health Information Trust Alliance (HITRUST) – a US-led healthcare industry organization which has sought to ensure that information security becomes central to both the adoption of technology and the exchange of health data.

HITRUST in the US
HITRUST, in many senses, marks the coming of age of the CISO, in the US. Its founders consisted of CISOs from a broad range of healthcare actors, including Blue Cross Blue Shield, CVS Caremark,  Hospital Corporation of America, Humana and Kaiser Permanente, alongside top executives from Cisco Systems, Johnson & Johnson Health Care Systems and Philips Healthcare.
HITRUST has however yet to make any impact in Europe, where attention to healthcare information data security has been directed either to the electronic health record or included within the broader ambit of protecting personal data.

The Smart Hospital in Europe
Indeed, CISOs in Europe’s hospitals pay far greater attention to ISO 27799:2016 and ISO/IEC 27002:2013, with a leadership role at ISO taken by CEN, the European Committee for Standardization. Recently, this has been accompanied by recommendations from ENISA (the European Union Agency for Network and Information Security).
As part of the so-called Smart Hospital programme, ENISA has specified good practices for hospitals, with explicit mention of the role of the CISO. Nevertheless, ENISA too takes cognizance of the central role of ISO and the “2700x series of standards.”

National initiatives
There are several national initiatives, too. In France, for example, APSSIS (the Association for the Security of Health Information Systems) has played a major role in charters to be signed by staff within territorial hospital groups (GHT), so as to make them aware of best practices in computer security.

In Germany, ZVEI (the German Electrical and Electronic Manufacturers’ Association) has published guidelines on the use of IT in medicine, including what it calls “secure medical subnetworks”. In February, ZVEI released a position paper on standards for the use of electronic products used in a medical setting and the legal obligations of operators using such systems.
One of the nightmare scenarios here is, of course, the hacking of medical devices. In 2016, Johnson & Johnson warned customers about a security bug in one of its insulin pumps, while St. Jude has sought to deal with the fallout of vulnerabilities in some of its defibrillators and pacemakers.

Health-specific experience
The issue of health-specific technical experience is now driving recruitment of hospital CISOs. Healthcare has lagged behind sectors like banking and retail with regard to IT adoption. Indeed, even when hospitals began to implement IT, functionality rather than security was the priority. As a result, most hospitals have a back office choked with legacy applications, often numbering in the thousands. Knitting them into a secure architecture is hardly straightforward.
One consequence of such factors is an inadequacy in the number of IT professionals familiar with both healthcare and security. 

Training and certifications
To access the requisite talent, some argue for jettisoning the search for healthcare experience and instead hiring an experienced CISO from another industry, followed by training in healthcare issues. Others favour the opposite: looking for talent in healthcare IT and training them in security.
The College for Healthcare Information Management Executives (CHIME), and its affiliate, The Association for Executives in Healthcare Information Security (AEHIS) have launched programmes directed wholly at training hospital CISOs.
The CHIME Certified Healthcare CIO (CHCIO) programme is in fact the first certification programme exclusively for CIOs and IT executives in the healthcare industry. CHIME members who have been in a healthcare CIO or equivalent position for at least three years and want to enhance their professional stature are eligible to become certified. Currently, over 400 IT professionals are CHCIO-certified. These figures are echoed on professional forums such as LinkedIn, which lists 240 CISOs at hospitals, out of a total of over 7,500.

For now, generally speaking, one is more likely to find CISOs at larger hospitals and academic medical centres in both Europe and the US. Mid-sized facilities still assign the CISO role to a CIO (Chief Information Officer), supported by IT staff who devote part of their time to security issues. Such a piecemeal approach is, however, fast revealing its limitations, as shown by the growing wave of cyberattacks.

New frontiers in point-of-care diagnostics – trends in genetics, biosensors and microfluidics

Almost precisely a decade ago, the US National Institutes of Health remarked that point-of-care (POC) testing might offer a paradigm shift towards predictive and pre-emptive medicine.
Recent advances in areas such as genetics testing, biosensors and microfluidics continue to enthuse proponents of such scenarios.
However, several challenges still need to be addressed, along the way.

Quicker, better and cheaper
POC testing, which simply means diagnostic tests performed near the patient rather than in a clinical laboratory, provides diagnostic information to physicians and/or patients in near-real time. Samples do not need to be transported, nor results collected. Short turnaround times are accompanied by high sensitivity and a sample-to-answer format, as well as reduced costs to the health service.

Push-and-pull
Unlike many other medical innovations, the push of POC technology has been accompanied by a pull from users. Patients find POCs convenient and empowering. Many POCs allow them to monitor their health and medical status at home. Alongside the growing availability of medical information on the Internet, and other enabling technologies such as telemedicine, POCs also mark the coming of age of personalized medicine.

Classifying POC tests
POC tests can be broken down in terms of size/disposability and complexity. At one end are small handheld tests, above all for glucose, and lateral flow strips which determine cardiac markers and infectious pathogens, or confirm pregnancy. 
In recent decades, strip technology has been coupled to meter-type readers, typified by the now widely used glucose meter. Due to their compact nature, such POC tests are often specialized and limited in overall functionality. However, some can be quite sophisticated. New POC tests for early detection of rheumatoid arthritis, for example, require only a single drop of whole blood, urine or saliva, and can be performed and interpreted by a general physician within minutes.
On the other side of the equation are laboratory instruments, which have been steadily reduced in size and complexity. Recent launches include small immunology and hematology analysers. These POC tests provide higher calibration sensitivity and quality control and are used for more complex diagnostic procedures. Such devices have been accompanied by increasing levels of automation, which translates into increased speed; however, automation also sometimes creates challenges in training users.

Technology drivers
Three key technologies currently drive the POC market: genetic tests, biosensors and microfluidics. Combinations of biosensors and microfluidics have recently been developing at an especially dramatic pace.

Genetic testing
Traditionally, genetic testing involved DNA analysis to detect genotypes of interest, either for clinical purposes or related to an inheritable disease. However, results took days or weeks, limiting the applicability of genetic testing in a POC setting.

Emergence of molecular genetics
In recent years, molecular genetics has emerged as one of the most exciting frontiers for POC testing. It detects the DNA- and RNA-level abnormalities that provoke and fuel many diseases. As a result, it offers precise diagnosis, determines the susceptibility of a patient to a specific disease and assesses his or her response to therapy. Molecular diagnostics can also establish a patient’s prognosis over time far more scientifically than what is often no more than a physician’s informed guess.
One of the first POC gene tests was US biotech firm Cepheid’s GeneXpert, developed to detect the chromosome translocation associated with chronic myeloid leukemia. The small benchtop device provided results in less than two hours, with minimal manual labour involved.
Several companies have been developing tests to analyse genetic polymorphisms which influence the effectiveness of drugs. For instance, Spartan of Canada has developed a one-hour test to analyse CYP2C19, the gene encoding the cytochrome P450 enzyme that activates the antiplatelet drug clopidogrel. Certain alleles of the CYP2C19 gene can impair the enzyme’s ability to metabolize the drug, leading to major adverse outcomes. Others are developing quick-turnaround tests (below 20 minutes), for instance to detect polymorphisms associated with warfarin response, in order to guide dosage.
These developments focus on analysing very specific targets, with clinical decisions based on a handful of expected results. POC testing in such contexts evidently saves time and permits faster patient care.

Gene sequencing: challenges and breakthroughs

The case is different when the POC effort involves sequencing a gene or a whole genome. This is largely because the interpretation of (otherwise quick) results is still time-consuming and needs trained experts.
In spite of this, some innovators are confident about the opportunity for handhelds in genomic sequencing. MinION is a 90 g handheld device, seen by its developer Oxford Nanopore as a first step to ‘anything, anywhere’ sequencing. MinION, which has been used in UK hospitals and in West Africa during the Ebola outbreak, performs nanopore-based sequencing within just a few hours.
There is much more, however, that remains to be smoothed out. MinION shows a high error rate compared to existing next generation sequencing (NGS) platforms and it is impractical for use with larger genomes.
As these kinds of POC genomic technologies continue to develop, other enabling innovations are also likely to make an impact. For example, some researchers have harnessed mobile phone technology for gene variation analysis and DNA sequencing. Its implications in a POC setting would clearly be massive.

Biosensors
As mentioned above, another technology driving POC diagnostics consists of biosensors.
A biosensor combines a biological sensing material, closely associated with a transducer, to detect the presence of specific compounds.
A biosensor system consists of a biospecific capture entity to detect the target molecule, a chemical interface to control the system function and a transducer for signal detection and measurement. Transducers can be electrochemical, optical, thermometric, magnetic or piezoelectric. Their aim is to produce an electronic signal proportional to an analyte or a group of analytes.
The biospecific capture entity (typically whole cells, enzymes, DNA/RNA strands, antibodies, antigens) is chosen according to the target analyte, while the chemical interface ensures the biospecific capture entity molecule is immobilized upon the relevant transducer. 
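The transducer’s role – producing a signal proportional to analyte concentration – can be illustrated with a toy calibration exercise: fit a line to the signals measured for known standards, then invert it to read off an unknown sample. All values below are invented for illustration:

```python
# Toy illustration of biosensor calibration: the transducer output is
# assumed proportional to analyte concentration, so a line fitted to
# known standards lets us estimate the concentration of an unknown.
# All numbers are invented for illustration.
import numpy as np

conc_standards = np.array([0.0, 2.5, 5.0, 10.0])  # analyte conc. (mmol/L)
signal = np.array([0.02, 0.51, 1.03, 2.01])       # transducer output (a.u.)

slope, intercept = np.polyfit(conc_standards, signal, 1)

def concentration(measured_signal: float) -> float:
    """Invert the linear calibration to estimate analyte concentration."""
    return (measured_signal - intercept) / slope

print(f"{concentration(1.40):.2f} mmol/L")  # unknown sample readout
```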

Key requirements
One key requirement in a biosensor is selective bio-recognition for a target analyte, and the ability to maintain this selectivity in the presence of interference from other compounds. The selectivity depends on the ability of a bio-receptor to bind to the analyte. Bio-receptors are developed from biological origins (e.g. antibodies) or patterned after biological systems (such as peptides, surface- and molecularly-imprinted polymers).
The second requirement in a biosensor is sensitivity. This depends on a wide range of factors, such as the properties of the sensor material, the geometry of the sensing surface and resolution of the measurement system. One of the most important factors in this context is surface chemistry, used to immobilize the bio-recognition element on the sensing surface.

BioMEMS
In the field of POC, there has for some time been considerable excitement about biomedical (or biological) microelectromechanical systems, known by their abbreviation BioMEMS.
BioMEMS are biosensors fabricated on a micro- or nano-scale, resulting in higher sensitivity, reduced detection time and increased reliability. Reagent volumes are also reduced due to the smaller size of BioMEMS, which increases their operational cost-effectiveness.
The miniaturization inherent to BioMEMS means greater portability, which is of course a cardinal requirement for POC applications.
Next-generation POC systems are expected to go beyond diagnostics to advance warning, by ‘learning’ about patients (including vital signs such as heart rate, oxygen saturation and changes in plasma profile) and discovering problems in advance through the use of sophisticated algorithms. Such monitoring systems are likely to comprise different types of wearable or implantable biosensors, communicating via wireless or 4G links to a patient’s smartphone and onwards to a medical centre. Such systems would dramatically reduce response time and make testing available in environments where laboratory testing is simply not feasible.

Microfluidics: lab-on-a-chip
Microfluidic devices, also known as labs-on-a-chip, miniaturize and integrate most of the functional modules used in central laboratories into a single chip. The technology is seen as a high-potential driver of POC diagnostics, not least in developing countries.
There are three principal families of POC microfluidic tests: lateral flow devices, desktop or handheld platforms and (emerging) molecular diagnostic systems. These range from zero-instrumented POC devices for the detection of pathogens to fully-instrumented equipment such as next-generation sequencing (NGS) platforms and droplet-based microfluidics.
Microfluidic applications have grown at a dizzying speed, due to the inherent advantages and promises of the technology. These include the ability to manipulate very small volumes of liquids and perform all analytical steps in an automated format – from sample pretreatment, through reaction and separation to detection. Assay volumes are therefore reduced dramatically, while sample processing and readout are accelerated. Other salient features of microfluidics consist of parallel processing of samples with greater precision control, and versatility in formats for different detection schemes. These of course translate to greater sensitivity.

Technology trends
Key technology trends in the field of microfluidics, which have a direct bearing on POC use, include growing miniaturization, higher-efficiency chemical reagents, accelerated sampling times and larger throughputs in synthesis and screening. As with BioMEMS biosensors, the advantages of microfluidics also include low device production costs and disposability.
Some researchers are looking at the commoditization of microfluidics – for example, mass production using inexpensive materials such as paper, plastic and thread, coupled with cost-effective manufacturing processes.
Paper has drawn the highest degree of attention, given that it is lightweight, biocompatible with assays and ecologically friendly. In terms of operation, paper microfluidics is seen as an innovative means to escape the limitations of external pumps and detection systems: flow in paper is driven by simple capillary forces. Another major advantage of paper is its application in colorimetric tests for detection by the naked eye. Given the proliferation of smartphones equipped with high-resolution cameras, some experts see paper microfluidics becoming the tool of choice for POC diagnostics in developing countries.
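The capillary-driven flow underlying such pumpless paper devices is commonly described by the classical Lucas–Washburn relation, given here in its simplified, uniform-pore form:

```latex
% Lucas-Washburn relation: penetration length L after time t, for a liquid of
% surface tension gamma and viscosity mu wicking into pores of effective
% radius r with contact angle theta (simplified, uniform-pore assumption).
L(t) = \sqrt{\frac{\gamma \, r \, t \, \cos\theta}{2\mu}}
```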

Biosensor-microfluidics combinations: developing at a ‘violent’ pace
Efforts to merge biosensors with microfluidics have been demonstrated since the mid-2000s, and progress has been encouraging. Last year, a University of Copenhagen research team, led by biotechnologist Alexander Jönsson and visiting Canadian scientist Josiane Lafleur, noted that the “marriage of highly sensitive biosensor designs with the versatility in sample handling and fluidic manipulation” offered by microfluidics promises to “yield powerful tools for analytical and, in particular, diagnostic applications.” Their article, ‘Recent advances in lab-on-a-chip for biosensing applications’, published in the February 2016 issue of the journal ‘Biosensors and Bioelectronics’, observed that the area where microfluidics and biosensors converge is “rapidly and almost violently developing.” Nevertheless, the authors also found there is still much to be done, remarking that “solutions where the full potentials are being exploited are still surprisingly rare.”

Pulmonary drug delivery systems – addressing old challenges, heralding new markets

Novel drug delivery systems have been the subject of research for decades, because of a host of limitations with oral administration – the most widely-used route for administering medicine – and challenges with several available alternatives. One of the most exciting new areas consists of pulmonary drug delivery systems, by which medication is delivered through the lungs. The harnessing of processes used in microelectronics and nanotechnology holds out the promise of a revolution in therapeutic medication.

The oral route: difficulties across generations affect compliance
In spite of assumptions about convenience, oral dosage forms are not universally accepted. A recent study called ‘A Hard Truth to Swallow’ showed that over 55% of people, regardless of age or gender, faced “swallowing difficulties when taking tablets or capsules.” The study, by Spiegel Institut in Mannheim, surveyed 2,000 people in Germany and the US.
Surprisingly, although 44% of participants older than 65 years were affected, 70% of respondents in the 16–34 age group also reported problems – for example, with regard to swallowing, taste or odour, and irritation of the digestive tract. This, in turn, clearly impacts compliance.

The challenge of hepatic first pass metabolism
Broadly speaking, oral drug delivery faces challenges of low bioavailability and limits in the duration of therapeutic action.
A key problem is what is known as hepatic first pass metabolism (or pre-systemic metabolism): a phenomenon by virtue of which the concentration of a medicinal product is reduced (in some cases very sharply) before it reaches systemic circulation. The process involves the liver, to which a drug is carried from the gut wall via the portal vein before reaching the rest of the body. The liver is biochemically selective and metabolizes drugs, in some cases to a massive extent, transferring only a part of the active ingredients to the circulatory system. As a result, there are marked differences in the effectiveness of oral drugs, due to variations in the degree of first pass metabolism.

IV administration
Bioavailability (BA) is defined as the proportion of an administered dose which reaches systemic circulation, and is considered one of the principal pharmacokinetic properties of drugs.
Given this, intravenous (IV) administration of a medicine means 100% bioavailability, which is why some consider IV administration to be a form of gold standard. The effects of IV medication are dependable. The entire administered dose immediately reaches systemic circulation. In turn, this allows for precise titration against a patient’s response.
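Absolute bioavailability for any other route is conventionally estimated by comparing exposure – the area under the plasma concentration–time curve (AUC) – after the test dose with that of an IV reference dose, as in the standard pharmacokinetic relation below:

```latex
% Absolute bioavailability F of an extravascular (e.g. oral) dose, estimated
% from areas under the plasma concentration-time curve (AUC); F = 1 for IV.
F = \frac{AUC_{\mathrm{oral}}}{AUC_{\mathrm{IV}}} \times \frac{Dose_{\mathrm{IV}}}{Dose_{\mathrm{oral}}}
```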
However, IV administration has several limitations. It requires a functioning cannula, typically inserted in a hospital, clinic or at a patient’s bedside – both due to procedural requirements and the need to avoid infections. Together, these requirements mean that IV administration demands more staff and money. Finally, the process of cannulation can be distressing, especially for small children or those with needle phobias.
Indeed, even in a hospital setting, most IV patients are switched as soon as possible to oral therapy; the only exceptions are those critically ill or unable to absorb oral medications.

Injections, suppositories and topicals
Oral formulations have sought to address some of their inherent and long-evident limitations, for example through slow- or extended-release preparations. However, as far as the issue of hepatic first pass metabolism is concerned, there is little reason to celebrate.
Instead, research has been focused on alternative routes of administration which, like IV, avoid first-pass effects, but do not necessarily require a clinical setting. Traditional alternatives include topical medications, intramuscular/subcutaneous injection and rectal administration via suppository drugs. Each of them continues to be investigated. All have pros and cons.

Topical administration is non-invasive and straightforward. It is also associated with significant patient satisfaction. However, most drugs have a high molecular weight and are poorly lipid soluble, and cannot be absorbed via skin or mucous membranes. Even when they are, the process is slow.

Injections have far better absorption profiles and are preferred for drugs with low oral BA or those requiring a long duration of action, such as some psychotropic medications. Onset is also more rapid than with the oral or topical route. However, absorption via injection can be unpredictable when a patient is poorly perfused. Like IV, injections can also frighten children and needle phobics.
For their part, rectal suppositories also offer good absorption, since hemorrhoidal veins drain directly into the inferior vena cava and thus bypass the hepatic metabolism challenge. However, although onset of action is fast, the duration of action is short. In addition, the absorptive capacity of the rectal mucosa is lower than that of the small intestine. Finally, rectal administration can provoke inherent feelings of resistance or revulsion, especially in adults.

Pulmonary delivery: the promise

In the light of all this, pulmonary drug delivery systems (PDDS) may offer a promising new alternative.
PDDS offers extremely fast absorption and onset of therapeutic action, due to the large surface area and thinness of the respiratory endothelium. Plasma profiles after pulmonary delivery closely duplicate those of IV administration. As a result, it serves to reduce dose size and dosing intervals, which also helps to diminish side effects.

Aerosols and intra-tracheal inhalations
PDDS administers drugs to the lungs via the nasal or oral route, using two techniques: aerosol and intra-tracheal inhalation.
Aerosols provide more uniform distribution and greater penetration into the peripheral (alveolar) region of the lung. However, aerosol delivery is expensive, and it is difficult to measure the precise dose once inside the lungs.
Intra-tracheal inhalation (or instillation) is a much simpler and cheaper process than aerosol delivery. It uses a syringe to deliver a medicated solution directly into the lungs. This addresses one of the major problems with aerosol delivery: quantifying the amount of drug delivered into the lungs.
Particle aerosol inhalers, in particular, are now increasingly commonplace for treating respiratory disease. Nebulizers, dry powder inhalers (DPI) and pressurized metered dose inhalers (pMDI) allow for local delivery of high concentrations of therapeutics in the lung, in many cases avoiding toxicities associated with oral or even injectable therapies.
Together, pMDIs and DPIs are estimated to deliver more than 90% of inhaled medications.

New PDDS applications
PDDS has also established its utility in emergency situations, given its absorption advantage.
One of the greatest opportunities for PDDS is seen in macromolecules such as peptides and proteins, which usually need to be administered via injection (e.g. insulin). However, more experience with PDDS is required, especially regarding potential side effects after routine use.

Challenges for PDDS
PDDS, however, still faces limitations.
The first is that the particles to be inhaled need fairly precise and reproducible aerodynamic properties – diameter, density and velocity – in order to successfully transit the nose and mouth and their filtration systems, which are designed to keep such matter out. As a result, a certain degree of drug deposition in the nasal and oral passages is unavoidable.
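The interplay of diameter and density is usually summarized by the particle’s aerodynamic diameter; one standard approximation from aerosol science (assuming roughly spherical particles) is shown below:

```latex
% Aerodynamic diameter d_a of a particle of geometric diameter d_g and
% density rho_p; rho_0 is unit density (1 g/cm^3) and chi the dynamic
% shape factor (chi = 1 for a sphere). Particles with d_a of roughly
% 1-5 micrometres are generally considered best suited to deep-lung deposition.
d_a = d_g \sqrt{\frac{\rho_p}{\chi \, \rho_0}}
```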
Secondly, once in the lungs, the particles must overcome the pulmonary phagocytic barrier to release drugs at the required rate in order to achieve the intended therapeutic effect. For successful PDDS, designers must take careful account of properties such as pH value, ionic strength etc. which can affect the release of the drug, and thus its therapeutic effects.
Finally, PDDS is always accompanied by wastage of the drug: for physical reasons, a significant part of the drug is retained in the container.

As a result, pulmonary drug delivery remains inefficient, sometimes strikingly so. In spite of the growth in their availability, dose delivery efficiencies for dry powder asthma inhalers are estimated at just 3-15% for children and 10-30% for adults. The most advanced pMDIs deliver just 60% of inhaled material to the bronchial airways. These were some of the findings of a review entitled ‘Targeted drug-aerosol delivery in the human respiratory system’, published in a 2008 issue of the ‘Annual Review of Biomedical Engineering’.

Lessons from microelectronics manufacturing
In recent years, researchers have sought to address some of the key challenges of PDDS.
These, as we have noted, concern aerodynamic factors such as diameter and density of the particles.
Conventionally, pharmaceutical aerosols for DPIs are manufactured by milling (micronization) or spray drying techniques. These lead to wide particle size distributions and limited control over particle shape. Additional challenges include the need for non-agglomerating powders with the active ingredients, especially when they concern products such as proteins and monoclonal antibodies.
Recently, some manufacturers have sought to learn from the microelectronics industry by seeking to generate high-precision aerosol particle-based respiratory drug delivery systems. Such particle engineering techniques have shown special promise for targeted pulmonary delivery, when combined with inhalable nanoparticles, especially in solid-state dry powders.

PRINT and nano-particles
One leading example is PRINT (Particle Replication in Non-Wetting Templates), which co-opts the precision and nanoscale spatial resolution of lithographic techniques used by the microelectronics industry to provide unprecedented control over particle size and shape.
A 2013 edition of ‘Angewandte Chemie International Edition’ describes PRINT as “a continuous, roll-to-roll, high-resolution molding technology which allows the design and synthesis of precisely defined micro- and nanoparticles.”
PRINT’s micromolding enables the formulation of particle systems of small molecules, biologics and oligonucleotides – all of which hold special promise for next-generation therapeutic PDDS applications.  In itself, the technique is highly versatile and is also being researched for application to oral and topical dosage forms.
The PRINT manufacturing process has begun to be tested for clinical applications. In the US, Liquidia Technologies and Accelovalence have completed Phase I and II studies to use PRINT to produce GMP-compliant bioabsorbable particles that improve the immune response and efficacy of seasonal influenza vaccines, at a scale relevant to clinical development.

Other approaches: iSPERSE
Other research efforts focus on chemistry. For example, another US firm, Pulmatrix, has recently been awarded a patent in Europe for iSPERSE, a PDDS based on proprietary cationic salt formulations which can accommodate high drug loads and large drug molecules in highly dispersible particles, in a manner claimed to be both robust and flexible enough to accommodate multi-drug formulations. The advantage of iSPERSE is that it has shown superior delivery capabilities compared with conventional dry powder technologies that use lactose blending or low-density particles.

Emerging markets: major new opportunities
Such efforts are likely to be rewarded, given the large number of blockbuster respiratory products going off-patent and growing demand in the developing world. In Latin America, for example, COPD deaths have risen by 65% in the last decade, while figures indicate 12 million people are affected by the disease in India. In China, chronic respiratory diseases have become the second leading cause of death.
The generic capsule-based dry powder inhaler (DPI) segment in developing markets shows considerable promise, and demand is rising. However, when it comes to these products, patients in developing markets have not been best served by the strategies of major pharmaceutical companies in the US and Europe, which have developed DPIs customized exclusively for one specific active pharmaceutical ingredient (API).

KIMES, Seoul, 15-18 March 2018

The gold standard in point-of-care HbA1c testing