Healthcare, like other services, requires getting appropriate expertise to the place where it is needed at the right time. Requirements like these become critical when a patient faces a sudden and unpredictable life-threatening condition. The latter is a near-routine occurrence in a hospital’s intensive care unit (ICU). Still, a host of factors make it impossible for clinicians to be present at every point in the ICU, all the time.
Early acceptance of robotic telepresence
ICU robots, one of the latest applications in the emerging field of ‘robotic telepresence’, seek to address these shortcomings. The use of ICU robots, also referred to as teleoperated medical devices, is growing rapidly as a supplement to patient care in the ICU. Even in the technology’s early stages, healthcare providers were overwhelmingly convinced of its potential. In September 2012, for example, a survey of over 10,000 ICU robotic interventions, published in the journal ‘Telemedicine Journal and e-Health’, found that 100 percent of practitioners considered the robot to improve both patient care and patient satisfaction.
Autonomous, optimised for ICU, hospital environment
ICU robots essentially provide access for physicians and other specialists to implement a variety of medical procedures round-the-clock, while reducing delays for difficult admissions or procedures.
The robots can be pre-programmed to drive on their own around an ICU, or this mode can be overridden and the robot controlled by an individual via a keyboard or joystick, whether located on the premises, at a nearby facility or thousands of kilometres away.
The robotic sensors are optimized to perform in a hospital environment, enabling the robot to identify and avoid things like IV lines, cables and glass doors.
Plug-and-play for medical devices
The robot itself contains combinations of display types, microphones, speakers and cameras; these have pan-tilt and zoom capabilities, and are powerful and manoeuvrable enough to permit physicians to view fine details and listen to the smallest sounds.
Typical accessories in an ICU robot include an integrated electronic stethoscope to allow physicians to listen remotely to heart and lung sounds using earbuds. However, most Class II medical devices can be plugged into the robot, which streams data back in real time. On the other side, robots can also access digitized medical records of patients.
Recent innovations include a smartphone application, enabling physicians to access the robot’s camera. Another is ‘point and click’ navigation, by virtue of which a user can simply click somewhere on a map of the hospital and the robot gets itself there.
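The ‘point and click’ workflow can be pictured as a shortest-path search over a map of free and occupied floor space. The sketch below is purely illustrative, not any vendor’s implementation: the grid encoding, function name and toy ward map are all assumptions, and breadth-first search stands in for the more sophisticated planners real robots use.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a toy occupancy grid.

    grid: list of strings, '.' = free floor, '#' = obstacle (wall, bed).
    start, goal: (row, col) tuples. Returns the list of cells from
    start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parent links backwards to reconstruct the route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A toy ward map: the robot routes around the obstacle to the clicked cell.
ward = ["....",
        ".##.",
        "...."]
route = plan_path(ward, (0, 0), (2, 3))
```

Commercial systems work with far richer maps and dynamic obstacle avoidance, but the planning problem is the same: translate a clicked destination into a collision-free route.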
UCLA pioneers ICU robot
The history of ICU robotics dates to 2005, when the University of California at Los Angeles (UCLA) Medical Center became the world’s first hospital to introduce a robot in its neurosurgery intensive care unit under a US military-funded pilot project. The UCLA pilot saw intensivists (clinicians specialized in the care of critically ill patients) monitoring patients from their homes and offices.
The robot was RP-6, developed by California-based InTouch, a company known for its ‘auto-drive’ robotics technology used in defence and public safety. Controlled by a webcam and joystick over a broadband connection, the 65 inch (165 cm) wheeled robot boasted an 8-hour runtime from a single charge. From 2006 onwards, InTouch offered hospitals an option to rent the RP-6 for USD 4,000 a month, or buy it outright for USD 120,000. Its earliest customers included Detroit Medical Center and Baltimore’s Sinai Hospital.
The iRobot-InTouch Health Alliance
Meanwhile, another US company iRobot (vendor of the robotic household vacuum, Roomba) set up a Healthcare Robotics division in 2009.
In 2011, iRobot and InTouch Health announced an alliance targeting healthcare. The next year they unveiled the RP-VITA (Remote Presence Virtual + Independent Telemedicine Assistant), a robot which went beyond simply providing remote interactive capability between a clinician and patients, adding a hugely enhanced navigation capability based on sophisticated mapping, obstacle detection and avoidance technologies tailored to a hospital environment. Its aim was to free the clinician for clinical tasks.
The most revolutionary capability of RP-VITA was autonomous navigation, which was submitted to the US Food and Drug Administration (FDA) for 510(k) approval. In January 2013, the FDA cleared RP-VITA, making it the first autonomously navigating telepresence robot in healthcare, with clearance for use before, during and after surgery, and for cardiovascular, neurological, prenatal, psychological and critical care.
Demand driven by range of factors
The key drivers of demand for ICU robots today include time factors (urgency in ICU cases) and access (unavailability of ICU expertise) in remote areas. Both these are compounded by staff shortages.
There are fewer than 6,000 practising intensivists in the United States today, and more than 5 million patients are admitted to ICUs annually. A few years ago, Teresa Rincon, chair of the Tele-ICU Committee of the Society of Critical Care Medicine (SCCM), noted that the number of intensivists in the US was “not enough for each hospital to have one.” Indeed, it is estimated that only about 37 percent of ICU patients in the US receive intensivist care, although the presence of trained intensivists in the ICU correlates with better outcomes and decreased length of stay – both in the ICU and in the hospital.
The challenge of coma
In terms of urgency, the SCCM notes that up to 58% of emergency department admissions in the US result in an ICU admission.
Following admission, one of the major drivers of demand for ICU robots is coma. The reliable assessment of comatose patients is always critical. A hospital needs to quickly identify clinical status changes in order to determine and implement appropriate interventions.
In January 2017, the prestigious Mayo Clinic published results from a 15-month study of 100 patients, which is reported as the first to look specifically at telemedicine in assessing patients in coma. The results suggest that patients with depressed levels of consciousness can be assessed reliably through telemedicine.
Another urgent complication is delirium. Delirium incidence has been estimated at over 80% in critically ill patients. This is accompanied by a threefold increase in mortality risk, according to an oft-cited study in an April 2004 issue of the ‘Journal of the American Medical Association’.
Medical emergencies like coma and delirium require the presence of highly qualified clinicians, but as discussed previously, real-life constraints limit their availability round-the-clock.
Access is another crucial consideration. Most hospitals simply lack the patient volume to employ full-time intensivists in fields like neonatology, while their availability is limited for the same reason in remote rural locations.
The first attempts to address such challenges were centred on telemedicine or Tele-ICU care, involving continuous surveillance and interactive care by offsite clinicians. This was achieved by video observation of the patient and interrogation of equipment, along with instructions conveyed to other ICU staff.
Although more studies are needed, there is evidence of an association of the Tele-ICU with lower mortality and shorter length of stay in both the ICU as well as the hospital. Another benefit is that a Tele-ICU enables stricter adherence to guidelines.
US leads the way
Europe was a relative latecomer to ICU telemedicine, with a near-total focus on teleconsultation and almost-total reliance on the US experience.
For example, Britain’s NHS refers extensively to US studies on ICU telemedicine in its own Technology Enabled Care Services (TECS) Evidence Database, while the University of Pittsburgh Medical Center has opened a Tele-ICU centre in Italy, which allows US physicians to perform remote consults for Italian ICU patients.
From telemedicine to robotics: business model turned around
In many senses, ICU robotics has been a natural successor to the Tele-ICU, albeit with a significant reversal of its operating model.
The Tele-ICU functions centrally. Rooms are hard-wired with high-resolution cameras and transmit data to a remote command centre staffed by an intensivist (the tele-intensivist). The intensivist, who typically covers multiple ICUs, has access to the same clinical information (e.g. vital signs, lab values, notes, physician orders) as the bedside team of nurses, respiratory therapists and non-ICU physicians, and transfers instructions to them via a two-way communication link.
Robotics, driven by advances in technology and mobility, has made it possible for the Tele-ICU care model to become decentralized. The ICU robot is controlled wirelessly by the tele-intensivist, who is freed from a dedicated command centre and can indeed be just about anywhere. The robot moves from room to room, examining patients based on instructions from the intensivist and interacting as required with staff. This interaction is now far more efficient, since it occurs only after the intensivist has specified the procedures to be performed on a patient.
The cost factor
ICU robots also seem to address another major limitation of the Tele-ICU, namely cost. Most studies of the Tele-ICU have found that though the technologies deployed were adequate, they were also much too expensive.
In the US, some hospitals collided with reality, quickly and harshly, “removing tele-ICUs after outcomes failed to justify the costs.” A December 2009 study in the prestigious ‘Journal of the American Medical Association’ also questioned a key maxim of the Tele-ICU, pointing to evidence that remote monitoring of patients in ICUs was not associated with an overall improvement in the risk of death or length of stay in the ICU or hospital.
Perspectives have been similar in Europe. For example, a Dutch study published in 2011 in the ‘Netherlands Journal of Critical Care’ concluded that hospitals were unlikely to see the “enormous” investment entailed by a tele-ICU as being cost-effective. Concerns about Tele-ICUs were also echoed the same year in Canada, where critical care clinicians, writing in the ‘Journal of Critical Care’ expressed scepticism regarding the ability of a Tele-ICU to address challenges of human resource limitation or even deliver quality care.
The personal touch
While a conclusive answer to the question of the cost-effectiveness of ICU robots will require a larger user base, one powerful advantage seems to be the ability to target the eventual subject of the healthcare process, the patient. According to Paul Vespa, a neurosurgeon at UCLA’s David Geffen School of Medicine, patients “interact with the robot as if it is a person.”
Steps to realize full potential
Some of the factors which need to be addressed before ICU robots can grow in numbers were identified in a December 2013 ‘Journal of Critical Care’ article by the Center for Comprehensive Access and Delivery Research and Evaluation in Iowa City, US.
These consist of formal training and orientation; identification of roles, responsibilities and expectations; needs assessment; and administrative support and organization. Failure to adopt these, say the authors, will mean ICU robots may not see their full potential realized.
New service and business models are challenging the traditional role of a hospital – as a place where sick people are taken to get better. Instead, a growing body of evidence suggests that the key mission of future hospitals will be to help people to avoid falling ill, and to manage those that do in fundamentally different ways than at present. Such processes are principally driven by economic pressures and the promise of new technologies. However, patients are also playing a major role.
Patients more proactive
It has indeed been apparent for some time that patients are far less passive than they were in the past. In Britain, a study by the King’s Fund think-tank found patients wished to be far more involved in healthcare decisions. In addition, the study reported that patient satisfaction depended not just on medical outcomes, but also on being treated with dignity and respect.
Emerging technologies are seen as one way to enhance the patient experience, and several popular apps show how rapidly patients have moved to centre-stage. In the US, Heal, a smartphone app, lets patients search for physicians in a manner similar to Uber’s connecting passengers to drivers. Zocdoc, another tool for finding doctors, has added an artificial intelligence-powered Insurance Checker feature which lets patients select and verify insurance information as they are booking appointments. An app called Welloh goes beyond doctors to give users information about hospitals, pharmacies, care centres and other facilities. Clinical trials are also opening up to volunteers, thanks to an app called TrialReach, which helps patients find open clinical trials for specific medical conditions.
These new health access paradigms resonate strongly with younger patients. According to a report from Salesforce, over 70 percent want their physicians to adopt mobile health applications.
Apple integrating health apps
Evidence of the opportunities arising from enhanced patient participation comes from Apple, which plans to bring the current clutter of healthcare apps under one roof. Its new Health Records feature will allow users to see their records of allergies, immunizations, lab results, medications and other conditions in a single window, and will send notifications when any data is updated.
One of the most promising and best known tools in the emerging technology arsenal is computing giant IBM’s Watson, which deploys artificial intelligence (AI) to collect and interpret vast amounts of data from medical literature in order to advise on best treatment options. Scores of other tools provide personalized treatment plans for cancer patients using the genetic background of their tumours, accompanied by analysis from tens of thousands of other, similar cases.
These kinds of innovations count on assimilating and interpreting what has come to be known as Big Data. The sources for this data, whose volume continues to grow by leaps and bounds, are many. They include clinical studies, prescriptions, radiological images and a host of other healthcare information.
The Internet of Things
One new source of data is from the Internet of Things. Connected medical devices such as insulin pumps and pacemakers pick up signals and automatically transmit information to networked computers, which allow physicians (and patients) to perform real-time monitoring.
An array of wearable devices to track vital signs are another fast-growing source of medical data. On an individual basis, this may not amount to much. However, when the data is provided by millions of users, its size becomes staggering, as does its potential for providing insights.
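As a toy illustration of why scale matters, the snippet below aggregates invented per-user wearable readings into a population-level summary. The data, function name and statistics are assumptions chosen only to show the aggregation pattern, not any real analytics pipeline.

```python
from statistics import median

def population_summary(per_user_readings):
    """Collapse per-user heart-rate streams into one value per user,
    then into a population-level statistic -- the kind of aggregation
    that turns millions of small wearable streams into insight.
    """
    # One summary number per user: their average reading.
    per_user_avg = {u: sum(r) / len(r) for u, r in per_user_readings.items()}
    values = sorted(per_user_avg.values())
    return {"users": len(values), "median_avg_hr": median(values)}

# Invented sample data: three users with a handful of readings each.
data = {"u1": [60, 62, 64], "u2": [70, 72], "u3": [80, 84, 88]}
summary = population_summary(data)
```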
Such a burgeoning mass of data is being generated asynchronously, processed and stored by different machines on multiple platforms. Making it usable is hardly simple.
One promising answer to such a challenge lies in cloud computing technology, which has dramatically reduced the cost of data storage, as well as the time required to process and transfer the data to multiple users at different locations. For patients needing to visit a lot of specialists, the accessibility of their data from a variety of locations can be indispensable.
The Electronic Health Record
One of cloud computing’s biggest areas of impact may be the electronic health record (EHR), one of whose goals was in fact to address the above challenge – patient data access in real time by different specialists.
The EHR has generally failed to meet expectations (and over-expectations). In both Europe and the US, the EHR’s key technical/operational limitation was that clinical and financial data could not be easily shared and exchanged among providers – as many had assumed or otherwise hoped for. In the US, EHRs have generally also failed to meet levels of reporting that support the ‘meaningful use’ requirements of pay-for-performance programmes.
Cloud computing seems likely to give a new lease of life to the EHR. Server-based EHRs always run the risk of system failure, which would prevent access to critical patient data until the server has been restored. Cloud-based EHRs are not exposed to such a scenario. In addition, cloud services encrypt data and provide security safeguards. Cloud-based EHRs also reduce entry barriers to adoption by transferring responsibility for confidential patient information to specialized vendors.
Design and hospital re-purposing
The impact of such developments is reaching into the very design of a hospital. Christopher Shaw, Chair of a professional organization called Architects for Health and founder of the design firm Medical Architecture, believes there is a growing mismatch between the physical infrastructure of a hospital and the nature of activities expected to be required over the coming decades.
One key question here is the future of hospital buildings – whether to renovate and incrementally redesign structures or start afresh. Indeed, even as popular imagination associates future hospitals with robotic doctors, another equally beguiling scenario consists of individualized medicine, extending to some forms of surgery, carried out at home.
The reality may lie in between, at least for the foreseeable future. One of the most likely scenarios is a hub-and-spoke hospital model. Its inner tier would consist of academic medical centres serving larger populations and focused on acute care. The middle tier would be an intermediate-care hospital, located in smaller cities or larger towns and providing longer-term rehabilitation and nursing support. The outer tier would comprise polyclinics for outpatient diagnostics and elective care, taking referrals from primary care physicians. At the periphery would be the patient’s home, with telemedicine treatment, and possibly some form of tele-surgery assisted by paramedical professionals on the scene. Some of the latter may well be robots.
After many false starts, telehealth technology is now on the edge of take-off, helping allocate care to patients more efficiently by eliminating hospital visits for those who do not need access to concentrated multi-disciplinary expertise.
Telehealth is also seen as a means to bring patients back more quickly to their homes. Indeed, there is a considerable body of evidence which suggests that the sooner patients begin recovery at home, the more quickly they heal.
Telehealth is not only being pushed by technology but also pulled by economics. In the US, for example, healthcare providers of diabetic patient care have to contend with value-based measures. As a result, they are becoming increasingly dependent on real-time data from remote glucose monitors. Telehealth allows patients to be more engaged, and participate with physicians in ensuring better outcomes, by adhering to insulin or other medications.
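The monitoring loop described above can be sketched in a few lines. This is a hedged illustration only: the function, the sample stream and the 70-180 mg/dL band are assumptions chosen for demonstration, not clinical logic from any real telehealth product.

```python
def flag_alerts(readings, low=70, high=180):
    """Scan a stream of (timestamp, mg/dL) glucose readings and
    return those outside an illustrative target range.

    The 70-180 mg/dL band is a common example range, not clinical
    guidance; a real monitor would apply patient-specific limits
    set by the care team.
    """
    return [(t, v) for t, v in readings if v < low or v > high]

# Invented sample stream from a remote glucose monitor.
stream = [("08:00", 95), ("09:00", 190), ("10:00", 130), ("11:00", 62)]
alerts = flag_alerts(stream)
# alerts -> [("09:00", 190), ("11:00", 62)]
```

In a real deployment the flagged readings would be pushed to both the patient and the physician, closing the engagement loop the paragraph describes.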
Emerging models – examples
The challenge facing the emerging healthcare model lies in the best way to integrate resources, delivery and support mechanisms, and the need to avoid duplication. However, there are encouraging signs from several parts of the world.
In the US, the Westchester Medical Center Health Network (WMC Health) is an example of the emerging hub-and-spoke hospital model. The core of the system is a 1,500-bed facility headquartered in Valhalla, New York, which is the only facility for complex interventions and procedures. Buttressing this are six intermediate hospitals, as well as several polyclinics and medical campuses. The system covers a population of more than 3 million people spread over 15,000 square kilometres.
In Europe, there are several efforts to redefine hospital design. In a variation of hub and spoke, Guy’s Hospital at London has developed its cancer centre as a stack of ‘villages’, one atop another, with each providing a different service (radiotherapy, chemotherapy, etc.).
Certain hospitals have sought to move in the opposite direction, bringing a full range of services to patients in one room or area. In Veldhoven, the Netherlands, a new Woman-Mother-Child Center at Maxima Hospital provides prenatal, delivery, postnatal and breastfeeding support services from one room.
UMC+ in Maastricht, NL
Some of the most radical efforts to address the redefinition of the hospital are being explored in the Netherlands, at Medical University Centre+ (UMC+) in Maastricht.
In late 2009, the departments of Dermatology and Orthopedics at UMC+ started out on separate tracks of what is called ‘design thinking’. Each department independently developed and implemented new care and financing systems, closely adapted to what they saw as the real needs of their patients, and combining specialities, which had been traditionally separated.
The key mission at UMC+ is to avoid pushing strategy down to individual departments, which have highly specific patient groups, processes and technologies, and instead to build strategy bottom-up, with inputs from across the staffing chain.
Nevertheless, the aim of design thinking is also to generate organizational change. Over time, several other departments began applying the methodologies pioneered by Dermatology and Orthopedics, creating a new hospital healthcare model.
Over time, the UMC+ model is transforming healthcare from a focus on rehabilitation towards preventive public health and development. The shift has also changed the role of the Board. Directors no longer set out strategies, but enable communication between different departments. The Board aims to ensure that different departments do not reinvent the wheel, and instead continuously develop and implement internal best practices.
The challenge of demographics
Nevertheless, many challenges still lie ahead. While Internet- and smartphone-friendly millennials are clearly going to benefit from new hospital care models, the bulk of hospital and healthcare needs for the next decade or two lies with the elderly. According to a Partners HealthCare study in 2016, few seniors obtain information or accomplish healthcare-related tasks online. Only 16 percent of seniors said they used the Internet to obtain health information, while just 7 percent contacted physicians online.
Primum non nocere (first, do no harm) remains a basic tenet of medical practice. Unfortunately, the complexities of modern medicine, the large pool of available medications and the multiplication of technical procedures, combined with the frequent difficulty of reaching definite diagnoses and the high number of medical professionals taking care of a single patient, have resulted in a growing number of medical errors, a significant part of which prove fatal for the patient.
Data on the number of deaths caused by medical errors are not readily obtainable; nevertheless, a number of recent studies in the US have reported figures greater than 200,000 deaths per year. For example, a patient safety expert team from Johns Hopkins University has calculated that over 250,000 deaths in the US are caused by medical error, based on an analysis of medical death rate data over an eight-year period. This figure, published in May 2016 in the BMJ, places medical error as the third highest cause of death, accounting for 10% of all US deaths. For the healthcare industry, this translates into about six potentially preventable deaths per year per US hospital, definitely not good statistics.
The situation is somewhat similar in Europe, even if there are no official figures at the EU level: Eurostat does not list medical error as a possible cause of death, since its statistics – like those of the US CDC – rely on the medical information contained on death certificates and on the coding of causes of death according to the WHO International Classification of Diseases (ICD). Results from German studies on patient safety show that close to 20,000 deaths are caused by preventable adverse events in the country’s hospitals.
These deaths cover a wide range of preventable causes, including hospital-acquired infections, embolisms, surgical errors, delay in diagnosis (especially for pediatric patients) and misdiagnosis, the latter probably ranking quite high even if very difficult to detect in research. Apart from deaths, there is a much larger number of cases, up to 20-fold higher, in which people suffer serious adverse effects, sometimes for the rest of their lives. In addition to the individual harm incurred, there is also a high cost for society that includes additional healthcare expenditure, social costs and loss of economic capacity. Evidence shows that up to 70% of the harm caused by medical errors can be prevented through comprehensive, systematic approaches to patient safety. At the hospital level, there is an urgent need for action, not least by physicians: they should be the first to recognize that every single death caused by a preventable adverse event is one too many.
Conventional or B-mode ultrasound has been used as a diagnostic imaging tool for over four decades. Over the last few years, however, ultrasound systems have witnessed a blizzard of developments in their underlying technology. This has catalysed a significant change in the patterns of ultrasound usage vis-a-vis other, older imaging modalities, especially in terms of concerns about the latter – for example, radiation risk in X-rays and computer tomography (CT), and cost for both CT and magnetic resonance imaging (MRI).
The ultrasound market is largely driven by innovations in underlying technologies and more sophisticated software algorithms, which allow manufacturers to offer smaller, more powerful and complex systems.
Key developments include an acceleration in processing speed and enhancement in the quality of diagnostic images, coupled with advances in contrast-enhanced imaging and precision in the timing of image capture. This has been accompanied by a sharp improvement in signal-to-noise ratios in the final data, optimizing spatial, contrast and temporal resolution, and including rotatable views for better visualization.
GE’s cSound technology, for example, offers CT-level image quality based on advanced algorithms that capture much larger amounts of data than previously possible (by some estimates, about a DVD’s worth of data per second). The technology also makes pixel-by-pixel selections of the most precise information to display.
Developments in transducers, beam formation
Ultrasound has also made quantum leaps in factors such as transducer sensitivity and beam formation. For example, line-by-line imaging in beamformers has been replaced in some systems by large zone acquisitions, allowing users to view examinations in greyscale and colour Doppler. Meanwhile, retrospective imaging makes it possible to process raw data multiple times, while retention of channel domain data allows for patient-specific imaging.
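The principle underlying these beamformers, whether line-by-line or zone-based, is delay-and-sum: echoes received by each transducer element are time-shifted according to the element’s distance from a focal point, then added so that reflections from that point reinforce one another while off-focus signals tend to cancel. The following is a deliberately simplified sketch, not a production beamformer: it uses integer-sample delays, no apodization or interpolation, and all names and parameter values are illustrative assumptions.

```python
import math

def delay_and_sum(element_xs, focus, c, fs, channels):
    """Delay-and-sum receive beamforming, radically simplified.

    element_xs: x-coordinates (m) of the transducer elements.
    focus:      (x, z) focal point in metres.
    c:          speed of sound (m/s); fs: sample rate (Hz).
    channels:   one list of received samples per element.
    """
    fx, fz = focus
    # Distance from each element to the focal point.
    dists = [math.hypot(ex - fx, fz) for ex in element_xs]
    ref = min(dists)
    total = 0.0
    for d, ch in zip(dists, channels):
        # Extra travel time relative to the nearest element,
        # rounded to a whole-sample delay.
        lag = round((d - ref) / c * fs)
        if lag < len(ch):
            total += ch[lag]
    return total

# Three elements, focal point 30 mm deep under the centre element.
elements = [0.0, 0.01, 0.02]
channels = [[0.0] * 60 for _ in elements]
channels[0][42] = 1.0   # echo arrives ~42 samples later at outer elements
channels[1][0] = 1.0
channels[2][42] = 1.0
signal = delay_and_sum(elements, (0.01, 0.03), 1540.0, 40e6, channels)
```

With these synthetic channels, the three echoes align after delay correction and sum coherently to 3.0; real systems do the same over thousands of focal points to build an image.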
As a result of these advances, clinicians are able to use ultrasound to image blood perfusion and blood flow in vessels with diameters of 2 mm and less, with small vessel beds displayed via Doppler flow false-colour 3-D or greyscale reconstructions. The result is better assessment of organ perfusion, which has traditionally been difficult on ultrasound.
Take-up of ultrasound has also been recently boosted by a growing commodification trend. Certain categories of ultrasound have become relatively inexpensive, mobile and less demanding of power. Mobility-related innovations include portable hand-held devices, and more recently, the world’s first wireless transducer. Even some low-end machines are now enabled for full bi-directional communication with electronic medical records.
As healthcare reforms and budgetary pressures favour use of cost-effective solutions, this has led to especially sharp growth in the use of low- and mid-range ultrasound systems. It is now commonplace, for example, to see ultrasound systems in a recovery room, next to hospital beds, or equipping NGOs at health outreach projects in developing countries.
For many hospitals, this kind of product/technology mix makes sense, since not all patients require the sophisticated features offered by high end machines, while their smaller, inexpensive counterparts provide solutions for an everyday challenge faced by most hospitals – workflow bottlenecks.
High-end remains motor for new applications
At the other end, the high-end segment is leading innovation not only in ultrasound technologies, but driving the overall medical imaging market, too. Despite their cost, the advanced features of premium systems have moved ultrasound well beyond traditional applications such as ob/gyn to interventional cardiology and internal medicine. Several ER clinicians, for instance, now routinely utilize ultrasound for echocardiograms and abdominal imaging, while radiologists and surgeons use it to guide needle placement or perform bone sonometry.
Some cutting-edge areas – such as matrix transducers – remain ensconced in the premium category. Matrix transducers have direct relevance to two fast-emerging applications, namely volumetric ultrasound and 3-D/4-D applications.
Given below is an overview of key recent developments in ultrasound systems.
Mobility and Ergonomics
Vendors are addressing ergonomics and mobility in order to differentiate their systems and grow user volumes. Some surveys suggest that over three out of four ultrasound users experience work-related pain, with a fifth of these suffering a career-ending injury.
New-generation ultrasound systems stand out in terms of design. Most are noiseless, permitting sonographers to minimize distraction and focus on the exam, with settings customized and organized according to clinical preferences.
Some have slanted bodies to prevent users hitting their knees or feet on the machine, with keyboards that can be raised or lowered depending on user height, probes that are shaped to the human palm and rotatable LCD monitors for sharing the display with colleagues. Other innovations include the possibility of use in both sitting and standing positions, with memory features to accommodate different users.
Some recent ultrasound machines have tablet-sized, touchscreen-based interfaces, which significantly reduce the reach and the number of steps (in some cases by 15-20 percent) required to start and complete an exam, enabling faster workflow. Touchscreens allow users to tap to start functions, pinch and drag to zoom in and out, and swipe to expand the image. Some vendors offer exam presets, along with enhanced functions such as continuous wave Doppler.
As discussed below, there is an increase in the use of ultrasound as an alternative to CT and MRI in many point-of-care (PoC) settings. One of the reasons for the trend is mobility as well as increasing miniaturization. Smaller ultrasound machines provide solutions to concerns about cables or wheeling bulky machines around patient rooms, and address tight space demands in key hospital settings such as the operating room. Compact models can be transported by being wheeled or atop a cart.
In some cases, smaller portable machines can also be moved between departments within a hospital or clinic – on a user’s back.
Enhanced quality drives ultrasound to point of care
Ultrasound images today are available with far-higher resolutions than in the early 2000s, when most physicians were used to pictures being fuzzy. One of the key reasons is enhancement in real-time computer processing of images.
Superior image quality has also driven ultrasound to the point-of-care (PoC) setting – both for diagnostic and interventional procedures. PoC ultrasound is now widely available in operating theatres and emergency rooms. Between 2010 and 2013, anesthesiologists are reported to have doubled their use of ultrasound procedures, and ultrasound is also far more common today in certain interventional procedures such as image-guided biopsies and ablations, previously dominated by CT and MRI.
Volumetric ultrasound development
Volumetric ultrasound allows superior characterizing of tissue and the performance of procedures with far greater accuracy.
Ultrasound was previously able to capture only a single imaging plane, but it can now acquire volumes. This is thanks to recently available transducers that capture real-time volumes of tissue and allow imaging in multiple planes, such as the transverse and sagittal. For instance, transducers can detect the altered speed of high-frequency sound waves through adipose layers versus other tissue, making the system aware of increased adipose content.
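As a rough illustration of why a volume acquisition yields multiple planes "for free", the sketch below (in Python with NumPy; the array sizes and names are hypothetical, not from any vendor) treats the acquired volume as a 3-D grid of echo intensities and extracts a transverse and a sagittal plane by simple slicing:

```python
import numpy as np

# Hypothetical voxel volume from a single volumetric acquisition,
# indexed (depth, height, width); values stand in for echo intensities.
volume = np.random.rand(64, 128, 128)

# A transverse plane is a single slice along the depth axis...
transverse = volume[32, :, :]

# ...while a sagittal plane is a slice along the width axis of the
# same dataset; no second acquisition is needed.
sagittal = volume[:, :, 64]

print(transverse.shape)  # (128, 128)
print(sagittal.shape)    # (64, 128)
```

The point of the sketch is only that, once a volume exists in memory, any imaging plane is a re-slicing operation rather than a new scan.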
Though several new-generation transducers remain expensive, in areas where they make a difference, the added price tag is becoming justified. For instance, high-resolution matrix transducers are finding use in interventional cardiology applications such as trans-esophageal echocardiogram (TEE) and 4D imaging.
While 2-D ultrasound continues to be widely used in clinical applications, recent technological advances such as matrix transducers have enabled, and triggered interest in, 3-D and 4-D ultrasound.
3-D/4-D ultrasound offers more rapid acquisition of datasets and, consequently, improved image visualization.
4-D imaging consists of the three spatial dimensions plus the element of time. It projects a cinematographic, motion-picture view of an organ or a specific part of an organ, and is emerging as the next generation of advanced imaging.
In combination with advanced visualization functions, 4-D ultrasound aids complex surgical applications and interventional procedures. Multiplanar reconstructed (MPR) images are now available for review in the same manner as CT and MR scans.
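A 4-D dataset can be pictured as a time series of 3-D volumes. The hedged sketch below (Python/NumPy; frame counts and dimensions are illustrative assumptions) shows how a cine loop of one plane and an MPR-style orthogonal re-slice both fall out of the same data:

```python
import numpy as np

# A 4-D ultrasound dataset: a time series of 3-D volumes,
# indexed (frame, depth, height, width). Sizes are illustrative.
frames = np.random.rand(20, 64, 128, 128)

# "Cine" playback of one imaging plane: the same transverse slice
# extracted from every frame, yielding a 2-D movie over time.
cine_transverse = frames[:, 32, :, :]

# An MPR-style view at a single instant: an orthogonal re-slice
# of the volume captured at frame 10.
reslice_at_t10 = frames[10, :, 64, :]

print(cine_transverse.shape)  # (20, 128, 128)
print(reslice_at_t10.shape)   # (64, 128)
```

This mirrors how MPR images from CT and MR are reviewed: the reconstruction is a geometric operation on stored data, applied here to each time point.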
Leading imaging vendors already offer 4-D imaging products across all major modalities: PET/CT, MRI and ultrasound. However, 4-D ultrasound is capturing a great deal of interest in applications where ultrasound has already made a case for itself, due to cost, mobility or radiation concerns.
The close connection between 4-D and ultrasound dates back to cutting-edge efforts in the early 1980s, when a Duke University team determined that although MRI was faster, ultrasound was the closest to "achieving 3D real time acquisition." The researchers, led by Dr. Olaf von Ramm, developed a single-transmit, multiple-receive ultrasound scanner called Explososcan to increase data bandwidth.
One of the most revolutionary technologies in ultrasound is elastography, which builds on B-mode ultrasound to measure the mechanical characteristics of tissues and overlay them on the ultrasound image. This gives physicians the ability to view stiffer and softer areas inside tissue, with image quality and clinical outcomes comparable to X-ray, MRI and CT.
Elastography techniques include strain elastography and shear wave elastography (SWE). These have begun proving useful in the characterization of thyroid nodules, lymph nodes and indeterminate breast lumps, as well as in the detection of prostate cancer. None of these was achievable with conventional ultrasound.
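The physical principle behind SWE can be stated compactly: tissue stiffness (Young's modulus E) is estimated from the measured shear-wave propagation speed c via the standard simplification E ≈ 3ρc², with ρ the tissue density. A minimal sketch under that assumption (the function name and example speeds are illustrative, not from any vendor's system):

```python
def youngs_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
    """Estimate tissue stiffness E = 3 * rho * c^2, returned in kPa.

    Assumes a purely elastic, incompressible medium with a density
    close to water's -- the usual simplification in shear wave
    elastography.
    """
    e_pa = 3.0 * density_kg_m3 * shear_wave_speed_m_s ** 2
    return e_pa / 1000.0

# Stiffer tissue propagates shear waves faster, so stiffness grows
# with the square of the measured speed.
print(youngs_modulus_kpa(1.2))  # ~4.3 kPa
print(youngs_modulus_kpa(2.0))  # ~12.0 kPa
```

Because the scanner measures only the wave speed, this quadratic relationship is why small differences in measured speed translate into clinically meaningful differences in reported stiffness.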
The application that has generated the most attention is liver fibrosis staging. Biopsies are not only invasive but carry bleeding and infection risks. Elastography, which can be repeated as often as required, is seen as a way to obtain the data clinicians need to diagnose and stage liver disease without those complications. Elastography is also used to predict complications in patients with cirrhosis.
SWE in particular is also seen as a tool for earlier detection of conditions such as hepatitis C, fatty liver disease and alcoholic liver disease. Alongside laboratory studies, it offers a means to closely monitor the impact of treatment and assess whether the liver will normalize. For many hepatologists, fighting a liver condition before Stage 4 cirrhosis offers a good chance of reversibility.
SWE can also provide information on which hepatitis C patients might benefit from antiviral therapy.
From smartphone apps to AI: the future
App-based ultrasound systems have recently been showcased. These use transducers that connect to a mobile device via a USB port, paired with a downloadable app. The transducer performs data acquisition, processing and image reconstruction; the result is ultrasound functionality on a consumer-grade smartphone.
Some vendors have launched artificial intelligence systems to enhance speed and automatically extract image volume data from 3-D echo to recreate optimized diagnostic views. In cardiac echo in particular, this offers major potential by making imaging reproducible.
Nevertheless, such cutting-edge technologies are still in their infancy. Only time and user experience will determine their eventual success.