Making medical imaging fit for use

Recent years have witnessed an explosion in the number of medical images. Growth has been fuelled by technology, regulations and demographic trends. According to one estimate cited by GE, as much as 90 percent of all healthcare data comes from medical imaging.
Technology has made ever-higher resolution imaging possible, and this has directly led to a burgeoning volume of data. Meanwhile, standard-of-care rules require that such imaging be provided across different medical departments and that the data be retained. Finally, there is the effect of demographics: an ageing population requires more healthcare, and this too has driven demand for more imaging.

Enormous quantum

After preprocessing, digital images are stored on data storage devices, then transmitted and reprocessed for their final applications.
The numbers are impressive. In settings like emergency rooms, imaging can amount to approximately 250 GB of data per patient. Radiologists often examine 200-plus cases a day. A ‘pan scan’ CT of a trauma patient can render 4,000 images.
Given this enormous quantum of digital data fed to an image processor, sophisticated compression techniques have been developed to reduce image size, while offering a high level of fault tolerance and sufficiently good quality in the decoded image at the end of the process.
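As a rough illustration of the principle, the sketch below compresses a single image slice by keeping only the largest wavelet coefficients before decoding; the PyWavelets library, the wavelet choice and the 5 percent retention rate are assumptions made for the example, not a description of any clinical codec.

```python
import numpy as np
import pywt  # PyWavelets

def compress_slice(img, wavelet="db4", level=3, keep=0.05):
    """Lossy compression sketch: keep only the largest `keep` fraction
    of wavelet coefficients, zero the rest, and decode."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    cutoff = np.quantile(np.abs(arr), 1.0 - keep)      # magnitude threshold
    arr = pywt.threshold(arr, cutoff, mode="hard")     # discard small coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)              # decoded image

# Stand-in data: a random 256 x 256 "slice" in place of real imaging.
decoded = compress_slice(np.random.rand(256, 256))
```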

Medicine’s Holy Grail

Many see machine-assisted analysis of imaging data as the Holy Grail of medicine, unlocking vital information about organ function and disease states. Such insights, they say, can benefit not only a single patient but everyone affected by a medical condition. For their proponents, game-changing mathematical tools, in the shape of increasingly sophisticated quantitative pixel-based analysis, advanced deep learning analytics and artificial intelligence, will pave the way for dramatic advances in the effectiveness of healthcare.
Indeed, there is no shortage of research papers, proofs-of-concept and pilot projects demonstrating how data-based screening algorithms can highlight the subtlest of changes in a nodule or a lesion. Such algorithms learn over time and become better at what they do, promising even greater speed and confidence in the future.

Powering up Big Data

The above processes have been driven by the steady acceleration, over the years, in raw computer processing power. While training an algorithm at the turn of the century took 2-3 months, the same results can now be achieved and iterated within minutes.
Big Data-based pattern analysis has demonstrated the capacity to detect areas of opacities, honeycombing, reticular densities and fibrosis, and thereby provide a list of differentials, using computer-aided diagnostic tools. These have been backed up with dynamic contrast enhancement (DCE) texture analysis or 3D multi-planar reconstructions on highly targeted data subsets, instead of the time-consuming effort of interrogating and querying a complete imaging dataset.
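By way of illustration, multi-planar views of a targeted sub-volume can be extracted with plain array slicing rather than querying the full dataset; the sketch below is minimal, and the volume shape and region-of-interest coordinates are invented for the example.

```python
import numpy as np

# Hypothetical CT volume stored as an axial stack of shape (z, y, x).
volume = np.random.rand(200, 512, 512)

# Highly targeted subset around a suspected finding (coordinates invented).
roi = volume[80:120, 200:330, 180:300]

# Multi-planar reconstruction: orthogonal views through the ROI centre.
axial = roi[roi.shape[0] // 2, :, :]
coronal = roi[:, roi.shape[1] // 2, :]
sagittal = roi[:, :, roi.shape[2] // 2]
```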

Data quality and content

In spite of such promise, many problems need to be overcome before medical imaging data can be used to its full potential.
Traditionally, access has been a major barrier. Large healthcare organizations, which generate the bulk of imaging data, tend to keep it siloed in departmental picture archiving and communications systems (PACS).
Analysis is also handicapped by data quality. Medical imaging, as mentioned above, covers a gamut of areas from data acquisition and compression to transmission, enhancement and segmentation.

Denoising and reconstruction

The biggest pre-processing step consists of cleaning up the data by denoising and reconstruction, to eliminate undesirable source signals and highlight the useful ones.
Denoising is a central challenge for all medical imaging modalities, be it ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) or positron emission tomography (PET).
Typical noise sources include electronic noise, reverberation artefacts from multi-path reflections, and echoes from tissue structures in applications such as blood flow estimation, perfusion or targeted molecular imaging.

Cleaning data in ultrasound

Medical ultrasound offers several advantages over other modalities, such as superior temporal and spatial resolution and the lack of ionising radiation risk. It is also very often more convenient.
Nevertheless, a high prevalence of image artefacts (or ‘clutter’) frequently leads to demand for more expensive modalities, such as magnetic resonance imaging (MRI) or computed tomography (CT). However, as we shall see, the latter face their own denoising challenges.
Among the most frequent sources of artefact in ultrasound are off-axis scattering and multi-path reverberation. Clutter from the latter, most pronounced near a patient’s sternum and rib cage, occurs when a returning acoustic wave bounces repeatedly between a reflective tissue structure and the ultrasound transducer face.
Such artefacts obscure dynamic tissue in regions of interest.

Filtering clutter

There have been two broad approaches to clearing up data clutter in ultrasound. The first consists of classical filters, which operate only in the temporal dimension.
Clutter directly degrades imaging performance by biasing functional image measurements. Its impact is especially profound in critical areas such as displacement estimation in elastography and blood flow imaging, or myocardial strain in cardiac imaging. This adversely affects diagnosis of cardiac function through motion tracking or visual inspection of imaging data.
Although efforts continue to improve interpolation from artefact-free regions and modelling to infer heart motion while compensating for image degradation from reverberation artefacts, it is not possible to interpolate abnormal myocardial motion in diseased hearts from statistical models alone. Filtering has therefore been the preferred choice to suppress image artefacts and allow accurate motion tracking across the entire myocardium.
In medical ultrasound, filtering strategies for clutter suppression have typically involved linear decomposition of the received echo signals. This approach re-expresses the original data in a new coordinate system that separates the clutter and the signal of interest along different bases. Filtering rejects the clutter bases but retains those that describe the signal of interest.
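A minimal sketch of the classical, temporal-only flavour of this idea is shown below: each pixel's slow-time signal is projected onto a low-order polynomial basis (assumed to capture slowly moving tissue clutter) and that projection is subtracted. The data layout (nz, nx, nt) and the polynomial order are assumptions made for the example.

```python
import numpy as np

def poly_regression_clutter_filter(iq, order=2):
    """Project each pixel's slow-time signal onto a low-order polynomial
    basis (the assumed clutter subspace) and keep only the residual."""
    nz, nx, nt = iq.shape
    t = np.linspace(-1.0, 1.0, nt)
    basis = np.polynomial.legendre.legvander(t, order)   # (nt, order + 1)
    q, _ = np.linalg.qr(basis)                           # orthonormal clutter basis
    x = iq.reshape(-1, nt)                               # (nz * nx, nt)
    clutter = (x @ q) @ q.T                              # projection onto clutter subspace
    return (x - clutter).reshape(nz, nx, nt)

# Stand-in ensemble: 64 x 64 pixels, 16 slow-time samples of complex IQ data.
iq = np.random.randn(64, 64, 16) + 1j * np.random.randn(64, 64, 16)
filtered = poly_regression_clutter_filter(iq)
```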

SVD: Adding a spatial dimension

Newer techniques add a spatial element to provide a four-dimensional approach (three spatial dimensions, with time as the fourth).
The best example of this is singular value decomposition (SVD), which leverages differences in the spatio-temporal coherence of tissue and blood motion. Along with the wavelet transform, SVD became one of the most useful linear algebra tools for image compression. SVD is essentially a factorization technique that decomposes any matrix into orthogonal bases weighted by singular values; keeping only the largest singular values yields a compact low-rank approximation of the original data.
The impact of SVD has been profound in techniques such as ultrafast ultrasound imaging, which is based on the unfocused transmission of plane or diverging waves. The larger, synchronous imaging datasets this produces greatly improve the discrimination between tissue and blood motion in Doppler imaging.
SVD has been shown to be far superior to traditional temporal clutter rejection filters in terms of contrast-to-noise ratio and removal of tissue or probe motion artefacts. Tests have revealed microvascular blood flow networks that were previously undetectable. In the clinical field, this has led to dramatic improvements in the application of high-tech imaging in areas such as the neonatal brain.
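The core of a spatio-temporal SVD clutter filter can be sketched as follows, assuming complex beamformed data arranged as (nz, nx, nt); the number of singular components discarded as tissue or noise is a tuning choice, not a universal constant.

```python
import numpy as np

def svd_clutter_filter(iq, n_tissue=2, n_noise=0):
    """Casorati-matrix SVD filter: discard the first singular components
    (spatially coherent, slowly varying tissue) and optionally the last
    ones (noise), keeping the middle components attributed to blood."""
    nz, nx, nt = iq.shape
    casorati = iq.reshape(nz * nx, nt)                 # space x slow-time
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    keep = slice(n_tissue, nt - n_noise if n_noise else None)
    blood = (u[:, keep] * s[keep]) @ vh[keep, :]
    return blood.reshape(nz, nx, nt)

# Stand-in ultrafast ensemble: 64 x 64 pixels, 100 slow-time frames.
iq = np.random.randn(64, 64, 100) + 1j * np.random.randn(64, 64, 100)
blood_signal = svd_clutter_filter(iq, n_tissue=5, n_noise=10)
```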

The case of CT images

CT offers a different set of challenges. Firstly, the process of denoising and reconstructing CT images depends on statistically uncertain baseline measurements, tied to factors such as radiation dose. In addition, in spite of huge advances in acquisition speed, increased signal-to-noise ratio (SNR) and superior image resolution, images are still affected by noise and artefacts.
It is always important (and difficult) to strike the right balance between noise reduction on one side and the preservation of genuine detail, such as edges, corners and other structures, on the other, so that clinically relevant image content is maintained or even enhanced.
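One way to picture this trade-off is with an edge-preserving filter whose strength can be dialled up or down; the bilateral filter below is used purely as an illustration (it is not singled out in this article), and the parameter values are arbitrary.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

# Hypothetical noisy CT slice, rescaled to [0, 1].
ct_slice = np.random.rand(256, 256)

# sigma_color steers the trade-off: small values preserve edges and
# fine structure, large values remove more noise but blur genuine detail.
gentle = denoise_bilateral(ct_slice, sigma_color=0.05, sigma_spatial=2)
strong = denoise_bilateral(ct_slice, sigma_color=0.30, sigma_spatial=5)
```
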
The practical choice of methodology poses its own challenges, since the denoising techniques themselves often provide the means of characterising the noise in CT images.
Nevertheless, a variety of algorithm-based techniques has been developed to suppress noise in CT images, each with its own merits and demerits. Broadly, these include filters, wavelet decomposition, wave atom transformation and anisotropic diffusion. Their outputs are compared using metrics that apply to any imaging modality: MSE (mean square error), SNR and PSNR (peak signal-to-noise ratio), S/MSE (signal-to-mean-square-error ratio) and MAD (mean absolute difference).
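These metrics can be computed directly from a reference image (for example a high-dose acquisition) and a denoised result, as in the sketch below; the exact dB conventions and the choice of peak value vary between papers, so the definitions here are one common reading.

```python
import numpy as np

def denoising_metrics(reference, denoised, data_range=None):
    """Common image-quality metrics for comparing denoising algorithms."""
    reference = reference.astype(float)
    err = reference - denoised.astype(float)
    mse = np.mean(err ** 2)                                    # mean square error
    mad = np.mean(np.abs(err))                                 # mean absolute difference
    snr = 10 * np.log10(np.mean(reference ** 2) / mse)         # SNR in dB
    if data_range is None:
        data_range = reference.max()                           # assumed peak value
    psnr = 10 * np.log10(data_range ** 2 / mse)                # peak SNR in dB
    s_mse = 10 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))  # S/MSE in dB
    return {"MSE": mse, "MAD": mad, "SNR": snr, "PSNR": psnr, "S/MSE": s_mse}
```
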
In practice, interest in denoising CT has also been driven by growing awareness of radiation-induced cancer, which has made it important to enhance the diagnostic quality of low-dose CT by increasing the signal-to-noise ratio.
There have been several approaches to denoising low-dose CT images. Some researchers have used deep neural networks built on convolutions with different dilation rates. Compared to standard convolution, this has enabled the network to capture a greater level of contextual information in fewer layers, as well as to create shortcut connections that transmit information from early layers to later ones.
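A toy version of such a network is sketched below in PyTorch: stacked 3 x 3 convolutions with growing dilation rates widen the receptive field in few layers, and a residual shortcut carries the input straight to the output. The layer count, channel width and dilation schedule are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DilatedDenoiser(nn.Module):
    """Illustrative low-dose CT denoiser with dilated convolutions
    and a residual (shortcut) connection from input to output."""
    def __init__(self, channels=64, dilations=(1, 2, 4, 2, 1)):
        super().__init__()
        layers, in_ch = [], 1
        for d in dilations:
            # padding = dilation keeps the spatial size for 3 x 3 kernels
            layers += [nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            in_ch = channels
        self.body = nn.Sequential(*layers)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        # The network estimates the noise, which is subtracted from the input.
        return x - self.tail(self.body(x))

# Stand-in low-dose slice: batch of 1, single channel, 128 x 128 pixels.
noisy = torch.randn(1, 1, 128, 128)
denoised = DilatedDenoiser()(noisy)
```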

Approaches in MRI

One of the most prominent methods for denoising MR images has been NLM (non-local means). This reduces noise by exploiting self-similarity: similar image patterns (typically image patches) are identified and averaged. The approach is also used for CT.
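A minimal NLM example using scikit-image is given below; the patch sizes and filtering strength are illustrative defaults rather than recommended clinical settings.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Stand-in 2-D magnitude image scaled to [0, 1].
mr_slice = np.random.rand(128, 128)

sigma = np.mean(estimate_sigma(mr_slice))        # rough noise level estimate
denoised = denoise_nl_means(mr_slice, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, fast_mode=True)
```
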
Researchers have, however, developed new approaches. Some of the most exciting use deep learning via feature regression as well as image self-similarity, permitting a high degree of automatic denoising. Deep learning in MRI has typically focused on segmentation and classification of reconstructed magnitude images. Its application at lower levels of the MRI measurement chain is more recent, covering processes from image acquisition and signal processing in MR fingerprinting through to denoising and image synthesis.
An intriguing approach to denoising MR images has been proposed by researchers from France’s University of Bordeaux and the Universitat Politecnica de Valencia in Spain.
The method involves a two-stage approach. In the first stage, an over-complete patch-based convolutional neural network blindly removes the noise without specific estimation of the local noise variance to produce a preliminary estimation of the noise-free image. The second stage uses this preliminary denoised image as a guide image within a rotationally invariant non-local means filter to robustly denoise the original noisy image.
The proposed approach has been compared with related state-of-the-art methods and showed competitive results in all the studied cases, while being much faster than comparable filters. The researchers present it as a denoising method that can be blindly applied to any type of MR image, since it automatically deals with both stationary and spatially varying noise patterns.
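A heavily simplified, purely illustrative sketch of the two-stage idea follows: a crude prefilter stands in for the paper's patch-based CNN, and a toy non-local means uses that prefiltered image as the guide for its similarity weights (the published method's rotational invariance and blind noise handling are omitted here).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def guided_nlm(noisy, guide, half_patch=1, half_search=3, h=0.1):
    """Toy guided NLM: similarity weights come from the (pre-denoised)
    guide image, but the averaging is applied to the original noisy image."""
    pad = half_patch + half_search
    g = np.pad(guide, pad, mode="reflect")
    n = np.pad(noisy, pad, mode="reflect")
    out = np.zeros(noisy.shape, dtype=float)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            ci, cj = i + pad, j + pad
            ref = g[ci - half_patch:ci + half_patch + 1,
                    cj - half_patch:cj + half_patch + 1]
            acc, wsum = 0.0, 0.0
            for di in range(-half_search, half_search + 1):
                for dj in range(-half_search, half_search + 1):
                    cand = g[ci + di - half_patch:ci + di + half_patch + 1,
                             cj + dj - half_patch:cj + dj + half_patch + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    acc += w * n[ci + di, cj + dj]
                    wsum += w
            out[i, j] = acc / wsum
    return out

# Stage 1 stand-in: a Gaussian blur replaces the paper's CNN purely to
# keep this sketch self-contained; stage 2 then denoises the noisy image
# guided by that preliminary estimate.
noisy = np.random.rand(48, 48)
guide = gaussian_filter(noisy, sigma=1.0)
denoised = guided_nlm(noisy, guide)
```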

References

  1. https://www.gehealthcare.com/article/beyond-imagingtheparadox-of-ai-and-medical-imaging-innovation
  2. https://arxiv.org/ftp/arxiv/papers/1911/1911.04798.pdf