Since its introduction in 1973, X-ray computed tomography (CT) has become a leading modality for diagnostic imaging. The advantages of CT are manifold. Above all, they include rapid scanning and high spatial resolution, which allow for relatively quick and accurate diagnosis of injuries and disease. CT has also been an imaging tool of choice for the staging and treatment follow-up of cancer.
Growth in use, but variations between countries
Overall, CT use has grown rapidly. The total number of scans in the US is estimated to be in the region of 80 million a year. In England, the National Health Service (NHS) reported 4.8 million CT scans in 2016/17, which is 40 percent more than the 3.4 million MRI scans done during that year. CT usage has also been growing rapidly – in England at about 8% annually, compared to just 1.5% for X-rays and 5% for ultrasound.
Nevertheless, there are significant variations between countries in the intensity of CT use. According to data from the Paris-based Organization for Economic Cooperation and Development (OECD), the annual rate of CT scans per 1,000 inhabitants ranges from a high of 225-230 in the US and Japan, to a low of 37 in Finland. The rate is about 80 in Italy, 90 in the Netherlands, 110 in Spain, 140 in Germany and 200 in Belgium and France.
Differences in radiation dosing practice
Though large, such divergences are considered less significant than differences in the radiation dose to which patients are exposed for the same condition. In December 2007, a study published in ‘European Radiology Supplement’ found that doses could have been halved in many cases without compromising image quality. Another study two years later revealed a 13-fold difference between the lowest and highest radiation doses used for identical CT procedures at four clinical sites in the San Francisco Bay Area.
Concerns about such issues were dramatically highlighted by a major new international study, which attributes differences in dosage to the person doing the scanning rather than to patients or equipment. The study, published in ‘The British Medical Journal’ (BMJ) in January 2019, found that patient characteristics, the make and model of scanner, and the type of hospital where the CT scan was done had little effect on the amount of radiation used.
Analysis of 2 million CT scans in 151 institutions
The BMJ study was based on a massive effort by a research team led by Dr. Rebecca Smith-Bindman, a professor in the Department of Radiology and Biomedical Imaging at the University of California San Francisco (UCSF). The researchers analysed dose data for over 2 million CT scans of the abdomen, chest and head, at 151 institutions in seven countries.
Their findings are likely to resonate strongly, given the association of radiation with cancer. Although CT scans account for a minority of diagnostic radiologic procedures, they use large amounts of radiation per image. Some estimates suggest that CT contributes nearly half the US population’s radiation dose from all medical examinations. The figure in England is higher, at 68 percent, although plain radiography is used five times more often than CT in the country (22.9 million procedures in 2016/17 versus 4.8 million).
Cancer risks of CT
The association with cancer has been controversial, especially when predictions of the impact of CT scanning have been based on a linear-no-threshold dose-response model. Some have argued that CT radiation doses are too low to produce any health effect.
There is also uncertainty about how to calculate risk accurately. This is because of a host of factors. Firstly, radiologists are not necessarily familiar with CT radiation exposure descriptors (volume CT dose index and dose length product). Secondly, there have been a series of revisions about the relative sensitivity of organs to radiation. Finally, radiation doses in units such as millisieverts (mSv) are used to estimate population risks based on generic models, not doses calculated for individual patients. Indeed, the radiation dose in a typical CT scan (1–14 mSv depending on the exam) is similar to the annual dose received from natural sources, such as radon and cosmic radiation – which typically varies from 1 to 10 mSv, depending on where a person lives.
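To illustrate how such population-level estimates are typically derived, the sketch below converts a scanner-reported dose length product (DLP) into an approximate effective dose using generic, region-specific conversion coefficients (so-called k-factors). The coefficient values are commonly cited adult approximations, not taken from the studies discussed here, and the function names are the author's own; as the article notes, such figures describe generic models rather than any individual patient's dose.

```python
# Illustrative sketch: estimating effective dose (mSv) from a dose length
# product (DLP, in mGy*cm) via generic adult k-factors (mSv per mGy*cm).
# These coefficients are population-level approximations, not patient-specific.
K_FACTORS = {
    "head": 0.0021,
    "chest": 0.014,
    "abdomen_pelvis": 0.015,
}

def effective_dose_msv(dlp_mgy_cm: float, region: str) -> float:
    """Generic-model estimate; individual organ doses can differ substantially."""
    return dlp_mgy_cm * K_FACTORS[region]

# A chest CT with a DLP of 400 mGy*cm maps to roughly 5.6 mSv --
# within the 1-14 mSv range cited above for typical CT exams.
print(round(effective_dose_msv(400, "chest"), 1))  # prints 5.6
```

This is why two scans with the same scanner readout can imply very different risks for different patients: the k-factor ignores patient size, age and organ sensitivity.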
Even small risks justify search for solutions
Nevertheless, the current consensus is that, even if the individual risk of cancer from CT imaging is small, the sheer volume of scans means the absolute number of attributable cancers – and the economic burden of treating them – may well be significant, given the high prices of cancer treatment.
Nor does anyone question the logic of attacking even a small cancer risk. In December 2009, a report in ‘The Archives of Internal Medicine’ made a detailed assessment of projected cancer risks due to CT scans in the US. The study, conducted by a team from the Radiation Epidemiology Branch of the National Cancer Institute (NCI), projected that CT scans performed in 2007 alone could eventually give rise to some 29,000 attributable cancer cases, and argued that changes in practice might help to avoid this outcome. The authors also observed that the impact would be largest for abdomen, pelvis and chest CT scans in adults aged 35 to 54 years.
One of the most vexatious issues concerns CT scans which are not medically necessary – especially repeat imaging of the same patient – and the ensuing increase in cancer risk. According to one estimate, unnecessary scans could account for as much as 30 percent of CTs in the US. In Europe, the figure is also likely to be high in countries such as Belgium and France, where per capita CT scan levels are close to those of the US.
Though the US state of California has passed a law requiring that the radiation dose used for every CT scan be documented in the patient’s medical record, compliance has been inconsistent. The picture in Europe is problematic too. For example, the European Union collects dose levels across Europe, but there are major differences in definitions and data collection techniques.
Progress in pediatric dosing
Until the NCI study at the end of 2009, the emphasis on reducing CT cancer risks had largely been on pediatric scans. The authors of that paper noted there was evidence of pediatric doses being reduced as a result of social marketing campaigns such as Image Gently. The latter was launched in 2008 by the Alliance for Radiation Safety in Pediatric Imaging.
Lessons from the pediatric dose control campaign
One of the key recommendations of Image Gently was to promote standardization of pediatric dose measurements and display across vendor equipment.
This is precisely what the recent BMJ study proposes to do for all patients. The authors of the study assessed mean effective doses and proportions of high dose examinations (defined as CT scans with doses above the 75th percentile defined during a baseline period) for abdomen, chest, combined chest and abdomen, and head CT. These were classified by patient characteristics (sex, age, and size), type of institution (trauma centre, 24×7 care provision, academic or private hospital), practice volume, machine manufacturer and model, and country. The figures were adjusted for patient characteristics, using hierarchical linear and logistic regression.
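The study's "high dose" definition can be sketched as follows. This is a simplified illustration with made-up dose values and function names of my own choosing, not the authors' code: an exam counts as high dose if its effective dose exceeds the 75th percentile established during a baseline period, and the share of such exams is then compared across sites (the full analysis additionally adjusts for patient characteristics, which is omitted here).

```python
# Hypothetical sketch of the BMJ study's "high dose" metric: exams above the
# 75th percentile of a baseline period count as high dose. Values are invented.

def percentile_75(doses: list[float]) -> float:
    """75th percentile via linear interpolation between sorted values."""
    s = sorted(doses)
    idx = 0.75 * (len(s) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

def high_dose_share(baseline: list[float], current: list[float]) -> float:
    """Fraction of current exams exceeding the baseline 75th percentile."""
    threshold = percentile_75(baseline)
    return sum(d > threshold for d in current) / len(current)

baseline = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]   # mSv, illustrative
current = [5.0, 12.0, 9.5, 6.0, 13.0, 7.5, 10.5, 11.0]
print(high_dose_share(baseline, current))  # prints 0.625
```

By construction, about a quarter of baseline exams sit above the threshold, so a site whose high dose share is far from 25% is dosing very differently from the baseline – which is how the study surfaces ranges such as 4 to 69% for abdominal CT.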
For example, after taking into account patient factors, a fourfold range in radiation doses still existed in abdominal scans. Similar variations were found for chest and combined chest-and-abdomen scans.
Huge variations in dose
The BMJ study found that variations in radiation dose across institutions and countries were huge. For abdomen CT examinations, the mean effective radiation dose differed by a factor of four, with a 17-fold range in the share of high dose examinations (4 to 69%). Variations in mean effective dose for chest scans and combined chest plus abdomen scans were also close to four times, while the share of high dose exams varied from 1 to 26%, and 2 to 78%, respectively. For head CT, the differences were less spectacular, with the range of mean effective doses less than 1.5 times and the share of high dose exams ranging from 8 to 27%.
Achievable and universal standards
However, when the UCSF group adjusted for technical parameters, that is, in terms of the way CT scanners were used by medical staff, the variations in doses nearly disappeared.
The researchers conclude that it is possible to optimize doses to a “single set of achievable quality standards” and apply this “to all hospitals and imaging facilities.” They also noted that the choice of “appropriate CT protocol parameters might be less complex than widely believed.” The key to protocol optimization lies in updating physician awareness and recalibrating expectations about what constitutes a diagnostic CT scan. The latter will be based on a better alignment of CT protocol parameter choices with diagnostic image quality requirements.
One interesting finding was that institutions with lower average doses shared scanning approaches. These institutions tended to limit the number of protocols, with each relying on the minimum dose required to answer the clinical question. They used multiphase scanning infrequently, had lower settings for tube current and tube potential, and used higher pitch for most, if not all, imaging indications.
The way ahead
The road to CT dose reduction and standardization will vary by type of institution and country. This is due to differences in the make and model of CT scanners as well as in medical cultures, in terms of radiologist preferences and personnel support. There are case studies of protocol overhauls taking a year or more, and needing to be kept up to date with new CT software and scanner upgrades. Examinations with higher radiation exposure generally yield more acceptable images than those with lower exposure. The challenge is to optimize a ‘correct’ minimum dose for different patient sizes, ages and conditions. Continuing improvements in scanning technology will undoubtedly also be part of the process of optimizing protocols. For their part, some companies have been experimenting with artificial intelligence algorithms to position patients correctly in a CT scanner, since off-centre CT scans can expose patients to much higher levels of radiation.
By Callan Emery, Editor
A study published in Nature in September has caught the attention of the media and the interest of Obs-Gyn specialists. In what is the largest study of the neonatal microbiome (gut bacteria), the researchers provide strong evidence that the way a baby is born has a significant impact on its microbiome.
The study by Lawley T., et al. (doi: 10.1038/s41586-019-1560-1) found that babies born through the vaginal canal carry different microbes from those delivered by caesarean section. Those born by c-section tended to lack strains of gut bacteria found in healthy children and adults. Additionally, babies born by c-section showed a high level of colonization by opportunistic pathogens associated with the hospital environment (including Enterococcus, Enterobacter and Klebsiella species).
Interestingly, the researchers note that it was the mother’s gut bacteria, not her vaginal bacteria, that made up much of the microbiome in the vaginally delivered babies. Previous studies had suggested that vaginal bacteria were swallowed by the baby on its way down the birth canal. This led to what has been termed ‘vaginal seeding’, whereby babies born by c-section are swabbed with the mother’s vaginal fluids in an effort to restore any missing microbes. However, a study by Stinson et al. (doi: 10.3389/fmed.2018.00135) has shown vaginal seeding to be unjustified and potentially unsafe.
Although a lack of exposure to the right microbes in early childhood has been implicated in autoimmune diseases, such as asthma, allergies and diabetes, the exact role of the baby’s gut bacteria is unclear and it isn’t known if these differences at birth will have any effect on later health.
The researchers, who analysed nearly 600 births in the United Kingdom, say the differences in gut bacteria between vaginally born and caesarean-delivered babies had largely evened out by the age of one. They note that large follow-up studies are needed to determine whether the early differences influence health outcomes.
Discussing her study, Stinson pointed out that the microbes thrown out of balance in babies born by c-section are very similar to those thrown off balance in babies born vaginally to mothers receiving antibiotics. She surmises that the routine administration of antibiotics to mothers delivering by c-section could be a cause of the bacterial differences in the neonatal microbiome.
Although this research does pose interesting questions about the potential health outcomes associated with c-section versus vaginal delivery, it should be emphasised that at this point mothers should not be deterred from c-section delivery if it is the right choice for the mother and her baby.
The study is part of a larger effort, called the Baby Biome Study, which aims to follow thousands more newborns into childhood.
It is fascinating to follow our expanding knowledge of the workings of the brain through the use of functional MRI over the past 10-15 years. fMRI has provided an extraordinary view of brain function and enabled a wide range of remarkable discoveries. As this research proliferates, it promises many more new insights, with a multitude of applications.
Particularly interesting has been the growing understanding of memory formation and retrieval. Expanding on this knowledge and taking it to the next level, a recent study by neuroscientists and artificial intelligence researchers at DeepMind, Otto von Guericke University Magdeburg and the German Centre for Neurodegenerative Diseases shows how the human brain connects individual – or episodic – memories to solve problems and draw new insights.
The researchers proposed a novel brain mechanism that would allow retrieved memories to trigger the retrieval of other, related memories.
There have been many studies of episodic memories advancing the theory that they are stored as separate memory traces in a brain region called the hippocampus. Taking this as standard knowledge, the researchers’ new theory explores an anatomical connection that loops out of the hippocampus to the neighbouring entorhinal cortex and then passes back into the hippocampus. It is this recurrent connection, the researchers thought, that allows memories retrieved from the hippocampus to trigger the retrieval of further, multiple linked memories.
To test the theory, the researchers used 7 Tesla fMRI to scan brain activation in 26 male and female study participants as they performed a task that required them to draw insights across separate events using a series of paired images. Their results are published in the September 2018 issue of Neuron.
Part of the study involved developing a technique to separate out the parts of the entorhinal cortex that provide the input to the hippocampus, allowing the researchers to precisely measure patterns of activation in the hippocampus and to distinguish input and output separately.
Their resulting data showed that when the hippocampus retrieves a memory, it doesn’t simply pass it to the rest of the brain, but instead recirculates the activation back into the hippocampus, triggering the retrieval of other related memories.
They say their results capture the best of both worlds: the ability to remember individual episodic experiences is preserved by keeping them separate, while related memories can still be combined on the fly at the point of retrieval.
In addition, they suggest this understanding could be replicated in artificial intelligence systems, giving them a greater capacity for rapidly solving novel problems.