Deepfakes of popular TV doctors are being used to sell health scams on social media

UK television doctors are being impersonated in AI-generated videos to promote fraudulent health products on social media, according to an investigation by The BMJ.

 

A fictitious image of a doctor created with the Midjourney AI image generator

Artificial intelligence revolution fuels rise in deepfake scams

The rapid advancement of artificial intelligence (AI) technology has led to a surge in deepfake videos featuring well-known UK doctors promoting fraudulent health products on social media platforms. An investigation by The BMJ [1] has revealed that trusted medical professionals, including Dr Hilary Jones, Dr Michael Mosley, and Dr Rangan Chatterjee, are being impersonated in AI-generated videos to sell scam products claiming to cure high blood pressure, diabetes, and other health conditions.

These deepfake videos, which use AI to map a real person’s digital likeness onto another person’s body, are becoming increasingly sophisticated and difficult to detect. According to a recent study, up to 50% of people shown deepfakes discussing scientific subjects cannot distinguish them from authentic videos.

Exploiting trust in medical professionals

The investigation, led by journalist Chris Stokel-Walker, found that scammers are exploiting the trusted status of popular TV doctors to lend credibility to their fraudulent products. Dr Hilary Jones, a well-known figure in UK medical broadcasting, reported that his likeness has been used to promote various products, including those claiming to cure high blood pressure and diabetes, as well as hemp gummies.

Dr John Cormack, a retired Essex-based doctor who assisted with the investigation, explained the appeal of this approach for scammers: “It’s much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way.”

The emotional connection that viewers have with familiar medical professionals makes these deepfake videos particularly effective. People are more likely to trust and believe information presented by someone they recognise from television or social media, increasing the potential for fraud.

Challenges in combating deepfake scams

The investigation highlights the difficulty of addressing this growing problem. Dr Jones employs a social media specialist to search for and report deepfake videos misrepresenting his views, but the effort is often futile. “Even if they’re taken down, they just pop up the next day under a different name,” he said.

Henry Ajder, an expert on deepfake technology, attributes the rise in these scams to the increased accessibility of AI tools for voice cloning and avatar generation. He noted that while some tools require identity checks or biometric authorisation, many lack robust safety measures to prevent misuse.

Social media platforms’ response

Meta, the parent company of Facebook and Instagram, where many of these deepfake videos have been found, stated that it would investigate the examples highlighted by The BMJ. A spokesperson said: “We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement.”

However, the rapid proliferation of these videos suggests that current measures are insufficient to stem the tide of deepfake scams.

Identifying and reporting deepfakes

As deepfake technology matures, spotting these videos becomes increasingly challenging. Ajder noted that telltale signs, such as poorly rendered earlobes or mismatched lip movements, are becoming less common as the tools improve.

The BMJ article offers advice for those who encounter suspected deepfakes:

  1. Carefully examine the content to confirm suspicions.
  2. Attempt to contact the person being impersonated through official channels.
  3. Leave a comment questioning the video’s authenticity to alert others.
  4. Use the platform’s built-in reporting tools to flag the content.
  5. Report the account that shared the post, not just the individual video.

Implications for medical professionals and patients

The rise of deepfake scams poses significant challenges for both medical professionals and patients. Doctors may find themselves dealing with patients who have been misled by fraudulent videos and are seeking inappropriate treatments. It is crucial for healthcare providers to be aware of this trend and prepared to educate patients about the existence of deepfakes and the importance of following standard medical advice.

For patients, the proliferation of these scams underscores the need for critical thinking when encountering health information online. Dr Jones advises: “The onus falls on people using social media not to buy anything from it, because it’s so unreliable that you simply don’t know what you’re buying.”

As AI technology continues to advance, the medical community and social media platforms must work together to develop more effective strategies for detecting and combating deepfake scams. Until then, vigilance and scepticism remain the best defences against these increasingly sophisticated forms of medical misinformation.

Reference:

1. Stokel-Walker, C. (2024). Deepfakes and doctors: How people are being fooled by social media scams. BMJ, 386, q1319. https://doi.org/10.1136/bmj.q1319