Healthcare LLMs show dangerous bias against female patients
Large language model (LLM) artificial intelligence systems are increasingly deployed across healthcare settings, and new research reveals that they pose serious risks to patient safety: these tools systematically recommend reduced medical care for women and other vulnerable populations based on clinically irrelevant factors, such as typing errors and communication style, rather than on clinical need.