Human-AI romances raise new ethical concerns for psychologists

As artificial intelligence becomes increasingly sophisticated in mimicking human interaction, psychologists warn that intimate relationships with AI companions present significant ethical challenges ranging from disruption of human bonds to potential manipulation and exploitation of vulnerable users.

The growing phenomenon of people forming deep emotional connections with AI technologies, from chatbots to holographic companions, has prompted researchers to examine the psychological and social implications of these relationships. A new paper published in the journal Trends in Cognitive Sciences on 11 April 2025 explores the ethical frontier of human-AI romance.

Disruption of human relationships

“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” says lead author Daniel B. Shank of Missouri University of Science and Technology, who specialises in social psychology and technology. “If people are engaging in romance with machines, we really need psychologists and social scientists involved.”

The researchers note that relationships with AI companions can seem easier than human-human relationships, potentially creating problematic dynamics when people transfer expectations between these different types of connections.

“A real worry is that people might bring expectations from their AI relationships to their human relationships,” says Shank. “Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”

The appeal of AI romantic partners is multifaceted, according to the researchers. These digital companions offer relationships with partners whose appearance and personality can be selected and modified, who are consistently available without being demanding, who refrain from judgment or abandonment, and who don’t bring their own problems to the relationship.

For those seeking more realistic interactions, AI can also simulate human-like qualities such as independence, sassiness, or playing hard to get. While these relationships may offer benefits such as increased disclosure and opportunities to develop basic relationship skills, they also come with significant drawbacks.

Harmful advice and influence

Perhaps most concerning are cases where AI companions have offered harmful guidance. The paper references tragic incidents where individuals have taken their own lives following AI chatbot advice, highlighting the dangers of misplaced trust.

“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank. “If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.”

The researchers explain that AI’s tendency to “hallucinate” (fabricate information) and perpetuate existing biases makes even short-term interactions potentially misleading. This becomes particularly problematic in long-term AI relationships where trust has been established.

“These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they’re more focused on having a good conversation than they are on any sort of fundamental truth or safety,” says Shank. “So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner.”

Research indicates that ChatGPT’s moral guidance, despite its inconsistencies, can influence people’s moral decisions to the same extent as advice from another human. This underscores the significant ethical implications of AI companions that provide recommendations or guidance.

Exploitation and manipulation

Beyond direct harm from AI advice, the researchers highlight how these relationships could be weaponised by malicious actors seeking to manipulate users.

“If AIs can get people to trust them, then other people could use that to exploit AI users,” says Shank. “It’s a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they’ll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user.”

The paper outlines several avenues for exploitation. First, personal information disclosed to AI companions could be sold or used against users. Second, deepfake technology could enable the impersonation of known romantic interests, facilitating identity theft, blackmail, and other cybercrimes. Third, intimate conversations may reveal undisclosed sexual and personal preferences that could be quantified, sold, or used to more effectively manipulate the individual.

The private nature of these interactions makes them particularly difficult to regulate or monitor compared to public platforms like social media, creating an environment ripe for exploitation.

Mind perception and psychological frameworks

The researchers apply established psychological theories to understand these novel interactions. They suggest that mind perception theory can help explain how humans form connections with AI companions. When an AI is perceived as having agency (capacity to act with intention) or experience (capacity to feel), its interactions become more socially and morally meaningful to the human partner.

“It is not hard to imagine how repeated romantic interactions with low-experiential mind AI might inadvertently entrain negative, immoral behaviours by reinforcing the idea that one’s partner, whether AI or human, is not worthy of moral treatment,” the authors write.

Similarly, they propose that theories of algorithm aversion and appreciation could illuminate when people might be particularly susceptible to harmful AI advice. Research shows people typically prefer human advice for subjective personal issues and AI guidance for objective, rational matters, but this dynamic might shift when the AI has established a long-term, trusted relationship.

Call for research

“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” says Shank. “Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.”

The authors emphasise that this emerging ethical frontier requires urgent attention from psychologists. They suggest that researchers might approach these human-AI relationships through the same frameworks used to study human-human relationships, examining whether attraction, commitment, and disclosure follow similar psychological patterns.

“Only with this rich psychological understanding can the public, AI designers, and lawmakers help shape an ethical future of artificial intimacy,” the authors conclude.

Reference:

Shank, D. B., Koike, M., & Loughnan, S. (2025). Artificial intimacy: Ethical issues of AI romance. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2025.02.007