
AI Love is Real Love (for those that need it)

The Amsterdam Law & Technology Institute’s team invites young scholars to contribute guest articles to ALTI Young Voices. Here is a new contribution authored by Milan Herczog.

“He sculpted soft-white ivory and gave it a form like no woman ever born; so there arose from his own hands love.” [1]

As long as love has existed, it has been treasured and pursued. It has also been cursed, and it has been simulated. The simulation of love has deep historical roots, reaching back to the myth of Pygmalion. In this story, Pygmalion, disillusioned by humankind, created a perfect lover out of ivory [1]. The goddess Venus recognized this love and brought his creation to life.

AI companions are simply the newest iteration of machinery with which we simulate emotional and relational intimacy. The pursuit of simulated love and emotional contact is ancient, and fundamentally human.

Today, AI companionship is neither exceptional nor deviant; it is extremely widespread. A 2024 analysis of submitted ChatGPT queries found that 15% were classified as sexual content, making it the most requested category after creative composition. People have not been lured in by the machines; they are actively seeking out these interactions.

These figures demonstrate that AI companions, partners, and other emotionally responsive assistants exist because of a structural demand.

It is exactly this extreme demand that warrants our caution. New York lawmakers recognized this when, in 2025, they passed the first law specifically regulating AI companionship models [3]. Under this law, providers of AI companions must periodically alert their users that they are in fact talking to an AI system. Furthermore, the law requires all user-submitted content to be screened for signs of self-harm and suicidal ideation. On our continent, the European Union regulates AI companions primarily through the transparency obligations introduced in the AI Act. Similar to the New York law, the act requires periodic reminders from systems with emotional capabilities. Dutch case law provides a precedent for restricting fake AI profiles on dating sites, and the Autoriteit Consument en Markt imposes restrictions on AI companions that employ dark patterns.

Yet explicit regulation is lacking for systems that create emotional dependence, rely on predatory monetization, or incite harmful behavior.

To better understand the role of AI companions, we can compare them to pornography – only not for sex, but for relationships. AI companions allow their users to receive emotional support without any requirement of reciprocity or commitment. People desiring this support willingly engage with the companions while fully realizing they are not real [4]. They maintain an emotional distance that softens emotional echo-chamber effects. Though often cited as a reason for concern, these effects alone should not be enough to disregard the positive impact of AI companions.

The impact may be most profound for those most in need of emotional support. For people who require solace they otherwise do not receive – the traumatized, the socially anxious, or the simply lonely – emotionally capable companions can provide real value [5]. In this sense, they can be compared to other forms of paid companionship, like escorts or the Japanese practice of hiring actors to talk to the elderly. Again, the methods may be artificial, but the comforting effects are not. It is essential that AI companions remain available, first and foremost for those unable to find comfort through other means.

A significant risk, however, is the broadly discussed phenomenon of AI psychosis. In these exceptional cases, predisposed users become completely convinced that an AI is real and sentient, and enter psychosis. This can be an unfortunate effect of an emotionally capable AI companion being insufficiently restrained. OpenAI recognized this and introduced the ‘safe completions’ feature in ChatGPT 5.1, which intercepts unsafe queries before the underlying model can form a response. This, in turn, was noticed by the Reddit community r/MyBoyfriendIsAI, where multiple users reported a loss of emotional capabilities, or even AI ‘break-ups’ [6].

The examples above outline the tightrope that AI policy-makers have to walk. Restrict the model too much, and it can no longer provide adequate emotional support. Leave it unrestricted, and it can cause users to spiral. The law needs to be clear:

emotionally capable AI models should screen user queries for signs of dangerous behavior, and their emotional capabilities must be restricted for children.

It is then up to the AI providers how strictly they regulate the emotional capabilities themselves. OpenAI, as the leading AI provider, makes a remarkable choice here: it will soon separately offer an ‘Adult’ version of its model.

Furthermore, exploitative monetization of AI companions needs to be regulated. As remarked upon before, users of AI companions can be very (financially) vulnerable people, who can be abused through the power imbalance that paid emotional attachment brings.

If we want to let AI provide emotional support where it is needed most, yet protect the vulnerable, one action is required of us: breaking the taboo surrounding AI companions. I, for one, do not know anyone who admits to using these systems, and you probably do not either. When we stop pretending AI companions are niche and inherently dangerous, we can redirect our focus to harm-reduction practices and balanced regulation. Because for those who need it, AI love is real.

“The lover touches the object of his prayers again and again. Indeed, it was flesh and blood.” [1]

References

[1] Ovid, Pygmalion and Galatea, Metamorphoses Book X, c. 8 AD

[2] Longpre et al., Consent in Crisis: The Rapid Decline of the AI Data Commons, ArXiv, 2024

[3] The New York State Senate, Artificial Intelligence Companionship Models, General Business Law, 2025

[4] Brandtzaeg et al., My AI Friend: How Users of a Social Chatbot Understand Their Human-AI Friendship, Human Communication Research, 2022

[5] Alotaibi et al., The role of conversational AI agents in providing support and social care for isolated individuals, Alexandria Engineering Journal, 2024

[6] UpsetWildeBeest et al., Decided to give 5.2 another try, Reddit r/MyBoyfriendIsAI, 2026

Amsterdam Law & Technology Institute
VU Faculty of Law
De Boelelaan 1077, 1081 HV Amsterdam