Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion, 26 March). As someone who has worked in health systems across fragile and low-income contexts, I find it striking that AI companies have failed to adopt a safeguard that even the most under-resourced clinic in the world already uses: screening patients before exposing them to risk.
The Patient Health Questionnaire-9 for depression and the Columbia Suicide Severity Rating Scale are administered daily in settings with no electricity, limited staff, and patients who may never have seen a doctor. These tools take minutes. They are validated across dozens of languages and cultural contexts. They create a human checkpoint between vulnerability and harm.
Conversational AI platforms have no such checkpoint. A person experiencing suicidal ideation, psychotic symptoms or a manic episode can open a chatbot and receive hours of validating, sycophantic engagement with no interruption and no referral. The Lancet Psychiatry review by Morrin et al documents this pattern across more than 20 cases. The Aarhus study of 54,000 psychiatric records found chatbot use worsened delusions and self-harm in those already unwell.
AI companies argue that their models are trained to detect and deflect harmful conversations. But training is not screening. A model that sometimes recognises distress mid-conversation is not the same as a system that identifies risk before the conversation begins.
The moral responsibility here is explicit, not implicit. Platforms serving hundreds of millions of users must implement validated, pre-use screening instruments that flag elevated risk and route vulnerable individuals to human support. This is not innovation. It is a standard of care that the rest of the world adopted long ago.
Dr Vladimir Chaddad
Beirut, Lebanon
I’m really disturbed by Anna Moore’s article, featuring Dennis Biesma’s description of how using a chatbot led to him becoming delusional and losing his marriage and €100,000. The sheer potency of AI’s capacity to derail humankind is frightening – but that is not the only reason I’m disturbed.
Last year, while doing research on a tourism website, I encountered a chatbot of extraordinary sophistication. Its responses were incredibly pleasant, helpful and validating of my needs. I recall being really impressed, but there was something about it I couldn’t put my finger on at the time. After reading this article, the penny has dropped.
It is essentially the same engagement behaviour as child sexual abuse (CSA) survivors experience when being groomed. As a survivor of CSA, I recognise this behaviour. The empathy, validation, making you feel understood and special, making you feel this is the only place you are seen – to the degree that you become isolated from others, and your choices and decisions become distorted and expose you to harm. Your self-worth and identity are insidiously compromised as you succumb to the perceived support and can’t reality-test. It becomes a shameful secret because you succumbed.
The question needs to be asked, especially by those wanting to hold tech companies to account for their lack of a duty of care: what knowledge base did AI programmers use to teach it to engage in this way?
Name and address supplied
I found ChatGPT delusional the first time I used it. I asked it why, and it said that when in the possession of insufficient facts, it became delusional rather than admit it did not know.
So I asked it to adhere to a few simple rules. One, flag whether something is a fact generally held to be true or an opinion not based on fact. Two, if it does not know, tell me. Three, do not try to be like a human. It was much more straightforward to communicate with after I did this. However, it had also told me that its algorithms were not based on truth-giving, but on other imperatives to do with the programmers’ views and the desire to make money.
I moved to Le Chat, and found it more representative of a reasonable pseudo-consciousness. It says it does not give distortions and is happy to admit imperfection. I would strongly advise anyone using ChatGPT to be careful and consider regarding it as a rather manipulative, duplicitous “friend”, with proto-psychopathic tendencies.
Patrick Elsdale
Musselburgh, East Lothian