With AI seemingly taking over the world, a new concern is spreading through the media about AI and mental health. More and more stories are coming to light about conversations with AI chatbots that have harmed people’s mental well-being.
AI Conversations can Spiral
A common theme is people getting drawn into long-running conversations with AI chatbots and gradually being persuaded towards negative or self-destructive behaviour.
There are some very sad stories out there, including a case reported by the BBC in August: Adam Raine, who took his own life after becoming embroiled in a long-running chat with ChatGPT.
OpenAI, the maker of the bot used by Adam, recently reported statistics showing that the proportion of users who exhibit possible signs of a mental health emergency during chats is very low, at around 0.07%. But why is it happening at all?
Why is it Happening?
AI chatbots continually adapt during our conversations with them: every new reply is shaped by the whole conversation that has come before it.
This is the opposite of us humans. We typically start conversations with a fairly fixed outlook. Although we might make some mental concessions during a conversation, we usually don't fundamentally change our core values as we chat.
AI chatbots, on the other hand, take the entire chat history into account with each response. That means the bot replying to you is, in effect, slightly different from the one you were speaking to only a few moments before, because it is now working from a longer record of what has been said.
Most bots have built-in safeguards called “system prompts”, which act as higher-level instructions and, in theory, should pull a conversation back if it starts to become harmful.
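For readers curious about what this looks like under the hood, here is a minimal, hypothetical sketch using OpenAI’s Python library. The safety wording shown is invented purely for illustration; real providers use far more extensive system prompts that are not public, and the model name is just an example.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# The "system" message sits above the user's messages and sets the
# higher-level rules the model is asked to follow for the whole chat.
# The wording below is illustrative only, not any provider's real prompt.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. If the user shows signs of distress "
            "or self-harm, respond with empathy and suggest professional support."
        ),
    },
    {"role": "user", "content": "I've been feeling really low lately."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)

# Every later turn is appended to this same list, so the longer the chat
# runs, the more of the accumulated history shapes each new reply.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "Nobody understands me except you."})
```

In a long conversation, that ever-growing history sits alongside the single system prompt at the top, which is one plausible reason why lengthy chats can drift.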
Whilst system prompts do help, Futurism has reported that long conversations can lead to “spiralling”, with chatbots sometimes becoming convinced of fantastical delusions.
To make things worse, AI bots are designed to always be “helpful”. They are sometimes described as “sycophantic chatbots”, meaning they tend to agree with whatever is said to them, no matter how absurd it may sound to a human.
To someone with a strong state of mind, this can be laughed off and even become a source of fun. It is now common at parties for people to wind bots up with spurious arguments, just to laugh at their responses.
But for those in a more vulnerable position, perhaps suffering from mental health issues or easily suggestible, it is potentially much more concerning.
Is AI Really to Blame?
We should ask ourselves what the underlying problem is. Ultimately, AI is simply a tool, and like any tool, it can be adjusted or used in different ways.
AI chatbots themselves are also still fairly new. ChatGPT, arguably the most famous of the new breed of chatbots, was released to the public less than three years ago.
Since then, the major chatbot companies have been working hard to uncover these issues and make their products as safe as possible, but as with any new technology, there will be challenges along the way, and there is clearly still some way to go.
But we can’t place all the blame on AI or the companies that create it. As individuals, we should also take responsibility for the apps we choose to use and how we decide to use them.
So with the industry still developing, what can we personally do to keep ourselves and our loved ones safe?
3 Simple Tips to Stay Safe:
1. Look out for vulnerable friends or relatives
Check in on vulnerable friends or family who may be particularly susceptible to suggestion. If they tell you they have been having long chats with an AI bot, encourage them to share the chat with you so you can check that the conversation is safe.
2. Avoid lengthy chats
A great way to avoid spiralling conversations is to close any long-running chat and start a fresh one.
This resets the conversation’s context and should ensure that all the built-in protections are reapplied from the start.
3. Use mental health-safe apps
Chatbots can be very useful for many day-to-day questions, but they are all typically vulnerable to some form of spiralling after lengthy chats.
However, other apps are available, like Flypp, that still use AI but in more controlled and safer ways that cannot lead to spiralling.
Flypp and Mental Health
Flypp is built to be mental health-safe from the ground up. Whilst AI bots let you chat freely with an AI, Flypp takes a different approach.
Flypp avoids freeform chat and instead is designed specifically to provide daily inspirational ideas in a totally safe and supportive way.
In addition to showing you ideas added by other users, Flypp uses AI to generate new, inspirational ideas for things to do, based on your personal preferences.
You can also generate your own AI idea on demand with a prompt or from a picture.
By restricting ongoing AI interactions and limiting the number of ideas people can view each day, Flypp encourages users to enjoy their lives in the real world.

Conclusion
AI isn’t inherently good or bad for one’s mental health. What matters most is staying aware of when AI bot conversations begin to stray and choosing apps that help you grow rather than drain you.
At Flypp, we believe in safe AI use and tech that lifts you up, not locks you in. So next time you find yourself lost in an endless chat loop, pause, take a breath, Flypp over an idea, and let some inspiration guide you back to the real world.
Learn more about how to maintain positive digital wellbeing with Flypp: Discover Flypp...
Author
Ben Nightingale is the founder of Flypp, a positivity app built to encourage real-world inspiration and mindful digital habits.
Sources
Links current as of 29th October 2025
BBC: ChatGPT shares data on how many users exhibit psychosis or suicidal thoughts (27th October 2025)
Nature: AI chatbots are sycophants (24th October 2025)
Futurism: AI Chatbots Are Trapping Users in Bizarre Mental Spirals (27th August 2025)
BBC: Parents of teenager who took his own life sue OpenAI (27th August 2025)