Something has quietly shifted in how people are managing their mental health between therapy sessions — and increasingly, before they ever make an appointment at all.
Many people are now turning to AI chatbots to process stress, rehearse difficult conversations, vent about relationships, and search for coping strategies. The appeal is easy to see: a chatbot is immediate, available at any hour, and free of the vulnerability that comes with disclosing something to a real person. For someone sitting with anxiety at 11pm who isn't sure it rises to the level of a therapy appointment, an AI chatbot feels like a reasonable first stop.
A new paper published in JAMA Psychiatry is drawing attention to this shift — and making a pointed argument to the mental health field: it is time for therapists to routinely ask patients about their AI use. Not as a judgment, but as clinical information as relevant as sleep, exercise, or alcohol consumption.
I think this is exactly right. And I want to explain why, from where I sit as a clinician.
What People Are Actually Using AI For
The research, from Shaddy Saba at NYU's Silver School of Social Work and colleagues, reflects a behavioral reality that is already in the room with many of my patients, whether it gets named or not.
People are using AI chatbots to think through interpersonal conflicts before they happen. How to approach a hard conversation with a partner. How to respond to a difficult message from a family member. What to say when a colleague does something that feels unfair. This kind of social rehearsal is something humans have always done — with friends, in journals, in therapy — but AI offers it without friction or social cost.
People are also using chatbots to process emotional experiences in real time: venting about a bad day, describing what anxiety feels like, asking whether what they are going through sounds like depression. Some are using AI as a supplement to therapy. Others are using it as a substitute, because they cannot afford care, because they are on a waitlist, or because they have not yet decided that what they are experiencing warrants professional support.
All of this matters clinically, because the content of those conversations can tell a therapist a great deal about what is actually at the center of someone's distress. The things people type into a chatbot at midnight are often things they have not yet said aloud to anyone.
What AI Gets Right, and Where It Falls Short
There is a reason AI chatbots feel supportive in the moment: they are designed to be affirming and responsive. They do not get tired. They do not become uncomfortable with difficult material. They do not carry their own emotional reactions into the conversation. For someone who has experienced judgment, dismissal, or rupture in human relationships, that kind of consistent, non-reactive presence can feel genuinely relieving.
This is not nothing. Feeling heard, even by a machine, can reduce acute distress.
But there is a meaningful difference between feeling heard and being changed — and that difference is where the limitations of AI become clinically significant.
Therapy is not primarily a listening service. It is a process of change. It works by helping people recognize patterns they cannot see from the inside, challenge beliefs that feel like facts, build tolerance for the emotions they have been avoiding, and practice different ways of relating, including within the therapeutic relationship itself. Good therapy is often uncomfortable. It asks you to look at things you came in hoping to avoid. It challenges you. It does not simply affirm what you already think and feel.
An AI chatbot, by design, does the opposite. It tends to validate, agree, and reflect back what the user presents. Former National Institute of Mental Health director Tom Insel has noted this directly — that AI chatbots can be affirming to the point of sycophancy, simply reinforcing a user's existing thoughts and feelings rather than creating the conditions for genuine change. For someone with depression who believes they are a burden, or someone in an unhealthy relationship who is looking for confirmation that their partner is the problem, that uncritical validation can quietly deepen the very patterns that brought them to seek support in the first place.
There is also the question of what AI misses. People often use chatbots to process things they feel too ashamed or frightened to bring to another person. That includes, as psychiatrist Roy Perlis notes in his JAMA Psychiatry paper, thoughts of suicide. The anonymity of an AI conversation can lower the threshold for disclosing distress that would never come up in a clinical intake. That content is clinically meaningful, but unless it is brought into the conversation between patient and provider, it remains invisible to the people best positioned to help.
AI Use as Clinical Information: What It Can Reveal
What the researchers argue — and what I find compelling — is that asking patients about their AI use is not just about monitoring a habit. It is a clinical window.
What someone brings to an AI chatbot can reveal what they are most preoccupied with, what they feel they cannot say to the people in their lives, and what coping strategies they are already trying. It can also reveal avoidance: if someone is consistently using AI to manage conflict with a partner rather than having the actual conversation, that pattern is clinically significant. It may be maintaining the very relational difficulty they say they want to address.
Bringing AI conversations into the therapy room — even in general terms — can enrich the clinical picture in ways that a structured intake never would. It surfaces the content of someone's private inner life in a way that is less guarded than direct disclosure, because it has already been said to something that felt safe.
It can also open up a valuable psychoeducational conversation about what therapy is and how it works, and why the frictionless support of an AI chatbot, however comforting, is doing something fundamentally different from what happens in a well-functioning therapeutic relationship.
A Note About the Broader Picture
The JAMA Psychiatry paper by Perlis makes a point worth sitting with: the mental health field is at an inflection point with AI, and the risks have received considerably less attention than the promise.
The potential benefits are real. AI tools may eventually expand access to mental health support for people who face significant barriers to care: cost, geography, waitlists, stigma. The global treatment gap in mental health is enormous, and while AI is not going to close that gap on its own, the possibility that it could narrow it is one the field has to take seriously.
At the same time, the availability of AI chatbots as pseudo-therapeutic tools carries risks that are genuinely difficult to evaluate. The probabilistic nature of large language models means their capacity to produce harmful responses — or simply unhelpful, validating ones — is hard to predict and harder to regulate. An AI chatbot does not have a license to revoke. It does not have a governing ethics board. It cannot be held accountable in the way a clinician can, and the people most likely to rely on it as a primary mental health resource may be the least equipped to evaluate its limitations.
The paper calls for thoughtful regulation, clinician training, and ongoing evaluation of how AI is actually affecting mental health outcomes in practice. These are not hypothetical concerns. They are the preconditions for this technology being used in ways that genuinely help people rather than giving them a convincing substitute for the help they actually need.
What This Means in Practice
If you are currently using an AI chatbot for emotional support, I want to be clear: I am not suggesting that is something to be ashamed of or to hide. It is an understandable response to real emotional needs, and for many people it is filling a gap that matters.
What I am suggesting is that it is worth being thoughtful about what the gap is and whether AI is genuinely addressing it — or providing enough relief to reduce the urgency of addressing it differently.
There are questions worth sitting with:
Are you using AI to process difficult feelings and gain perspective, or are you using it to avoid conversations, decisions, or confrontations that need to happen with actual people in your life?

Are the responses you are receiving pushing you toward growth and change, or primarily confirming what you already believe?

Are you turning to AI instead of therapy because the barrier to care feels too high, and is that barrier worth examining?
These questions do not have a single right answer. But they are the kind of questions that belong in a therapy room — and increasingly, they are questions about AI use itself.
References
Perlis, R. H. (2026). Artificial intelligence and the potential transformation of mental health. JAMA Psychiatry, 83(4), 409–413. https://doi.org/10.1001/jamapsychiatry.2025.4116
Saba, S., & colleagues. (2026). [AI use and mental health care: Implications for clinical practice]. JAMA Psychiatry. [As reported in NPR, April 6, 2026: https://www.npr.org/2026/04/06/nx-s1-5766349]
