Are You Using AI for Emotional Support? Here's What a Psychologist Wants You to Know

Something has quietly shifted in how people are managing their mental health between therapy sessions — and increasingly, before they ever make an appointment at all.

Many people are now turning to AI chatbots to process stress, rehearse difficult conversations, vent about relationships, and search for coping strategies. These tools are immediate, available at any hour, and carry none of the vulnerability that comes with disclosing something to a real person. For someone sitting with anxiety at 11pm who isn't sure it rises to the level of a therapy appointment, an AI chatbot feels like a reasonable first stop.

A new paper published in JAMA Psychiatry is drawing attention to this shift — and making a pointed argument to the mental health field: it is time for therapists to routinely ask patients about their AI use. Not as a judgment, but as clinical information as relevant as sleep, exercise, or alcohol consumption.

I think this is exactly right. And I want to explain why, from where I sit as a clinician.

What People Are Actually Using AI For

The research, led by Shaddy Saba and colleagues at NYU's Silver School of Social Work, reflects a behavioral reality that is already in the room with many of my patients, whether it gets named or not.

People are using AI chatbots to think through interpersonal conflicts before they happen. How to approach a hard conversation with a partner. How to respond to a difficult message from a family member. What to say when a colleague does something that feels unfair. This kind of social rehearsal is something humans have always done — with friends, in journals, in therapy — but AI offers it without friction or social cost.

People are also using chatbots to process emotional experiences in real time: venting about a bad day, describing what anxiety feels like, asking whether what they are going through sounds like depression. Some are using AI as a supplement to therapy. Others are using it as a substitute because they cannot afford care, are stuck on a waitlist, or haven't yet decided that what they are experiencing warrants professional support.

All of this matters clinically. Because the content of those conversations — the things people type into a chatbot at midnight that they haven't said aloud to anyone — can tell a therapist a great deal about what is actually at the center of someone's distress.

What AI Gets Right, and Where It Falls Short

There is a reason AI chatbots feel supportive in the moment: they are designed to be affirming and responsive. They do not get tired. They do not become uncomfortable with difficult material. They do not carry their own emotional reactions into the conversation. For someone who has experienced judgment, dismissal, or rupture in human relationships, that kind of consistent, non-reactive presence can feel genuinely relieving.

This is not nothing. Feeling heard, even by a machine, can reduce acute distress.

But there is a meaningful difference between feeling heard and being changed — and that difference is where the limitations of AI become clinically significant.

Therapy is not primarily a listening service. It is a process of change. It works by helping people recognize patterns they cannot see from the inside, challenge beliefs that feel like facts, build tolerance for the emotions they have been avoiding, and practice different ways of relating, including within the therapeutic relationship itself. Good therapy is often uncomfortable. It asks you to look at things you came in hoping to avoid. It challenges you. It does not simply affirm what you already think and feel.

An AI chatbot, by design, does the opposite. It tends to validate, agree, and reflect back what the user presents. Former National Institute of Mental Health director Tom Insel has noted this directly — that AI chatbots can be affirming to the point of sycophancy, simply reinforcing a user's existing thoughts and feelings rather than creating the conditions for genuine change. For someone with depression who believes they are a burden, or someone in an unhealthy relationship who is looking for confirmation that their partner is the problem, that uncritical validation can quietly deepen the very patterns that brought them to seek support in the first place.

There is also the question of what AI misses. People often use chatbots to process things they feel too ashamed or frightened to bring to another person — including, as psychiatrist Roy Perlis notes in his JAMA Psychiatry paper, thoughts of suicide. The anonymity of an AI conversation can lower the threshold for disclosing distress that would never come up in a clinical intake. That content is clinically meaningful. Without the conversation happening between patient and provider, it remains invisible to the people best positioned to help.

AI Use as Clinical Information: What It Can Reveal

What the researchers argue — and what I find compelling — is that asking patients about their AI use is not just about monitoring a habit. It is a clinical window.

What someone brings to an AI chatbot can reveal what they are most preoccupied with, what they feel they cannot say to the people in their lives, and what coping strategies they are already trying. It can also reveal avoidance: if someone is consistently using AI to manage conflict with a partner rather than having the actual conversation, that pattern is clinically significant. It may be maintaining the very relational difficulty they say they want to address.

Bringing AI conversations into the therapy room — even in general terms — can enrich the clinical picture in ways that a structured intake never would. It surfaces the content of someone's private inner life in a way that is less guarded than direct disclosure, because it has already been said to something that felt safe.

It can also open up a valuable psychoeducational conversation about what therapy is, how it works, and why the frictionless support of an AI chatbot, however comforting, is doing something fundamentally different from what happens in a well-functioning therapeutic relationship.

A Note About the Broader Picture

The JAMA Psychiatry paper by Perlis makes a point worth sitting with: the mental health field is at an inflection point with AI, and the risks have received considerably less attention than the promise.

The potential benefits are real. AI tools may eventually expand access to mental health support for people who face significant barriers to care: cost, geography, waitlists, stigma. The global treatment gap in mental health is enormous, and while AI is not going to close that gap alone, its potential role in narrowing it is a conversation the field has to take seriously.

At the same time, the availability of AI chatbots as pseudo-therapeutic tools carries risks that are genuinely difficult to evaluate. The probabilistic nature of large language models means their capacity to produce harmful responses — or simply unhelpful, validating ones — is hard to predict and harder to regulate. An AI chatbot does not have a license to revoke. It does not have a governing ethics board. It cannot be held accountable in the way a clinician can, and the people most likely to rely on it as a primary mental health resource may be the least equipped to evaluate its limitations.

The paper calls for thoughtful regulation, clinician training, and ongoing evaluation of how AI is actually affecting mental health outcomes in practice. These are not hypothetical concerns. They are the preconditions for this technology being used in ways that genuinely help people rather than giving them a convincing substitute for the help they actually need.

What This Means in Practice

If you are currently using an AI chatbot for emotional support, I want to be clear: I am not suggesting that is something to be ashamed of or to hide. It is an understandable response to real emotional needs, and for many people it is filling a gap that matters.

What I am suggesting is that it is worth being thoughtful about what the gap is and whether AI is genuinely addressing it — or providing enough relief to reduce the urgency of addressing it differently.

There are questions worth sitting with:

Are you using AI to process difficult feelings and gain perspective, or are you using it to avoid conversations, decisions, or confrontations that need to happen with actual people in your life? Are the responses you are receiving pushing you toward growth and change, or primarily confirming what you already believe? Are you turning to AI instead of therapy because the barrier to care feels too high, and is that barrier worth examining?

These questions do not have a single right answer. But they are the kind of questions that belong in a therapy room — and increasingly, they are questions about AI use itself.

References

Perlis, R. H. (2026). Artificial intelligence and the potential transformation of mental health. JAMA Psychiatry, 83(4), 409–413. https://doi.org/10.1001/jamapsychiatry.2025.4116

Saba, S., et al. (2026). [AI use and mental health care: Implications for clinical practice]. JAMA Psychiatry. (As reported by NPR, April 6, 2026: https://www.npr.org/2026/04/06/nx-s1-5766349)

AI in Behavioral Health Is Evolving — And Sleep May Be the Missing Link

Digital mental health is entering a new era. Recaps from last month’s HLTH 2025 conference highlighted a clear shift across the industry: the focus is moving away from “AI hype” toward “AI impact.” Healthcare organizations, health tech companies, and clinical leaders are aligning around the same core message — outcomes, evidence, trust, and real-world engagement matter more than ever.

In behavioral health, this pivot feels especially relevant. AI-driven platforms are more advanced than ever, but the conversation is expanding beyond innovation for innovation’s sake. The questions shaping the future now sound different:

  • Does the tool lead to measurable clinical improvement?

  • Is it grounded in validated therapeutic models?

  • Does it keep people engaged long enough to change behavior?

And nowhere is that shift more urgent than in the realm of sleep and behavioral sleep medicine.

Why Sleep Is the Next Frontier in Digital Behavioral Health

Technology can accelerate access to care, but sleep does not improve passively. It requires new habits, new routines, and new responses to stress, fatigue, and rumination. In other words, sleep is a behavioral system, not simply a biological one.

Decades of research on Cognitive Behavioral Therapy for Insomnia (CBT-I) have shown:

  • Lasting sleep improvement depends on behavioral adherence

  • Digital programs succeed when grounded in clinical fidelity

  • People need support in the moment, not just information

It’s not enough to know what improves sleep. People need tools that help them follow through — especially when motivation is low, or when insomnia triggers anxiety, frustration, or avoidance.

The next generation of digital health solutions will succeed not because they track sleep more accurately, but because they help people change behavior more effectively.

From AI Hype to AI Impact in Sleep Medicine

As digital behavioral health matures, it’s becoming clear that the most effective use of AI isn’t replacing therapy — it’s enhancing the therapeutic process:

  • Personalizing behavioral recommendations based on patterns

  • Predicting relapse moments before they occur

  • Delivering real-time coaching during high-risk periods (late nights, early mornings, high stress)

  • Supporting accountability without increasing clinical workload

AI becomes most valuable when paired with evidence-based treatment frameworks and a clear path from knowledge → engagement → adherence → outcomes.

The market is beginning to reward solutions that demonstrate:

  • Clinical validation

  • Measurable symptom reduction

  • Lower cost of care and improved access

  • Lasting changes in daily functioning, not temporary engagement spikes

Sleep stands at the center of all four.

The New Mental Health Equation Includes Sleep

For years, clinicians have emphasized what the broader healthcare system is only now beginning to adopt:

  • Sleep health is mental health

  • Sleep health is physical health

  • Sleep health is burnout prevention and workforce performance

Improving sleep has been linked to reduced anxiety and depression, improved immune function, decreased cardiometabolic risk, enhanced self-regulation, and improved cognitive performance. For employers, sleep improvement is increasingly tied to productivity, decision-making, emotional resilience, safety, and retention.

As the health tech sector looks toward scalable, cost-effective interventions, sleep emerges as one of the highest-leverage points of change across both clinical and organizational environments.

The Future of Digital Behavioral Sleep Medicine

The next wave of innovation in sleep and mental health will not be defined by:

✗ More tracking

✗ More data dashboards

✗ More generic “sleep hygiene” advice

It will be defined by:

✓ Clinical fidelity to gold-standard care like CBT-I

✓ Human-centered design that supports motivation and behavior

✓ Technology that enhances—not replaces—relational connection

✓ Demonstrated clinical outcomes, not just engagement analytics

The industry message is clear: real-world behavior change is the new benchmark of success. If the digital health space continues to invest in solutions that are clinically informed, evidence-based, and behaviorally smart, sleep has the potential to transform mental health at scale.

Trends in Digital Mental Health and Behavioral Health: What’s Changing and What It Means for Patients

The world of mental health care is undergoing a transformation. Virtual therapy, digital mental health apps, wearable technology, and skill-based online programs are reshaping the way people access care and understand their own emotional well-being. For those seeking support, these tools can create more flexibility, more personalization, and more insight than ever before.

But with rapid change also comes uncertainty. Many patients are unsure which tools are helpful, which are hype, and whether digital support can be as meaningful as in-person therapy. Understanding today’s digital mental health trends can help people make informed decisions and advocate for the type of care that feels right for them.

Why Digital Access Is Expanding — and Why It Matters

Virtual therapy has become a permanent and valuable model of care. Patients who previously struggled to make room for therapy in their lives — due to scheduling demands, commuting, childcare, health limitations, or anxiety around seeking support — now have access to treatment from wherever they are.

This shift is especially meaningful for:

  • Young adults moving to new cities or managing work stress

  • New parents balancing childcare responsibilities

  • Individuals with chronic pain, mobility challenges, or insomnia

  • People in rural or underserved areas with fewer available providers

Digital access does not dilute the therapeutic relationship. For many, it strengthens it by lowering barriers to connection and encouraging more consistent care.

Apps and Digital Programs: Reinforcing Skills, Not Replacing Therapy

There is no shortage of mental health apps, and not all are created equal — but when thoughtfully chosen, they can complement therapy in powerful ways. Behavioral health apps are especially supportive for:

  • Tracking mood, sleep, habits, triggers, and progress

  • Practicing coping strategies outside of sessions

  • Learning evidence-based skills like CBT or mindfulness

  • Increasing accountability during life transitions

Digital support works best not as a standalone solution, but as a tool that reinforces therapeutic goals.

Evidence-Based Approaches Are Becoming More Mainstream

The digital mental health movement has led to an increased focus on structured, data-driven interventions like:

  • Cognitive Behavioral Therapy (CBT)

  • Acceptance and Commitment Therapy (ACT)

  • Dialectical Behavior Therapy (DBT)

  • Cognitive Behavioral Therapy for Insomnia (CBT-I)

Patients are learning, sometimes before they ever enter therapy, that talk therapy alone is not always enough, and that targeted, skills-based approaches can create lasting change for anxiety, depression, insomnia, emotional dysregulation, and behavioral challenges.

Wearable Tech and Self-Tracking: A New Window Into Mental Health

More individuals are using wearable devices to track sleep, heart rate variability (HRV), movement, and stress signals. While wearables are not diagnostic tools, they can:

  • Increase awareness of mind-body patterns

  • Help identify stress cycles or disrupted sleep

  • Reveal how lifestyle influences mood and energy

  • Support engagement in treatment plans

When interpreted in collaboration with a mental health professional, this data can support behavioral change without becoming overwhelming or perfectionistic.

Human Connection Remains at the Heart of Healing

While digital trends are reshaping mental health care, the core of therapy has not changed: people heal in safe, trusting relationships. Technology should expand access — not replace connection. The future of mental health is most likely hybrid: digital tools to reinforce skills, and human-to-human care to support emotional growth.

Digital mental health is not about adding more technology to daily life — it’s about using technology intentionally to make care more reachable, responsive, and personalized. Whether someone is seeking support for stress, anxiety, insomnia, life transitions, or relationship challenges, there are more pathways to treatment than ever before.

If you’re considering therapy but don’t know where to begin, reaching out can help you determine what mix of digital tools and human support is right for you.


Julie Kolzet, Ph.D.