Stories about people building emotional connections with AI are appearing more often, but Anthropic just dropped some numbers suggesting it’s far less common than it might seem. Analyzing 4.5 million conversations with Claude, the company found that only 2.9 percent of users engage with it for emotional or personal support.
Anthropic also wanted to emphasize that while sentiment usually improves over the course of a conversation, Claude is not a digital shrink. It rarely pushes back except on safety grounds: it won’t give medical advice, and it will tell people not to self-harm.
But those numbers might say more about the present than the future. Anthropic itself admits the landscape is changing fast, and what counts as “affective” use today may not be so rare tomorrow. As more people interact with chatbots like Claude, ChatGPT, and Gemini, and do so more often, more of them will bring AI into their emotional lives. So how exactly are people using AI for support right now? Current usage may also hint at how people will use these tools as AI gets more sophisticated and personal.
Ersatz therapy
Let’s start with the idea of AI as a not-quite therapist. While no AI model today is a licensed therapist (and they all make that disclaimer loud and clear), people still engage with them as if they were. They type things like, “I’m feeling really anxious about work. Can you talk me through it?” or “I feel stuck. What questions should I ask myself?”
Whether the responses that come back are helpful probably varies, but there are plenty of people who claim to have walked away from an AI therapist feeling at least a little calmer. That’s not because the AI gave them a miracle cure, but because it gave them a place to let thoughts unspool without judgment. Sometimes, just practicing vulnerability is enough to start seeing benefits.
Sometimes, though, the help people need is less structured. They don’t want guidance so much as relief. Enter what could be called the emotional emergency exit.
Imagine it’s 1 AM and everything feels a little too much. You don’t want to wake up your friend, and you definitely don’t want to scroll more doom-laced headlines. So you open an AI app and type, “I’m overwhelmed.” It will respond, probably with something calm and gentle. It might even guide you through a breathing exercise, say something kind, or offer a little bedtime story in a soothing tone.
Some people use AI this way, like a pressure valve – a place to decompress where nothing is expected in return. One user admitted they talk to Claude before and after every social event, just to rehearse and then unwind. It’s not therapy. It’s not even a friend. But it’s there.
For now, the best-case scenario is a kind of hybrid. People use AI to prep, to vent, to imagine new possibilities. And then, ideally, they take that clarity back to the real world. Into conversations, into creativity, into their communities. But even if the AI isn’t your therapist or your best friend, it might still be the one who listens when no one else does.
Decision-making
Humans are indecisive creatures, and figuring out big decisions is tough, but some have found AI helpful for navigating those choices.
The AI won’t recall what you did last year or guilt you about your choices, which some people find refreshing. Ask it whether to move to a new city, end a long relationship, or splurge on something you can barely justify, and it will calmly lay out the pros and cons.
You can even ask it to simulate two inner voices, the risk-taker and the cautious planner. Each can make their case, and you can feel better that you made an informed choice. That kind of detached clarity can be incredibly valuable, especially when your real-world sounding boards are too close to the issue or too emotionally invested.
Social coaching
Social situations can cause plenty of anxiety, and it’s easy for some to spiral into thinking about everything that could go wrong. AI can act as a kind of social script coach for them.
Say you want to say no without causing a fight, or you’re meeting people you want to impress and are worried about first impressions. AI can help draft a text declining an invite, suggest ways to ease into conversations with different people, or even role-play so you can rehearse full conversations, testing different phrasings to see what feels right.
Accountability pal
Accountability partners are a common way for people to help each other achieve their goals: someone who will make sure you go to the gym, get to sleep at a reasonable hour, and even maintain a social life by reaching out to friends.
Habit-tracking apps can help if you don’t have the right friend or friends to help you. But AI can be a quieter co-pilot for real self-improvement. You can tell it your goals and ask it to check in with you, remind you gently, or help reframe things when motivation dips.
Someone trying to quit smoking might ask ChatGPT to help track cravings and write motivational pep talks. Or an AI chatbot might keep a journaling habit alive with reminders and prompts for what to write about. It’s no surprise that people might start to feel some fondness (or annoyance) toward the digital voice telling them to get up early to work out or to invite friends they haven’t seen in a while out for a meal.
Ethical choices
Related to using AI for making decisions, some people look to AI when they’re grappling with questions of ethics or integrity. These aren’t always monumental moral dilemmas; plenty of everyday choices can weigh heavily.
Is it okay to tell a white lie to protect someone’s feelings? Should you report a mistake your coworker made, even if it was unintentional? What’s the best way to tell your roommate they’re not pulling their weight without damaging the relationship?
AI can act as a neutral sounding board. It can weigh, say, whether accepting a friend’s wedding invite while secretly planning not to attend is better or worse than declining outright. The AI doesn’t have to offer a definitive ruling. It can map out competing values and help the user define their principles and how those lead to an answer. In this way, AI serves less as a moral authority than as a flashlight in the fog.
Affective AI
Right now, only a small fraction of interactions fall into that category. But what happens when these tools become even more deeply embedded in our lives? What happens when your AI assistant is whispering in your earbuds, popping up in your glasses, or helping schedule your day with reminders tailored not just to your time zone but to your temperament?
Anthropic might not count all of these as affective use, but maybe it should. If you’re reaching for an AI tool to feel understood, get clarity, or move through something difficult, that’s not just information retrieval. That’s connection, or at least the digital shadow of one.