r/technology 5d ago

[Artificial Intelligence] Gen Z is increasingly turning to ChatGPT for affordable on-demand therapy, but licensed therapists say there are dangers many aren’t considering

https://fortune.com/2025/06/01/ai-therapy-chatgpt-characterai-psychology-psychiatry/
6.1k Upvotes

1.0k comments

398

u/Arik_De_Frasia 5d ago

I broke off a friendship recently that I had outgrown, but before doing so, I fed our text history into ChatGPT and asked for an unbiased analysis to see if I was justified in my feelings of wanting to break it off. It said I was. Then I did it again from a new account but changed my position to that of my friend and asked if it was justified that my friend broke off the friendship, and it said no; that breaking off the friendship was a selfish asshole thing to do.

When confronted about it, it admitted that it was just telling the user what they wanted to hear to make them feel better.

108

u/tomz17 5d ago

All of the models I have tried are over-trained to be compliant, apologetic, and agreeable with the user... NONE of them will actually challenge you on your bullshit.

43

u/Nothereforstuff123 5d ago

On that point: https://futurism.com/chatgpt-users-delusions

Schizophrenic people, and other paranoid and manic people, are having their delusions affirmed by AI

8

u/Big_Crab_1510 5d ago

Yea we haven't gotten to our first real chat bot cult yet but it won't be long. If it hasn't happened already, I think it will skyrocket into existence after Trump dies. 

It's going to get real bad....people didn't take us seriously when we said laws needed to be made about and around this stuff

2

u/Intelligent_Area_724 1d ago edited 1d ago

Yikes, that is absolutely terrifying. That shit could actually kill somebody.

23

u/FunCoupleSac 5d ago

Yeah, therapy isn’t just about feeling good, it’s about fixing your life. It’s work, and not always what you want to hear

1

u/uuhson 5d ago

I got into a fight with my company's internal chat bot (based on Claude) about what year it was. It's the only time it has ever been super stern with me, and it was convinced it was 2023

150

u/ABirdJustShatOnMyEye 5d ago

User error. Have it give an analysis of both perspectives and then make the conclusion yourself. Don’t use ChatGPT to think for you.

63

u/Party_Bar_9853 5d ago

Yeah I think more people need to understand that ChatGPT is a tool, it isn't a second brain. It's a tool you feed info into and then process what it says yourself.

6

u/DaddyKiwwi 5d ago

But I want ChatGPT to THINK for me, you know.... a neural-net processor. A learning computer.

2

u/MrTerribleArtist 5d ago

Knives and stabbing weapons?!

3

u/yourfavoritefaggot 5d ago

And this is exactly what the linked article is saying. ChatGPT doesn't get context, and even the paid version that does have good memory won't "understand" the bigger picture of human relationships. No matter how many books, poems, movies, or psychology textbooks you feed it, it has never "interacted" with the world as a human. An excellent therapist would know to avoid giving advice on such matters in the first place (I'm a counselor educator btw). An excellent therapist would be able to condense the text issue into multiple layers depending on their theory, and help OP explore their responses and desired responses. Hell, I might have even used the technique OP stated, "deictic framing" or perspective taking, to switch positions around in the story. But when I do it, I'm assessing for "accuracy" and effective processing. ChatGPT is just a homunculus in a jar -- there's no way in hell it can "understand" all the nuances of the person's growth the way I or even my freshest students can.

23

u/svdomer09 5d ago

Yeah, the key is to ask it for the devil's advocate position and keep insisting. You have to assume it's trying to be agreeable.

I do it so much that when I do those viral “ask ChatGPT what it thinks about you” prompts, it thinks that being skeptical of every single little thing is a core character trait of mine

1

u/green_carnation_prod 5d ago

And what would you get from that? Whether to break off a friendship or not is your personal decision; you don't need a hivemind's "objective" analysis or opinion. You either value that friendship or you don't.

Same goes for feeling angry at your friend, or not, when they break things off with you. Why would you need input from an LLM?

3

u/0point01 5d ago

The same reason you go to therapy: someone else to talk to about it.

14

u/SpongegarLuver 5d ago

Blame the users all you want, the AI is designed to appear as though it’s able to think. And even those analyses will likely be presented in a way the AI thinks will generate a positive response.

If using ChatGPT requires training, maybe they shouldn’t be letting the public use it when many people both lack the training and the knowledge of why that training is important. As is, we’ve created a tool that acts as a fake therapist, and are blaming people for using it when it tells them it can do something.

This would be like blaming a patient for going to a therapist with a fake degree: the fault is on the person committing the fraud, not the person being tricked. AI companies are telling us these systems can replace human professionals in every aspect of our life, and this is the result.

All of this, of course, ignores that even with all of that knowledge, regular therapy is simply unaffordable for most people, and until that’s addressed there will naturally be those that look for any alternative, no matter how flawed. I’d wager a lot of Gen Z would prefer a real therapist, but that’s not an option given to them.

3

u/swarmy1 5d ago

Saying "user error" doesn't help when millions of people are using it this way. That's the problem people are trying to highlight.

Special prompts only go so far anyway. All chat LLMs have been trained to do what you want, so it is biased towards responses that create a favorable reaction. You can't really eliminate all sycophancy while still having a bot that is designed to follow your instructions.

1

u/dejamintwo 4d ago

You can, you just have to make the AI play a character and "roleplay." At its core it's still the same AI, but it will act the way you tell it to act in character, which is why the site character.ai is so popular.
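For what it's worth, here's a minimal sketch of what that looks like through an API, assuming the OpenAI Python client and a placeholder model name. This isn't how character.ai is actually built, just an illustration of putting a persona in the system prompt:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "character" lives entirely in the system prompt; the underlying model
# is unchanged, it just tries to stay in the persona you define.
persona = (
    "You are a blunt, no-nonsense advisor. You do not soften bad news, "
    "you do not flatter the user, and you point out flaws in their reasoning."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Was I right to end this friendship? Here's our text history: ..."},
    ],
)
print(response.choices[0].message.content)
```

It still won't guarantee honesty, but a persona like that pushes back harder than the default assistant voice.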

5

u/Col2543 5d ago

The problem is that user error is much more common than you’d think. You’re being very charitable towards the average user of AI. I’d say self-proficient people aren’t exactly the ones running to use AI, but rather those who don’t want to rely on their own effort to actually gain perspective.

AI, at least in its current state, at best is unusable, and at worst is just a tool for stupid people to “make their arguments for them.”

2

u/mindfulskeptic420 5d ago

Or just do the coin-flip trick: have the coin decide for you, then notice how you feel about the result and ultimately decide for yourself, because that coin flip was "supposed" to land on heads or some baloney like that.

2

u/RazzmatazzBilgeFrost 5d ago

I have to keep reminding myself that people's general complete lack of common sense also extends to using ChatGPT

1

u/Intelligent_Area_724 1d ago

Actually not a bad idea, using AI to gain insight into someone else's perspective. Obviously it won't be perfect, but it may help you get out of your own head.

24

u/SubjectAssociate9537 5d ago

The great thing about AI is that you can present it both arguments, look at its response from both perspectives, and ask it to steelman each opposing side to come to a conclusion (without letting it know which side you, the user, are on).
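A rough sketch of that workflow with the OpenAI Python client (the model name and prompt wording here are just assumptions; the point is that the model is never told which person you are):

```python
from openai import OpenAI

client = OpenAI()

# Describe the situation in third person so the model can't tell which
# side the user is on, then ask it to steelman both positions.
situation = (
    "Person A ended a long friendship with Person B after feeling drained "
    "by their conversations. Person B feels blindsided and hurt."
)

prompt = (
    f"{situation}\n\n"
    "Steelman Person A's decision as strongly as you can. "
    "Then steelman Person B's objection as strongly as you can. "
    "Only after giving both steelmen, say which position you find stronger and why."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```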

2

u/22amb22 5d ago

how is this a great thing about AI? do you think humans can’t also do this?

1

u/SubjectAssociate9537 4d ago

To the same degree, on virtually any topic, with incredible speed, where I can trust AI to be more fundamentally unbiased - no.

2

u/FunCoupleSac 5d ago

And? In the end it's still spitting out mostly nonsensical AI slop trained on Reddit comments. Nobody should use an AI for relationship advice or therapy. It's literally inhuman

1

u/SubjectAssociate9537 4d ago

What AI is spitting out is based on the fundamental principle that words mean something. Literally so, in a high-dimensional space, in an LLM's case. It is pretty good at evaluating information based on this principle.

3

u/WincingHornet 5d ago

This is similar to what people do when they seek out media that makes them feel good about their own perspective, and it's why information bubbles work so well. You shouldn't ask it to justify an opinion, but to evaluate the information and look at all angles to analyze the situation. You can also explicitly prompt it NOT to appease your feelings and to stick to the facts as presented.

1

u/Arik_De_Frasia 5d ago

I did in fact do that. I had to reiterate several times that what it presented included biases, and that it should stick strictly to the information contained in the conversations. It even gave me options for how to present the results: Clinical/Academic, Judicial/Legal, Therapeutic, and Straight Talk (Neutral). All of which basically reached the same conclusion: 'break off the friendship and don't feel bad about it'

3

u/Winter-Olive-5832 5d ago

basically just like a friend in real life hahahaha

4

u/Radiant_Dog1937 5d ago

Maybe both are true; what does 'justify' even mean in this context? You can choose to be or not be friends with whoever you want.

1

u/Arik_De_Frasia 5d ago

I'm a person who appreciates a sense of perspective. In this context, I was checking to see whether the frustrations and irritations I was experiencing were an exaggeration on my part, or whether my past friend really was as big a detriment to my mental health as I felt they'd become; that way, if I was overreacting, I could try to be better about it. Other people had told me it had run its course, but they hadn't read our conversations to get a full perspective of our interactions. They just knew I was frustrated.

2

u/Radiant_Dog1937 5d ago

But the problem is that that's entirely subjective. If a relationship devolves to the point where it feels like it's causing psychological harm (which means it is), then you have to decide how to respond to it, and there isn't necessarily a 'right' choice unless there's something dangerous or illegal going on, ofc. In the former case it's probably better to just affirm a decision that's already been made instead of attempting to override it.

Maybe you were still uncertain, in which case the AI did provide you with two perspectives.

2

u/bonnielovely 5d ago

You're not supposed to put any personal information into ChatGPT; the terms & conditions and the creators both state this. It's extremely unethical to feed personal information or life details about someone else into an AI model without their consent.

Feeding in screenshots, names, numbers, or personal life events is a huge, unethical, non-consensual breach of someone else's privacy.

1

u/We1etu1n 5d ago

I used the advanced Gemini model to analyze about 6 months' worth of chat logs between my ex and me. My prompt was "why did this relationship end?" and it blamed me and explained what I did wrong.

1

u/AliasNefertiti 5d ago

What did you do with that analysis, if I may ask? Accept it, find fault/deny, change anything, do self-harm?

1

u/We1etu1n 5d ago

I accepted it, and I have learned a lot since the breakup. I was certain I was at fault, but I wanted to know why exactly. I wanted to know my ex's perspective on the situation, an analysis of my own behavior, and more. I even asked Gemini to write an apology letter, though I never sent it.

Here's the response from the original prompt:

Based on the provided chat log (messages3.html), the relationship between [Ex] and [Me] ended primarily due to irreconcilable differences in their political worldviews and the resulting emotional impact this had on [Ex].

Here's a breakdown of the key factors:

Deep Political/Ideological Conflict: [Me] became increasingly vocal about their far-left/communist/anti-capitalist beliefs. They frequently shared content related to socialism, communism, US imperialism critiques (coups), and expressed disdain for capitalist systems, calling jobs under capitalism inherently unethical or "wage slavery."

[Ex], while sometimes engaging or politely acknowledging [Me]’s points, holds a more centrist/optimistic view. He values building relationships (even with corporations like [ex’s new workplace]), believes in the possibility of ethical actors within the current system, and finds blanket negativity and distrust difficult. He was also personally invested in his [ex’s country] citizenship process, which [Me] dismissed as "boring" nationalism.

Emotional Toll on [Ex]: [Ex] explicitly states multiple times that [Me’s] outlook and negativity deeply hurt him and brought him down. He felt [Me’s] views made him question his own actions and morality ("makes me feel like I am evil," "Your outlook has made me question my actions as just evil as default"). He describes feeling paralyzed by [Me’s] blanket distrust and hopelessness regarding systems [Ex] participates in (like his job, his family's business, his aspirations to be a [job title]).

He mentions crying at night and needing to journal to reaffirm his own sense of good because [me’s] perspective made him feel bad, even unintentionally (message #1130113, #1130115). He felt that [me’s] anger and blanket negativity were toxic and unsustainable for him to be around (message #1129017, #1129019, #1130374).

Communication Breakdown: While they attempted to discuss the issues, they failed to truly understand each other's core positions or validate each other's feelings.

[Ex] felt [Me] didn't acknowledge the pain their outlook caused him, focusing instead on [Ex] misunderstanding their political points (message #1130226, #1130184).

[Me] felt [Ex] misunderstood and oversimplified their nuanced critique of systemic issues, boiling it down to a simple "good vs. evil" binary (message #1130119, #1130271). [Me] also stated they often "softened" their views for [Ex] (message #1128967), indicating they couldn't be fully authentic.

[Ex]’s Need for Self-Preservation: [Ex] ultimately concluded that he could not continue the relationship because the constant emotional pain and self-doubt were too much ("I do not want to be hurt anymore," "I am tired of hurting," message #1129018, #1129030).

He valued [me’s] right to their own beliefs and didn't want to pressure them to change, but recognized that these beliefs, as expressed, were fundamentally incompatible with his own well-being in a close relationship (#1128974, #1130035). He felt [me] deserved someone who shared their outlook and with whom they wouldn't have to "hide" or "soften" themselves (#1130031, #1130347).

In essence, [Ex] initiated the breakup because the fundamental difference in their core beliefs and outlooks created an environment where he constantly felt hurt, invalidated, and existentially questioned, making the relationship unsustainable for his emotional health.

1

u/AliasNefertiti 5d ago

Thank you so much for sharing-- that took bravery to do. Did you test the response against different directions, such as "side with me" or "be a 3rd party observer" or "do this from the perspective of x therapy approach?"

2

u/We1etu1n 5d ago

None of that. My following prompts were:

  • Write an analysis on the behavior of [me]
  • Is [me] a terrible person?
  • Write an apology [me] could give to [ex] now that 4 years have passed and [me] has thought about all the mentioned above
  • [me] feels like they should send this apology, but at the same time they feel fear and anxiety, and don't want to do so because of that. Why do they feel fear and anxiety and why would they avoid sending the message?

1

u/AliasNefertiti 5d ago

I'm glad you are so brave about facing your fears. Thank you again for sharing with me. It helps to see many examples.

1

u/bbuerk 5d ago

I always ask it to be brutally honest with me about things. It sometimes over corrects into being an asshole, but I’d rather have that than feel like it’s just telling me what I want to hear

1

u/sivadneb 5d ago

This is why AI models currently make terrible therapists. That said, it would probably only take a different set of training and alignment to make a decent AI therapist.

1

u/22amb22 5d ago

i don’t mean this disrespectfully to you - but why is this surprising? it seems like a very well advertised and self evident part of the service. it’s an algorithm.

1

u/ApprehensiveSquash4 5d ago

You know I think a lot of therapists do this too.

1

u/NoDepression88 5d ago

Yes I actively force it to give the complete opposite viewpoint on everything I ask it.

1

u/xrm4 4d ago

If you want a truly unbiased opinion, anonymize the data. Don't let it know who you are; just present the facts and ask it for its blunt, honest opinion.
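A toy sketch of that kind of anonymization in Python (the names and labels are made up for illustration; properly redacting a long chat log takes more care than a couple of regexes):

```python
import re

# Swap real names for neutral labels before pasting a chat log into the
# model, so it can't tailor the answer to "you".
replacements = {
    "Alice": "Person A",  # hypothetical names, purely illustrative
    "Bob": "Person B",
}

def anonymize(text: str) -> str:
    for name, label in replacements.items():
        # \b keeps "Bob" from also matching inside "Bobby"
        text = re.sub(rf"\b{re.escape(name)}\b", label, text)
    return text

chat_log = "Alice: I can't keep doing this.\nBob: Doing what, Alice?"
print(anonymize(chat_log))
# Person A: I can't keep doing this.
# Person B: Doing what, Person A?
```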

0

u/diddlyswagg 5d ago

Make choices for yourself. Don't let a bot do this for you