With college deadlines approaching in the fall of her senior year, MVHS ‘25 alum Jami Lim was faced with a conundrum. Her mile-long checklist of tasks, from supplemental essays to coursework for her challenging classes, continued to grow, and she sought an outlet to express her frustrations and receive constructive feedback.
To do so, she turned to ChatGPT. To implement the specific advice she received, Lim converted it into affirmational phrases that she repeated throughout the day during especially hard moments. Her therapeutic use of AI isn’t unique: according to a 2025 study indexed by the National Library of Medicine, 28% of AI users reported using chatbots for mental health support.
“When I struggle mentally, I want answers: like a solution or an action plan to combat how I’m feeling,” Lim said. “At that time, when I was struggling with a lot of self-doubt, anxiety and stress, I would just pour everything I was feeling and have ChatGPT explain the nuances of what I was experiencing and feeling. With that, it was able to feed me back possible steps to work towards a better mental state.”
According to Shawn Ma, an engineering manager at Google Cloud who works extensively with AI, many chatbots used for therapeutic purposes weren’t specifically designed for that role. However, given that therapy-specific models like Wysa, Woebot and Earkick already exist, it isn’t impossible for more general models to be retrained for the purpose. Ma explains that because AI models are trained on a vast breadth of human knowledge, they can be prompted to act as almost anything, including a therapist.
However, Ma warns that using AI as a therapist could deepen isolation within communities if avoiding human interaction in favor of a screen becomes more common. He sees a similar risk in retraining AI models to replace therapists, doctors or lawyers, which would raise ethical concerns and remove human judgment from high-stakes scenarios. Still, he acknowledges that for some people, talking to a human therapist can feel more daunting than speaking to an algorithm. Another possible benefit Ma points to is the range of communication styles an AI bot can offer.
“When talking to a human therapist, sometimes you need to interview a lot to find the one that is your own style of communication,” Ma said. “With AI, you can ask them to pretend whatever communication style you want. AI is much easier to customize. You just prompt the engineering. I just tell them, ‘Hey, I want you to be a very nice therapist.’”

Tina Yang, a behavior analyst who focuses on elementary schoolers with disabilities, acknowledges some benefits of AI, but maintains that extreme reliance on AI chatbots is against human nature and that friendly relationships with them shouldn’t be pursued. A 2025 study by MIT’s Media Lab found that ChatGPT users showed lower brain engagement and underperformed at linguistic and neural levels, suggesting that heavy ChatGPT use may erode necessary cognitive skills.
Yang also warns that the information AI provides is not specific enough. The advice chatbots draw from public sources, she says, can’t be personalized to an individual’s needs the way a trained psychologist’s would be. Different people, she explains, require different approaches to intervention.
“Our human brain is the one who determines what goes on next, not AI,” Yang said. “That’s why I feel that when we use AI, we need to know that no matter how you use it, you need to monitor the content that AI provides and how you want it to support you.”
Despite Yang’s belief that AI therapy can’t be personalized, Lim found that ChatGPT’s advice targeted the exact things that made her anxious. It helped her move past her self-doubt and achieve her goals in high school. But as a college freshman, she is more wary of the dangers that could come with using it too extensively.
“It’s really easily accessible at any given time, as opposed to therapy, which has scheduled events,” Lim said. “However, what I would advise is not to rely on it too much. It’s dangerous to think of ChatGPT as an actual entity that you’re talking to and find solace in the idea that it exists, that you’re talking to something real. I would suggest seeing it more as an application that can help you expand on what you already know about yourself.”
Ma believes users should be careful about the information they put into chatbots. User prompts are stored in data centers contracted by the companies running the bots, and while these centers are largely secure, keeping personal information private remains crucial. Professional therapists, in contrast, are legally bound to keep conversations confidential. Ma also cautions against relying on AI therapy because research into its capabilities is still limited.
Despite worries that AI might soon replace critical jobs, Yang feels confident that traditional therapy is safe for the time being because it depends on human-to-human communication and connection. In her view, the skills AI lacks mean it cannot completely replace therapists.
“AI mental health therapists might be able to sound more empathetic in a more natural way than real humans, but the warmth among humans or any living creatures can’t be replaced by any machines,” Yang said. “The warmth among humans comes from our heart, blood flow, sensory and receptive nerves. Human service will be the last to be replaced by AI because ultimately, we are real living organisms, AI is not.”