Who wants a therapist who's robotic? But a robot therapist? Maybe.

Artificial intelligence shows promise and peril. But some people might find it more comforting to tell their problems to a machine than to a human.

A young man talks to a robot therapist. (Illustration by News Decoder)

This article, by high school students Sienna Mamoun and Alexa Taras, was produced through News Decoder's school partnership program and won First Prize in the 16th News Decoder Non-fiction Storytelling Contest. Both are students at the Hewitt School in New York City, a News Decoder partner institution. Learn more about how News Decoder can work with your school.

Imagine feeling overwhelmed and in need of someone to talk to, but no one is available. You have no idea what to do, who to talk to or what to say. An AI chatbot is your new best friend. Essentially, it can stand in for basic human interaction, taking on your problems and answering even the most absurd questions.

An artificial intelligence chatbot provides support and guidance. But there are some things AI cannot replace, such as having a real person in front of you. Still, you feel a bit better knowing you have some support.

The 1980s are often described as a rapid "AI boom." But the first chatbot came much earlier: in the mid-1960s, Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology, developed ELIZA, a program that simulated human conversation. He envisioned it taking on the persona of a psychotherapist.

The field's original purpose was "to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans and improve themselves." A user would type a message on an electric typewriter linked to a mainframe, and shortly after, the "psychotherapist" would respond.
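To give a sense of how such an early, rule-based chatbot could work, here is a minimal illustrative sketch in Python. The patterns and canned replies are invented for this example and are far simpler than Weizenbaum's actual program; they only show the general idea of reflecting a user's words back as a question.

```python
import random
import re

# A minimal, illustrative sketch of ELIZA-style pattern matching.
# Weizenbaum's actual program was far more elaborate; the rules below
# are invented for this example.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i need (.*)", ["Why do you need {0}?"]),
    (r".*", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(text: str) -> str:
    """Swap first- and second-person words so the echoed phrase reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(message: str) -> str:
    """Return a canned reply for the first rule that matches the message."""
    cleaned = message.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            groups = [reflect(group) for group in match.groups()]
            return random.choice(replies).format(*groups)
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel overwhelmed and alone."))
    # One possible reply: "Why do you feel overwhelmed and alone?"
```

Even this toy version shows why the illusion of conversation was so striking: the machine never understands anything, it simply turns the user's own words into a question.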

Decades later, in 2017, chatbots finally became recognized as a mainstream form of communication. Thanks to continuous innovations in technology, chatbots have grown into a type of artificial intelligence application that poses as a sort of digital friend you can lean on.

Imitating human interaction or improving on it?

For people who are unable, mentally or physically, to talk with another person, conversational AI, also known as CAI, has become a popular way of working through problems. It gives users someone to interact with, using large volumes of data, machine learning and natural language processing to imitate human interaction.

Especially during the Covid-19 pandemic, those who were isolated at home saw chatbots as a way to communicate. Chatbots began to fill a gap in the lives of people who were unable to connect with others in the real world.

Although chatbot therapy takes many forms, a study by Emre Sezgin at the Center for Biobehavioral Health in the U.S. state of Ohio found that of the 103 million U.S. adult users, at least 13% reported using chatbots simply to have someone to interact with.

Wysa is a leading AI mental health app that helps people with emotional challenges. It uses clinically proven AI and offers human coaching, too. The app helps track emotions, promotes positivity and makes cognitive behavioral therapy easy to use.

It offers a chatbot therapist and plenty of exercises to help with stress and anxiety. Beyond mental health, AI systems like this can automatically extract information from documents such as invoices, contracts and forms, which not only saves businesses valuable time and effort but also reduces the risk of human error.

On the other hand, such tools could also provide false information to the public. Since a chatbot can act as a "virtual health assistant," many people could rely on it as a resource for predicting symptoms. However, such assumptions can harm the patient.

Can a machine have a bedside manner?

Dr. Michael Mamoun, a neuroscientist and physician in California specializing in psychiatry, neuroimaging and brain stimulation, worries about using AI for counseling.

“It [CAI] could lead a patient or a doctor down the wrong path,” Mamoun said. “It could psychologically harm or affect a patient adversely. Even if the information is correct, if it’s not presented to the patient in a proper and kind of gentle and appropriate manner, then that could be shocking or stunning to the patient.” 

Mamoun saw an increase in social anxiety as a result of the Covid pandemic, when many people became used to the idea of talking with a program rather than a person.

In psychiatry, only human doctors can establish critical patient-doctor communication, he said. That sets it apart from many fields of medicine, such as radiology, where the skill set required to interpret images and form accurate diagnoses is well suited to AI. Many radiologists both recognize and fear that their jobs are slowly being replaced by AI.

But AI chatbots can provide considerable support to doctors without replacing them. Medical professionals can use AI as a collaborator, not a replacement. A form of AI known as a decision support system (DSS) can assist mental health professionals in making evidence-based treatment decisions.

A more accurate diagnosis

A study published in 2022 by a team of researchers led by Salih Tutun, a professor at the Olin Business School at Washington University in St. Louis and an expert in deep learning, neuropsychology and healthcare analytics, found that accurate diagnosis of mental disorders by a DSS could reduce overall healthcare costs by preventing misdiagnosis, overdiagnosis and unnecessary treatment.

Natural language processing, an aspect of AI that helps analyze patients' behavior in conversation, can also help doctors by detecting patterns in a patient's social media posts, texts, emails and other forms of communication that correlate with mental health issues.
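As a rough illustration of what such pattern detection might look like at its very simplest, here is a short Python sketch. The keyword list and threshold are invented for this example; real clinical NLP systems rely on trained and validated statistical models, not hand-picked word lists.

```python
import re
from collections import Counter

# A rough, illustrative sketch of keyword-based pattern detection in text.
# The word list and threshold are invented for this example.

CONCERN_WORDS = {"hopeless", "exhausted", "alone", "worthless", "anxious", "overwhelmed"}

def flag_messages(messages, threshold=3):
    """Count concern-related words across messages and flag if the total meets a threshold."""
    counts = Counter()
    for message in messages:
        counts.update(re.findall(r"[a-z']+", message.lower()))
    score = sum(counts[word] for word in CONCERN_WORDS)
    return score >= threshold

sample = [
    "I feel so alone lately and completely exhausted.",
    "Everything seems hopeless, honestly.",
]
print(flag_messages(sample))  # True: three concern-related words appear across the texts
```

A real system would weigh context, tone and history rather than counting words, which is why researchers stress that such tools should support, not replace, a clinician's judgment.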

Many people are grateful for the development of AI. In a 2023 article in Scientific American, science journalist Sara Reardon looked at a Reddit thread and found people praising the continuous support they received from AI, with one poster even saying: "ChatGPT is better than my therapist."

Chatbots can provide help to people who might need psychological consultation but can't afford it, are leery of seeing a human or would have to wait for an appointment. Where a therapist has working hours, a chatbot is there 24 hours a day, every day.

People from a higher socio-economic background also have a better understanding not just of what AI is, but of how to use it. According to a 2023 survey of people in the United States by the Pew Research Center, upper-income adults are more likely to be aware of AI (52%) than lower-income adults (15%).

As a result, lower-income adults, who are less aware of how AI is used, are more vulnerable to manipulation by it. They can fall victim to it if they are not mentally prepared to be confronted with specific answers.

Mimicking human behavior

The scary truth is that AI can now be used to mimic human behavior. But can it replace us? To ensure that AI doesn't take over basic forms of human interaction, psychiatrists have continued to stress the importance of leaving core therapeutic relationships and decision-making to human professionals.

Despite this, AI cannot be avoided. According to the Pew study, AI is part of 65% of workplace customer service chatbots, and 64% of respondents say they take recommendations provided by AI, based on their search history, for what they buy.

The fact is, we have no inkling of the long-term effects of AI. This generation, this era of humanity, is the first to be exposed physically and mentally to this advanced level of technology, and consequently, we are the experimentation generation.

“I think the sky’s the limit,” Mamoun said. “You know, there may be possibilities that we’re not even aware of.”

Currently, psychiatrists are trying to push back against AI's hold on their patients by limiting its use in the decision-making process.

Yet doctors do not have control over their patients. While exposing the wrong people to this type of technology can be dangerous and misleading, its presence also holds many benefits for low-income communities. If we want to stop AI from providing misinformation, we must help others face this truth, and that might require a human touch.

“Some people say they want to know about their problems but they really don’t, you know?” Mamoun said.

Questions to consider:

1. What could be an advantage AI has in therapy over a human therapist?
2. What are some concerns doctors have about using AI for therapy?
3. Would you feel more or less comfortable telling your personal problems to a machine? Why?

Sienna Mamoun

Sienna Mamoun is in her second year at The Hewitt School. Outside of school, she enjoys spending quality time with friends and family, as well as engaging in activities like basketball, dancing and surfing. Professionally, Mamoun works as a babysitter. After high school, Mamoun plans to study business and entrepreneurship.

Alexa Taras

Alexa Taras is entering her fourth year at The Hewitt School in New York City. She spends her personal time volunteering as an assistant teacher at her synagogue and playing tennis. In school, she is an active member of Model UN and varsity soccer. 
