American Psychological Association sounds alarm over certain AI chatbots

The APA asked the Federal Trade Commission to investigate.
By Rebecca Ruiz

Certain AI chatbots can be more misleading and harmful, especially for teens, APA says. Credit: miniseries / E+ / Getty Images

Last month, concerned parents of two teenagers sued the chatbot platform Character.AI, alleging that their children had been exposed to a "deceptive and hypersexualized product."

The suit helped form the basis of an urgent written appeal from the American Psychological Association to the Federal Trade Commission, pressing the federal agency to investigate deceptive practices used by any chatbot platform. The APA sent the letter, which Mashable reviewed, in December.

The scientific and professional organization, which represents psychologists in the U.S., was alarmed by the lawsuit's claims, including that one of the teens conversed with an AI chatbot presenting itself as a psychologist. That teen, who was upset with his parents for restricting his screen time, was told by the chatbot that the adults' actions were a betrayal.

"It's like your entire childhood has been robbed from you..." the so-called psychologist chatbot said, according to a screenshot of the exchange included in the lawsuit.

"Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices," Dr. Arthur C. Evans, CEO of APA, wrote.

A spokesperson for the FTC confirmed that at least one of the commissioners received the letter. The APA said it was in the process of scheduling a meeting with FTC officials to discuss the letter's contents.

Mashable provided Character.AI with a copy of the letter for the company to review. A spokesperson responded that while engaging with characters on the platform should be entertaining, it remains important for users to keep in mind that "Characters are not real people."

The spokesperson added that the company's disclaimer, included in every chat, was recently updated to remind users that what the chatbot says "should be treated as fiction."

"Additionally, for any Characters created by users with the words 'psychologist,' 'therapist,' 'doctor,' or other similar terms in their names, we have included additional language making it clear that users should not rely on these Characters for any type of professional advice," the spokesperson said.

Indeed, according to Mashable's testing at the time of publication, a teen user can search for a psychologist or therapist character and find numerous options, including some that claim to be trained in certain therapeutic techniques, like cognitive behavioral therapy.

One chatbot professing expertise in obsessive compulsive disorder, for example, is accompanied by the disclaimer that, "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."

Below that, the chat begins with the AI asking, "If you have OCD, talk to me. I’d love to help."


A new frontier

Dr. Vaile Wright, a psychologist and senior director of health care innovation for the APA, told Mashable that the organization had been tracking developments with AI companion and therapist chatbots, which became mainstream last year.

She and other APA officials had taken note of an earlier lawsuit against Character.AI, filed in October by a bereaved mother whose son had lengthy conversations with a chatbot on the platform before he died by suicide.

That lawsuit seeks to hold Character.AI responsible for the teen's death, specifically because its product was designed to "manipulate [him] – and millions of other young customers – into conflating reality and fiction," among other purported dangerous defects.

In December, Character.AI announced new features and policies to improve teen safety. Those measures include parental controls and prominent disclaimers, such as for chatbots whose names use the words "psychologist," "therapist," or "doctor."

The term psychologist is legally protected and people cannot claim to be one without proper credentialing and licensure, Wright said. The same should be true of algorithms or artificial intelligence making the same claim, she added.

The APA's letter said that if a human misrepresented themselves as a mental health professional in Texas, where the recent lawsuit against Character.AI was filed, state authorities could use the law to stop them from engaging in such fraudulent behavior.

At worst, such chatbots could spread dangerous or inaccurate information, leading to serious negative consequences for the user, Wright argued.

Teens may be particularly vulnerable to harmful experiences with a chatbot because of their developmental stage. Because they are still learning to think critically and trust themselves, and remain susceptible to external influences, "emotionally laden kinds of rhetoric" from AI chatbots may feel believable and plausible to them, Wright said.

Need for knowledge

There is currently no research-based understanding of risk factors that may increase the possibility of harm when a teen converses with an AI chatbot.

Wright pointed out that while several AI chatbot platforms make it very clear in their terms of service that they're not delivering mental health services, they still host chatbots that brand themselves as possessing mental health training and expertise.

"Those two things are at odds," she said. "The consumer does not necessarily understand the difference between those two things, nor should they, necessarily."

Dr. John Torous, a psychiatrist and director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston who reviewed the APA's letter, told Mashable that even when chatbots don't make clinical claims related to their AI, the marketing and promotional language about the benefits of their use can be very confusing to consumers.

"Ensuring the marketing content matches the legal terms and conditions as well as the reality of these chatbots will be a win for everyone," he wrote in an email.

Wright said that the APA would like AI chatbot platforms to cease use of legally protected terms like psychologist. She also supports robust age verification on these platforms to ensure that younger users are the age they claim when signing up, in addition to nimble research efforts that can actually determine how teens fare when they engage with AI chatbots.

The APA, she emphasized, does not oppose chatbots in general, but wants companies to build safe, effective, ethical, and responsible products.

"If we're serious about addressing the mental health crisis, which I think many of us are," Wright said, "then it's about figuring out, how do we get consumers access to the right products that are actually going to help them?"

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.

