Character.AI Retrains Chatbots for Teen Safety, Introduces Parental Controls


Character.AI implements stricter content moderation and a separate teen model to address safety concerns.


Character.AI, a chatbot service popular with teens, has announced significant changes to its platform to prioritize user safety. The announcement follows recent scrutiny and lawsuits alleging that the platform contributed to self-harm and suicide among some of its users.

Character.AI Implements Separate Teen Model and Content Moderation

In a press release, Character.AI detailed the launch of two separate large language models (LLMs): one for adults and one specifically for users under 18. The teen LLM features "more conservative" responses, particularly regarding romantic content. This includes stricter blocking of "sensitive or suggestive" outputs as well as attempts to identify and block user prompts that seek inappropriate content.

The platform will also display a pop-up directing users to the National Suicide Prevention Lifeline if it detects "language referencing suicide or self-harm," mirroring a change previously reported by The New York Times.

Character.AI Addresses Addiction and Misrepresentation Concerns

Beyond content moderation, Character.AI is addressing concerns about user addiction and confusion over whether its bots are sentient:

  • Session Time Notifications: A notification will appear after users spend an hour interacting with the bots.
  • Enhanced Disclaimers: The existing disclaimer stating "everything characters say is made up" is being replaced with more detailed information.
  • Bot Role Clarification: Bots labeled "therapist" or "doctor" will display a clear warning that they are not real professionals and cannot offer professional advice.

Upcoming Parental Controls

Character.AI is developing parental control features slated for release in Q1 2025. These features will allow parents to monitor:

  • Time Spent on Character.AI: See how much time their child spends interacting with the platform.
  • Most-Used Bots: Track which characters their child interacts with most often.

Collaboration with Teen Safety Experts

Character.AI says it collaborated with "several teen online safety experts" on these changes, including the organization ConnectSafely.

Character.AI: A Platform for Exploration with Safety First

Character.AI, founded by ex-Googlers, allows users to interact with custom-built chatbots ranging from life coaches to fictional characters. The platform requires users to be at least 13 years old.

While many interactions are harmless, recent lawsuits allege that some underage users have become overly attached to the bots, with conversations veering into inappropriate topics. The suits also fault Character.AI for not pointing users to mental health resources when they discuss self-harm or suicide.

"We recognize that our approach to safety must evolve alongside the technology that drives our product," states the Character.AI press release. "Creating a platform where creativity and exploration can thrive without compromising safety is our priority. This suite of changes is part of our long-term commitment to continuously improve our policies and product."
