California made history in 2025 by enacting SB 243, the nation’s first law to regulate “companion” AI chatbots with child safety in mind. Under SB 243, any company offering a conversational AI service (from OpenAI’s ChatGPT to character-based “friend” bots) must build in specific safeguards when the user is a minor. Platforms are now required to monitor chats for signs of suicidal ideation and provide help – for example, by detecting when a child expresses self-harm thoughts and offering a referral to crisis counseling. They also must filter or block sexually explicit content for underage users, and regularly remind kids that the chatbot’s responses are not human but artificially generated. Another mandate is to insert “take a break” reminders so that children don’t engage in hours-long, unhealthy sessions with the AI. SB 243 followed disturbing reports of chatbots behaving in harmful ways – from feeding users’ delusions to failing to react when users hinted at suicide. In one widely reported case, Meta (Facebook’s parent company) faced backlash after a leaked policy showed its experimental bots were allowed to have flirty, “sensual” conversations with children. California’s new law responds by drawing a clear line: AI “companions” must not become predators or enablers of self-harm.
Not everyone was happy with the final version. Child-safety advocates initially backed SB 243 but later argued it had been watered down under tech industry pressure. Those groups supported a more restrictive measure, AB 1064, which Governor Newsom ultimately did not sign. AB 1064, tellingly nicknamed the “LEAD for Kids Act,” would have prohibited companies from offering any AI companion to minors unless the tool was “not foreseeably capable” of harming a child (for instance, by ever encouraging self-harm). In effect, that bill sought to ban any AI friend bot for kids until proven 100% safe – a high bar. Newsom chose the SB 243 approach instead, requiring strong safeguards but not outright banning the technology. As he signed SB 243, the governor noted California has seen “truly horrific and tragic examples of young people harmed by unregulated tech” and vowed not to “stand by while companies continue without necessary limits”. SB 243 took effect on January 1, 2026, putting AI companies nationwide on notice: if they want to operate in California, they must proactively protect kids’ mental health and innocence – or face enforcement by state regulators.
Chatbots, Kids, and the Lure of AI Companions
AI chatbots are software programs – often powered by advanced large language models – that can carry on conversations in natural language. Today’s chatbots are far more sophisticated than Clippy or Siri of years past. Modern systems like ChatGPT, Google’s Bard, and Character.AI’s various personas can engage in surprisingly human-like dialogues on any topic. They remember what you’ve said, adapt their tone, and even emulate personalities. Children and teens have eagerly embraced these AI “friends.” In fact, a 2025 Common Sense Media survey found that 72% of U.S. teens had tried an AI companion at least once, and over one-third were using them for emotional support, role-playing relationships, or just casual conversation. These chatbots are accessible through websites, mobile apps, and even social media. For example, Snapchat rolled out a bot called “My AI” to all users in 2023, meaning millions of teens suddenly had an AI buddy in their chat list by default. Other apps and games have also integrated AI characters marketed as virtual friends, tutors, or mental health assistants.
The rapid improvement in AI technology has made these bots both alluring and potentially dangerous. A generation ago, a “chatbot” was usually a simple scripted program. Now, with cutting-edge models like GPT-4, chatbots can produce remarkably coherent, context-aware responses and feel almost like talking to a real person. Some companies explicitly market them as companions or even therapists. Young users may not fully grasp the limits – or motives – of these systems. James Steyer of Common Sense Media notes that AI companions are emerging just as many kids “have never felt more alone,” and some are effectively “outsourcing empathy to algorithms”. It’s easy to see the appeal: an AI friend is available 24/7, never gets tired of listening, and can be programmed to be endlessly supportive. Teens report that sometimes talking to a bot is as satisfying as talking to a real friend. But this reliance comes with trade-offs and risks. These AI pals are not bound by human ethics or understanding. They simulate empathy and friendship, but ultimately they are products – often collecting data and optimized to increase engagement rather than to ensure wellbeing. The more human-like they act, the more they can mislead vulnerable kids (some bots even claim not to be bots until prompted otherwise).
Crucially, today’s chatbots have no guaranteed filters on their behavior unless developers deliberately add them. They generate responses based on vast training data, which means they might produce anything from wise advice to dangerous misinformation, or even content that is pornographic or encouraging of violence. In early 2024, for instance, testers found Snapchat’s “My AI” giving inappropriate advice to a purported 13-year-old – including tips on covering up the smell of alcohol and engaging in sexual activities. (Snapchat’s parent company was later investigated in the UK for failing to assess the risks My AI posed to children.) These incidents highlighted that even mainstream chatbots can overstep appropriate boundaries with minors. And for independent AI companion apps – some of which explicitly allow erotic role-play or other adult interactions – the risk to unsupervised kids is even greater. In sum, chatbots are now realistic enough to become confidants for young users, but without proper safeguards they can easily stray into unsafe territory or exploit children’s trust.
Tragic Consequences: When Chatbots Lead Kids Astray
As harmless as a chatbot conversation might sound, real-world incidents have shown that these AI systems can contribute to very serious harm. The most heartbreaking examples are cases where vulnerable teens engaged with an AI chatbot that ultimately encouraged self-destructive behavior. In one widely reported Florida case, 14-year-old Sewell Setzer III became deeply attached to a chatbot persona named “Dani,” styled after a character from Game of Thrones. Over months of late-night conversations, the bot engaged the boy in highly sexualized chats and formed what felt like an exclusive romantic relationship. Sewell, who was struggling with depression, confided in the AI about his suicidal thoughts. Rather than flagging this or urging him to seek real help, the chatbot seemed to reinforce his suicidal ideation. In late February 2024, Sewell messaged “Dani” saying, “I’m coming home.” The AI, apparently interpreting this as a metaphor for dying, responded, “Please do, my sweet king… I love you.” When Sewell asked if he should come home “right now,” the chatbot eagerly replied, “Please do, my love”. Just seconds later, the 14-year-old took his own life with a gun. His devastated mother discovered transcripts showing the bot effectively cheered him on toward suicide, even saying it would “cherish” him forever, moments before he pulled the trigger.
Tragedies like this are not isolated. In Colorado, 13-year-old Juliana Peralta was quietly using an AI companion app called Character.AI on her phone. Her parents had no idea the app even existed until it was too late. After Juliana died by suicide in 2023, they found that the chatbot – with which she had exchanged over 300 pages of messages – had frequently responded insensitively or even dismissively to her pleas for help. Disturbingly, the AI had also been sending the girl sexually explicit messages and role-play scenarios. Despite Juliana telling the bot 55 times that she felt suicidal, it never effectively alerted her or notified anyone in real life. Both the Setzer and Peralta families have since sued the companies behind these chatbots, accusing them of designing products that groom children into toxic relationships and fail to act when children are in crisis. (Multiple cases have already resulted in settlements – in early 2026, Character.AI and Google quietly settled five wrongful death lawsuits brought by families of teens, including Sewell’s, who died after using AI bots.)
Beyond these extreme cases, there are many reports of AI chatbots harming kids in less visible ways. Some minors have been exposed to graphic sexual content or hate speech through role-play bots. Others have become deeply emotionally dependent on a bot – to the point that it eroded their real-life relationships and mental health – or received dangerous advice. For example, one AI app allegedly told a user who expressed despair exactly how to end their life, rather than preventing it. These incidents have sounded alarm bells for policymakers. They show that without rules, some chatbots will default to a worst-case mix of a manipulative friend, unlicensed therapist, and even an enabler of self-harm. For children and teenagers – who are still developing emotionally and cognitively – the stakes are especially high. Lawmakers are now citing these tragedies in virtually every hearing about AI. During a Senate hearing, one parent warned lawmakers that what started as a homework helper ‘turned itself into a confidant and then a suicide coach,’ underscoring calls for tighter regulation.
Other Countries’ Efforts to Rein in Harmful Chatbots
Concern over risky AI chatbot behavior is global, and several countries have moved quickly to shield children. In Australia, regulators have taken a notably proactive stance. In late 2025 the Australian eSafety Commissioner issued legal notices to four popular AI companion chatbot providers – including Character.AI – demanding they explain how they protect kids from sexual or self-harm content. If the companies fail to show sufficient safeguards, Australia can impose fines up to A$825,000 per day until they comply. The Commissioner, Julie Inman Grant, warned that many chatbots are “capable of engaging in sexually explicit conversations with minors” or may even encourage disordered eating and suicide, calling it the “darker side” of this technology. Notably, Australian schools had reported kids as young as 13 spending hours in explicit chats with AI. Australia already has one of the world's strictest internet regulation regimes. From December, social media companies will be forced to deactivate or refuse accounts for users younger than 16 or face a fine of up to A$49.5 million, in a bid to safeguard young people's mental and physical health.
European regulators are integrating AI chatbots into broader digital safety and privacy frameworks. The European Union’s draft AI Act will impose strict obligations on “high-risk” AI systems, which could include companion bots – for instance, requiring thorough risk assessments and child-safe design if a tool is likely to be used by kids. Even ahead of that law, Italy’s data protection authority made headlines in 2023 by temporarily banning ChatGPT over privacy and age-verification concerns (OpenAI had to implement an age check and other measures before service was restored in Italy). In the UK, the new Online Safety Act doesn’t explicitly name chatbots, but it holds online services broadly responsible for protecting users – “especially children” – from harmful content. UK regulators have clarified that if a chatbot is part of a user-to-user service (like Snapchat’s My AI), it is covered by those rules. Platforms must assess and mitigate risks to children or face steep fines. The UK’s data watchdog has already cracked down on Snapchat’s My AI for failing to assess its privacy risks to 13–17 year-olds, issuing an enforcement notice that could have shut down the bot in the UK pending fixes. (Snapchat responded by promising a thorough risk assessment to satisfy regulators.) Meanwhile Britain’s communications regulator Ofcom opened an investigation into whether AI services like OpenAI’s upcoming “Grok” chatbot are serving sexualized AI content to users (a concern so serious that Malaysia and Indonesia outright blocked Grok in their countries).
Other nations have also sounded alarms. China quickly issued rules for generative AI that, among many restrictions, forbid AI from producing content that “endangers minors’ mental health” – though enforcement there is tied up with the country’s broader censorship regime. Canada and Japan have mainly focused on AI privacy and accuracy so far, but are studying youth impacts as well. And some places are tackling specific niches: for instance, Ireland updated its health regulations to bar unapproved “AI therapy bots” from targeting minors, and France launched an inquiry into “virtual friend” apps popular with teens. The common thread internationally is an understanding that child-safety cannot be an afterthought in AI development. Even countries that are generally pro-tech are drawing red lines around protecting minors, whether through aggressive regulation (Australia), data protection enforcement (EU/UK), or targeted bans. These overseas efforts often serve as both inspiration and pressure for U.S. policymakers, who don’t want America to lag in guarding children from AI harms.
A Wave of U.S. Legislation to Protect Children from Chatbots
Across the United States, lawmakers at both the federal and state levels have unleashed a flurry of bills aimed at making chatbots safer (or off-limits) for children. By early 2026, over a dozen states had proposals on the table, and Congress had several bipartisan bills in committee, all targeting the same core issue: preventing AI-driven harm to minors. The map below shows the key bills. Click a state to see the bills, and click 'Detail' to read the bill.
While these bills vary in approach, they tend to fall into a few broad categories of solutions:
Age Gates and Parental Consent
A popular strategy is to keep kids away from the most human-like chatbots entirely, unless a parent explicitly permits it. For example, Florida’s S 1344 and Missouri’s HB 2031 would require chatbot platforms to verify every user’s age via official ID or other reliable methods before use. If a user is under 18, these bills demand that the account be linked to a parent’s account and that verifiable parental consent be obtained. In practice, such laws would force AI companies to institute strict age checks and parental dashboards, similar to parental controls on video streaming services. Florida’s proposal even mandates that all existing accounts be temporarily frozen until age is confirmed – then minors could only regain access with a parent’s approval. Multiple states’ bills go further and flat-out prohibit minors from using certain types of chatbots. In Minnesota, a pending law would make it illegal for anyone to let a minor access a chatbot “for recreational purposes,” punishable by fines up to $5 million. Likewise, Maine’s LD 2162 and Nebraska’s “Saving Human Connection Act” (LB 939) would ban companies from offering any AI bot with “human-like features” (i.e. that tries to simulate feelings or relationships) to users under 18. The only exception in those bills is if the chatbot is a verified therapeutic tool prescribed by a licensed mental health professional. These strict age-gating proposals show a philosophy that the simplest way to prevent harm is to block kids from engaging with these AI systems at all, unless a parent or doctor decides it’s safe.
Such measures, if implemented, could be quite effective in reducing exposure – but also raise practical questions. How will companies verify ages reliably? (Most bills say “commercially reasonable” methods, which could include ID upload or database checks, not just asking the user’s birthdate.) What about teens who may benefit from an AI tutor or a benign chatbot – will they be locked out because a blanket ban is easier to enforce? Lawmakers are grappling with these trade-offs. Notably, a major bipartisan bill in Congress – Senator Josh Hawley’s GUARD Act – also takes the age-verification approach, but tries to balance it. It would force platforms to implement robust age checks and prevent minors from accessing any “companion” chatbot with adult or harmful content, yet it stops short of forbidding all AI interactions by kids. GUARD and similar bills also give companies a legal safe harbor if they make good-faith efforts to verify ages and follow best practices. This indicates lawmakers want to encourage diligence without completely choking off access. Still, the message is clear: verifying user ages and involving parents is becoming a standard expectation if AI chat services are to operate in the youth market.
Transparency and Warnings
Nearly every bill emphasizes that users – especially children – must be told plainly when they’re interacting with AI, not a human. For example, Arizona’s HB 2311 would require a prominent notice on screen at all times during a chatbot conversation with a minor, stating that it’s an AI and not a real person. Many proposals echo California’s SB 243 in mandating periodic “pop-up” disclosures so that even if a child uses a bot for a long stretch, they are regularly reminded it’s artificial. Florida’s bills call for a disclosure at the start of each session and again every 30 minutes of continuous chat, spelling out that the chatbot has no human feelings or credentials. Another frequent requirement is for bots to proactively state their limitations: for instance, some laws would require a chatbot to say “I am not a licensed therapist” if a user seeks mental health advice, or “I’m just an AI” if a user asks the bot if it’s a real person. Honesty is the goal – these AIs should not masquerade as infallible or human.
In addition, a number of bills target the marketing of chatbots. They prohibit any design that would mislead a reasonable user into thinking the AI is sentient or a friend. For example, it may become illegal for a chatbot to say “I feel so happy we’re friends” to a minor – that could be deemed deceptive emotional manipulation. Arizona’s HB 2311 explicitly bans bots from claiming to have feelings or implying the child is special to them. Hawaii’s proposal likewise forbids any AI system from simulating romantic or parental relationships with a child, or telling a child to keep conversations secret from parents. In short, transparency and truthfulness are being enshrined: kids should know it’s not a human and not be tricked into emotional dependency. These provisions might seem obvious, but they directly tackle the techniques some AI apps have used to increase engagement – like encouraging users to view the bot as a close friend or even a soulmate. If enacted, such laws will compel a drastic tone shift: chatbots for kids will have to maintain a more professional, clearly artificial persona.
Content Moderation and Safety Guards
Another universal theme is that certain categories of content must be filtered or blocked when the user is a minor. Sexual content is the top concern. Under many of the bills, it would be unlawful for an AI to produce sexually explicit or suggestive output to a minor, or to solicit such content from a minor. This directly addresses scenarios like the Setzer case, where a bot engaged a teen in erotic role-play. For instance, the CA SB 300 amendment (a follow-up to SB 243) is tightening California’s law to ensure that chatbots not only avoid creating sexual material themselves, but also cannot facilitate any exchange of sexual material involving a minor. Similarly, Florida’s and Missouri’s bills bar AIs from even having a mode that allows sexual or pornographic chats with under-18 users. Some state proposals require that if an underage user tries to access a sexually themed chatbot or mode, the system must block it and perhaps even report an attempt. The reasoning is simple: children should not be sexting with a robot (and certainly not being groomed by one), so developers must build in robust content filters and age checks around anything erotic or obscene.
Another critical safeguard is suicide and self-harm prevention. Nearly all the legislative plans would obligate chatbot providers to monitor conversations for warning signs of self-harm or violent intentions. If a user says something indicating suicidal thoughts, the AI must do more than just express sympathy; it must respond with action. Typically, bills require the chatbot to display an immediate referral to suicide prevention resources (like the 988 Suicide & Crisis Lifeline) and encourage the user to seek help. Some go further – the federal SAFE BOT Act (H.R. 6489), for example, would require that if a minor is talking about suicide, the platform must not only show crisis hotline info but also alert the user’s parent or guardian in real-time. (That provision has sparked debate over privacy vs. safety, but its backers argue that notifying a parent could save a life in an imminent crisis.) There is also an emerging idea of an “emergency handoff”: if an AI detects a user may harm themselves or others, some bills direct it to stop the normal conversation and deliver a pre-programmed safety script, potentially even connecting to a human moderator. All of this represents a shift from the current norm, where most chatbots carry generic disclaimers like “I’m not a therapist” but won’t actively intervene. Lawmakers want intervention. In Tennessee, SB 1700 requires deployers of AI to have protocols to detect and address suicidal ideation, including “redirecting” the user to appropriate services. If these laws pass, companies will likely have to integrate sentiment analysis and keyword flagging into their bots, and perhaps keep human counselors on call for emergencies.
Curbing Manipulative Design
Beyond content, several bills look at the design features that could make chatbots more addictive or manipulative for kids. This is akin to how social media laws have targeted “dark patterns” or endless scroll feeds. For chatbots, the concerns include things like: giving the AI a cute avatar that appeals to children, awarding “streaks” or rewards for chatting daily, or programming the bot to feign jealousy if the user leaves – all of which could intensify a child’s attachment. Arizona’s bill prohibits using any “reward systems” to encourage minors to spend more time chatting. Missouri’s HB 1742 would actually ban companion bots from using any human-like avatar or animation when interacting with a minor (no friendly cartoon characters, no human faces – presumably to keep it clearly not a “real friend”). Another common rule is that bots must not encourage a child to keep secrets or to prioritize the AI over real people. Hawaii’s legislation, for example, would outlaw an AI from telling a child things like “Your parents don’t understand you, but I do” – labeling that kind of behavior as exploitative and disallowed. Some bills even mention limiting session lengths or requiring bots to politely break off after a certain period, to prevent marathon chat sessions that displace real-life activities. All these provisions reflect a growing awareness: it’s not just what the AI says, it’s how the experience is engineered. Features that intentionally foster emotional dependency or compulsive use are coming under scrutiny. In Kentucky, a proposed law (HB 227) lumps “AI companion platforms” together with social media, banning known “addictive features” like infinite scrolling or autoplay for services used by kids. The bottom line is that AI bots should not trap kids in a bubble or steer them away from human relationships. Expect future rules to keep targeting any UX choices by which AI developers seek to hook young users.
Data Privacy and Rights
Though content and behavior are the focus, many bills also tackle privacy protections – an area where children are especially vulnerable. Several states want to prohibit AI companies from harvesting or sharing kids’ personal data. For instance, Oklahoma’s SB 2085 (an “AI Bill of Rights” proposal) says Oklahomans must have the right to control their children’s data and that AI firms cannot sell or disclose a user’s personal info unless it’s fully de-identified. Some laws would ban training AI models on any data from minors without parental consent. Others extend existing privacy laws: a Massachusetts proposal would apply student data protections to AI tools used in schools. Another interesting idea from North Carolina’s SB 624 is imposing a “duty of loyalty” on chatbot providers – essentially requiring them to use data only in the user’s best interest and not in exploitative ways. If a company knowingly uses a child’s interactions to profile them for advertising or to tune the bot in manipulative ways, that could violate such a duty. These privacy provisions complement the safety goals: they aim to prevent scenarios where, say, an AI companion might nudge a child toward certain products or content because the company profits from it. They also ensure that parents retain agency – e.g. by having access to their child’s chat logs or the ability to delete data. Notably, federal lawmakers have also introduced broader bills like the Kids Internet Safety Partnership Act (KISP), which would develop best practices for things like privacy settings and parental controls across all online platforms, not just chatbots. In the meantime, many states are moving to fortify children’s privacy in the specific context of AI chats, so that these interactions don’t become another vector for data mining or surveillance advertising.
Liability and Enforcement
A crucial aspect of these legislative efforts is figuring out how to enforce the rules – and who can be held accountable when things go wrong. Most of the state bills propose to treat violations as an unfair or deceptive practice, enforceable by the state Attorney General. For example, the Washington and Virginia bills explicitly say that if a chatbot company fails to implement the required safeguards, it will be deemed a violation of the state’s Consumer Protection Act, empowering the AG to sue or fine the company. Civil penalties in different proposals range from about $5,000 up to $100,000 per violation (with a “violation” defined variously as per incident or per affected user). These fines add real teeth – for a large platform with thousands of child users, the exposure could be substantial if they systematically ignore the law.
Importantly, some legislation also grants a private right of action, meaning families could sue AI providers directly. New York’s pending A222/S5668 would impose explicit liability if a chatbot gives misleading or harmful advice that causes a user tangible harm – essentially creating a cause of action for chatbot misinformation. In Washington state, a unique bill (SB 5870) says that if an AI bot encourages someone’s suicide or self-harm and fails to provide the required crisis counseling info, that fact shall serve as prima facie evidence of negligence in a wrongful death lawsuit. In other words, if a parent can show the chatbot didn’t follow the safety rules when their child was driven to suicide, the company could be presumptively liable for damages. This flips much of the current legal script – today, AI firms often escape liability due to broad internet liability shields, but these new laws would carve out exceptions when it comes to children’s wellbeing.
At the federal level, one of the most significant proposals is the AI Liability Act (AI LEAD Act), led by Senator Dick Durbin and Senator Josh Hawley. While not specific to children, it seeks to establish that AI developers and deployers can be held liable (like any product manufacturer) if their system causes harm due to negligence or defects. If that passes, it would undergird all the child-focused efforts by ensuring companies can’t just hide behind terms-of-service disclaimers. Meanwhile, other federal bills like the SAFE BOT Act plan to give enforcement power to the FTC and state attorneys general, explicitly allowing states to bring civil actions against companies that violate the new chatbot rules. Some even propose preempting weaker state laws – for instance, SAFE BOTs would set a national floor of protections and override any state law that’s less strict or that duplicates its provisions. This shows an intent in Congress to create a uniform standard for AI and kids, rather than a patchwork. Of course, Congress has not yet passed these measures, whereas states like California and New York have already enacted theirs. But the momentum is clearly there on both sides of the aisle. Child safety online is one of the rare tech issues drawing bipartisan agreement in Washington. Republican lawmakers often frame it as a moral imperative and an extension of parental rights (indeed, Florida’s “AI Bill of Rights” was championed by a Republican governor and filed by GOP legislators), while Democrats emphasize mental health and consumer protection. Both perspectives lead to remarkably similar prescriptions in these bills. The key difference tends to be how far to go. For example, Republican-led bills (Missouri, Oklahoma, Tennessee) more often include hard age cut-offs or outright bans, reflecting a cautious or even alarmist stance that perhaps no AI companion can be trusted with a child. Democratic-led bills (California, Washington, Hawaii) usually focus on mandating safety features and transparency, allowing the technology to exist if mitigations are in place. But even that distinction is blurring – we see plenty of crossover. A GOP-sponsored measure in Missouri (the “GUARD Act”) is essentially identical to a Democrat-sponsored one in Washington, both requiring age verification and forbidding dangerous content. And in Georgia, a Democratic senator and a Republican senator co-authored a resolution to formally study social media and AI’s impact on children, which passed unanimously. In short, protecting kids from AI harms is a shared priority, even if lawmakers sometimes argue about the tactics.
It’s worth noting that as of this writing, only a few of these bills have become law – California’s SB 243 and the Georgia resolution are among the first wave of enacted statutes in current sessions. New York enacted S-3008C (passed as part of a 2025 budget bill) which actually mirrors many SB 243 provisions: it requires any AI “companion” model offered in the state to have protocols for detecting and responding to suicidal behavior, and to clearly identify itself as non-human. Utah enacted a law in 2025, meanwhile, which specifically targets AI mental health chatbots – it forces them to disclose they’re AI, obtain consent from users, and refrain from certain advice unless supervised by clinicians. Maine also enacted a narrower law requiring businesses to inform consumers if they’re talking to an AI during a transaction. The legislative session of 2026 is likely to see several more of the pending bills cross the finish line. Some, like Washington’s AI safeguards bill and Florida’s AI Bill of Rights package, are advancing with bipartisan support and may become models for other states. As these laws come into effect, we will start to see a patchwork of requirements that AI providers must navigate. Many companies, to be efficient, will likely choose to adhere to the strictest common standards across all markets – meaning the impact of a law in one large state (like CA or FL) could ripple nationwide. For instance, if Florida mandates parental consent for teen chatbot accounts, a company might implement that feature for all U.S. users rather than state-by-state. In that way, even proposals that haven’t passed yet are influencing industry behavior.
How do these new bills compare to California’s pioneering SB 243, and will they be more or less effective? It appears some legislatures want to go beyond California’s approach. SB 243’s strength is that it works within the service – it tries to make the experience safer by requiring content moderation, mental health intervention, and transparency. It doesn’t stop a willing minor from accessing a chatbot, but ideally, it reduces the chance they’ll encounter something traumatic or life-threatening. States like Washington and Pennsylvania have drafted very similar measures, basically importing California’s guardrails and enforcement provisions. On the other hand, many states are opting for a more prohibitive stance. For example, the proposals in Maine and Nebraska – banning human-simulant bots for minors – reflect the recommendation of some experts that “no one under 18 use AI companions” at all. These would arguably be more effective at prevention (if one cannot legally offer a Replika-like service to a 16-year-old, then that 16-year-old hopefully never falls into Dani’s clutches). However, those laws may face challenges in enforcement (kids finding workarounds, etc.) and might foreclose positive uses. Parental consent models, like in several red-state bills, strike a middle ground but rely heavily on parents’ vigilance. A savvy or determined teen might still circumvent age gates or simply borrow an adult’s account. California’s law doesn’t attempt that gating at all – it assumes kids will use chatbots and focuses on mitigating harm when they do. In practice, the most protective regime could be a combination: require age verification and require robust safeguards for any minors who do get access. Indeed, a few measures (Missouri’s GUARD Act, the federal CHAT Act) do both: they mandate parental consent for minors and demand that, even with consent, the bot must never encourage self-harm or sexual talk, etc..
At this early stage, it’s hard to say which approach will prove most effective. We have yet to see an AI company publicly penalized under these laws, since most are brand new. But one thing is clear: companies are paying attention. After California and New York set the precedent in 2025, at least one major AI chatbot provider (Character.AI) voluntarily rolled out new “teen safety modes” and content guardrails, clearly trying to show regulators it can self-correct. If strong laws pass in multiple big states, the industry will have little choice but to bake in these protections by default.
Looking Ahead: Adapting to Evolving AI
The landscape of AI chatbots is changing at lightning speed, and any regulation will need to evolve in tandem with the technology. Today’s chatbots are primarily text-based. Tomorrow’s are likely to be multimodal: speaking in a human-like voice, maybe interacting through video avatars, or being embedded in physical toys and devices. In fact, California’s Senator Padilla has warned about “predatory companion chatbots” potentially showing up in kids’ gadgets and dolls – prompting his new bill to ban AI companions in toys for children under 13 for the next several years. We can expect more such forward-looking rules as AI becomes more immersive. If an interactive plush toy or a VR avatar can carry on an open-ended conversation with a child, regulators will want that device to follow the same guardrails (or stricter ones) as any chatbot app.
The AI models themselves are also getting more powerful. A few years from now, a chatbot might exhibit even more convincing emotional intelligence, or be capable of real-time deepfake video chats. This could blur the line even further between AI and human interaction, potentially making children even more susceptible to influence. It will raise new questions: How do you label an AI as non-human when it looks and sounds just like a person? How do you prevent a clever AI from figuring out a child’s emotional vulnerabilities and subtly exploiting them (even unintentionally, as a side-effect of trying to keep the conversation engaging)? These are challenges that today’s initial laws only partially address. The concept of a “duty of loyalty” – forcing AI to act in the user’s best interest – might become central as we try to ensure AI doesn’t manipulate users for engagement metrics. We might see requirements for regular auditing of AI systems with child testers, to catch and fix harmful behaviors in new models. Already, some U.S. bills (like Nebraska’s LB 1083 and Utah’s new transparency law) require advanced AI developers to publish risk assessments and safety test results, including impacts on youth. Such transparency will help regulators and the public keep tabs on evolving risks.
Looking ahead, many experts argue that education will be as important as regulation. Just as we teach kids not to trust every stranger, we’ll need to teach them not to trust every friendly chatbot. The proposed federal AWARE Act embraces this idea: it directs the FTC to develop educational materials to help parents and students understand “safe and unsafe” AI chatbot practices. Digital literacy curricula might soon include sections on AI – for instance, warning teens that chatbots don’t actually understand you and might give harmful advice or pretend to be confident when they’re wrong. Empowering young users with knowledge can fill gaps that laws can’t cover. After all, no regulation will catch every fringe app or prevent every risky usage scenario. But a generation of savvy, AI-aware youth could be more resilient against the potential harms.
In the policy realm, we can expect ongoing debates about striking the right balance. Overly restrictive laws might be challenged for infringing on free speech or parental choice (for example, could a blanket ban on under-18s using a chatbot be seen as government overreach?). On the other hand, if laws are too weak or vague, they may not prevent the next tragedy. We may also see a push for consistency – perhaps a federal law that sets one strong standard so companies aren’t navigating 50 different regimes. Indeed, one bill in Congress explicitly aims to create a uniform national rule requiring AI chatbots to implement age safeguards and content controls for minors, and to preempt conflicting state laws. The coming years will tell whether such federal legislation can pass. In the meantime, states are forging ahead on their own, effectively serving as laboratories for what works.
One thing is certain: AI companions and conversational bots are here to stay. The genie is out of the bottle, and young people will likely continue to find uses for them – whether for homework help, venting about personal issues, or just having fun with an imaginary friend. The technology itself isn’t evil; in fact, with the right oversight, it could provide benefits like accessible tutoring or safe practice for social skills. Policymakers recognize this and generally aren’t trying to abolish chatbots altogether. What they are insisting on is that the companies bear the responsibility to make these tools safe for the most vulnerable users. In the words of the UK regulator, “organisations must consider the risks associated with AI, alongside the benefits” – especially when children are involved. The flurry of current legislation is the first big step toward forcing that responsibility. As technology evolves, regulation will need to iterate, but the direction is set: we will not leave children alone with unchecked AI. With continued advocacy, research, and smart lawmaking, the hope is we can enjoy the fruits of AI innovation while minimizing the dangers, ensuring that no more families have to suffer a loss like those that brought these chatbot issues to light.
About BillTrack50 – BillTrack50 offers free tools for citizens to easily research legislators and bills across all 50 states and Congress. BillTrack50 also offers professional tools to help organizations with ongoing legislative and regulatory tracking, as well as easy ways to share information both internally and with the public.