State attorneys general from across the US are demanding more accountability from AI companies, warning them that their chatbots may be violating state laws. As reported by Reuters, the AGs have given Meta, Google, OpenAI, and others a deadline of January 16th, 2026 to respond to demands for more safety measures for generative AI, saying innovation is not “an excuse for noncompliance with our laws, misinforming parents, and endangering our residents, particularly children.”
The letter, which was made public on December 10th, claims, “Sycophantic and delusional outputs by GenAI endanger Americans, and the harm continues to grow.” It goes on to cite numerous deaths allegedly connected to generative AI, as well as cases of chatbots having inappropriate conversations with minors.
The letter also warns that some of these conversations directly break state laws, like encouraging illegal activity or practicing medicine without a license, adding that “developers may be held accountable for the outputs of their GenAI products.”
The attorneys general are demanding AI companies respond to these issues by implementing more safeguards and accountability measures, including mitigating “dark patterns” in AI models, providing clear warnings about harmful outputs, allowing independent third-party audits of AI models, and more. Their request comes as debate around AI regulation is heating up in Washington.
Google, Apple, Meta, and OpenAI did not immediately respond to a request for comment.