Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an “honest citation mistake.”
In a response filed on Thursday, Anthropic defense attorney Ivana Dukanovic said that the scrutinized source was genuine and that Claude had indeed been used to format legal citations in the document. While incorrect volume and page numbers generated by the chatbot were caught and corrected by a “manual citation check,” Anthropic admits that wording errors had gone undetected.
Dukanovic said, “unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors,” and that the error wasn’t a “fabrication of authority.” The company apologized for the inaccuracy and confusion caused by the citation error, calling it “an embarrassing and unintentional mistake.”
This is one of a growing number of examples of how using AI tools for legal citations has caused problems in courtrooms. Last week, a California judge chastised two law firms for failing to disclose that AI was used to create a supplemental brief rife with “bogus” materials that “didn’t exist.” A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he’d submitted.