As artificial intelligence increasingly permeates publishing and media, online platforms are moving quickly to set ground rules for its use. This week, Wikipedia barred its editors from using AI-generated text, though it stopped short of banning AI from the site’s editorial workflows entirely.
An updated policy on the site now states plainly that “the use of Large Language Models to create or modify article content is prohibited.” The new wording revises and clarifies an earlier, vaguer provision, which had only advised that LLMs “should not be used to write new Wikipedia articles from scratch.”
The use of AI in Wikipedia entries has been a hotly debated topic among the platform’s vast volunteer community of editors. 404 Media reports that the revised guideline, which was put to a vote among the site’s editors, passed by a wide margin: 40 in favor to just 2 opposed.
However, the updated rules do leave room for AI to remain part of certain editorial processes.
The updated policy states that “Contributors may use LLMs to suggest basic copyedits to their writing, and may incorporate those changes after human review, provided the LLM does not introduce new content.” It adds a warning: “Caution is needed, as LLMs can go beyond explicit instructions and, in doing so, may change the meaning of the text in ways not supported by the cited sources.”

