Happy Armistice Day, and welcome to Regulator, a recurring newsletter for Verge subscribers chronicling Big Tech's rocky path through the political landscape. If you're not a subscriber yet, you can sign up here; my only ask is that you do so before Donald Trump revives his old threats against Iran and kicks off a world war.
I'm back, after being knocked out last week by the one-two punch of a mild respiratory bug and the start of allergy season. (Twenty-one percent of the District's land is public parkland, and DC is regularly ranked as America's best urban park system. Unfortunately, I am allergic to every tree and grass.) If you have tips about stories we've missed or things coming down the pike, send them to tina.nguyen+tips@theverge.com.
Should anyone believe what OpenAI says?
On Monday, OpenAI released a sweeping 13-page policy paper on how AI could reshape the US labor market. The company also offered what it sees as a fix: higher capital gains taxes on companies that replace human workers with AI, with the proceeds used to shore up the social safety net. Its suggestions included a communal wealth fund, a four-day workweek funded by “productivity benefits,” and government programs to help workers move into jobs centered on human connection, all of it bankrolled by the abundance AI is expected to generate.
Unfortunately, the paper landed the same day The New Yorker's Ronan Farrow and Andrew Marantz published a meticulously reported exposé, running more than 17,000 words, documenting Sam Altman's pattern of lying to the people around him: his Silicon Valley investors, his employees, his board members, and, most relevant here, the lawmakers trying to regulate AI. The piece confirmed a long-standing perception of Altman, and by extension OpenAI: they may profess lofty principles, but those principles are quickly abandoned in pursuit of money and political advantage.
Separately, as several people I spoke with agreed, the paper was a net positive for AI governance overall, injecting new ideas into the political conversation around a young technology. But OpenAI's critics argued that unless the company's policy and lobbying muscle turns those promises into action, the document itself is just words on a page.
“I suspect there are people on the team who genuinely care about this stuff, who thought hard about this paper, and who are proud of their work, even if it doesn't answer every question I'd want it to,” Malo Bourgon, CEO of the Machine Intelligence Research Institute (MIRI), told me. “And the open question is: will those people end up in the same position as so many former OpenAI employees, who believed the company held certain values or shared their convictions, found out otherwise, grew disillusioned, and left?”
Given OpenAI's policy proposals, it's worth revisiting the company's track record with government, which the New Yorker article covers at length. Altman was notably one of the first major CEOs to publicly call for federal oversight of AI, even floating the idea of a federal agency to monitor advanced models in 2023, while privately working to kill legislation that contained his own safety recommendations. A California legislative staffer accused OpenAI of using “progressively artful, misleading conduct” to defeat a 2023 AI safety bill the company had publicly endorsed. In 2025, the company subpoenaed advocates of a California state AI bill in an effort to, as one of those advocates told The New Yorker, “essentially intimidate them into silence.” And despite Altman's extensive earlier work with the Biden administration on AI safety protocols, once Donald Trump took office, Altman effectively persuaded him to scrap the very programs he had once championed.
Nathan Calvin, general counsel of Encode, an AI policy nonprofit focused on state legislation, was among those subpoenaed. “Their involvement in policy and government affairs has, in my view, been terrible,” he told me. Though he assumed the team behind the OpenAI proposal, which came largely out of the technical safety research division, acted in good faith, he stopped short of a full endorsement. “Will those people stay involved as we move from high-level policy principles to the many other channels through which lobbying and government influence are actually exerted? Part of me is hopeful, but a big part of me is pretty skeptical that that's what will happen.” (OpenAI did not respond to a request for comment.)
A humble, decidedly un-obsequious request:
Next week, I'm planning to publish an edition of Regulator covering the nerdiest parties of Nerd Prom, aka the White House Correspondents' Dinner social circuit. If you're a tech founder, a tech company, or a person doing tech things, and you're hosting an event during WHCD week, please let me know what you're planning! Based on the intel I've gathered so far, the tech world seems poised to upend the week's usual social rhythms: early whispers include the Grindr party in Georgetown and the Substack party, featuring famed looksmaxxer Clavicular, and I cannot wait to compile the most unhinged “SPOTTED” list Washington has ever seen.
(Again, this is contingent on whether we're at war with Iran by the end of April, in which case, I imagine no one will be in the mood for parties.)
As for Washington, DC journalists, this is accurate about every single one of us: