Getting an audience with Sam Altman is no easy feat, as Adam Bhala Lough, director of the recent film *Deepfaking Sam Altman*, can attest.
Lough originally envisioned a documentary about AI's promise and perils, built around a conversation with OpenAI's CEO. But after months of unanswered requests, he turned instead to a chatbot that mimicked Altman's speech patterns and simulated his facial expressions through a digital avatar.
The real Altman, however, did sign on for the forthcoming film *The AI Doc: Or How I Became an Apocaloptimist*, which premieres March 27. So did Anthropic CEO Dario Amodei and Demis Hassabis, cofounder and CEO of Google's DeepMind. (The filmmakers say they also sought interviews with Meta's Mark Zuckerberg and X's Elon Musk; neither appears.)
That's a remarkable level of access for Daniel Roher, the documentary's co-director and central figure, whose 2022 film *Navalny*, about Russian opposition leader Alexei Navalny, won an Academy Award. The trouble is that once on camera, Altman and his peers reveal little that is new, deflecting instead with platitudes about their responsibilities to humanity. When Roher asks why anyone should trust him to steer AI's rapid development, given its profound stakes, Altman replies: "You shouldn't." The questioning ends there.
*The AI Doc* is framed by Roher's anxiety over the imminent birth of his son, his first child with his wife, filmmaker Caroline Lindy. He wonders what kind of world his boy will inherit, and whether AI's rise will rob him of the experiences that shape us into independent people. In Roher's early interviews, his worst fears seem confirmed. Tristan Harris, cofounder of the nonprofit Center for Humane Technology, delivers one of the most unsettling lines: "I know people working on AI risk who don't expect their children to reach high school," he says, describing a scenario in which the technology dismantles the basic structure of conventional schooling.
Despite the mounting dread, Roher and co-director Charlie Tyrell deliver an admirably thorough primer on AI and the central questions it raises, helped along by Roher's insistence on explaining concepts in plain language rather than industry jargon. Visually, the film is engagingly personal, featuring vivid illustrations and artwork by Roher, while playful stop-motion sequences hint at the influence of producer Daniel Kwan, the Academy Award-winning co-director of *Everything Everywhere All at Once*. This lively inventiveness amid the omens of catastrophe supplies some of the hope Roher is so earnestly seeking.
Yet later conversations with Silicon Valley boosters, who promise AI will conquer disease and climate change, followed by CEOs striking their customary balance between hype and measured caution, proceed with little scrutiny of those sweeping claims. Almost no time is spent asking why or how today's flawed large language models should be expected to produce the fabled "artificial general intelligence" (AGI) that would surpass human intellect. At best there are oblique admissions, as from venture capitalist Reid Hoffman, that any benefits will come with unspecified costs.
Even when industry leaders declare AI's near-term impact as momentous as the dawn of atomic weaponry, they are falling back on a familiar playbook: casting their products as uniquely world-changing, and implying that only they can be trusted to shepherd their development.