The AI safety summit, and its critics
Last week’s AI Safety Summit in the United Kingdom was, to hear its participants tell it, a rousing success — but critics accuse those leaders of living in a fantasy world.
Those critics are part of a growing rift in the AI community over how much to focus on the “existential” risk of “frontier” models that could possibly, well… end the world. The AI policy community is at a crossroads that will determine whether the technology is governed with here-and-now societal risks in mind or with an eye toward a sci-fi future in which ideas about governance are effectively upended. Each side claims its view of the technology encompasses both kinds of risk.
“It’s disappointing to see some of the most powerful countries in the world prioritize risks that are unprovable and unfalsifiable,” Data & Society policy director Brian Chen said in an email. “Countries are dedicating massive institutional resources to investigate existential claims that can’t hold up under basic principles of empirical inquiry.”
In other words, the fight over AI should be less about preventing SkyNet from killing us and more about protecting consumers from opaque algorithms that decide to reject a home loan or decline coverage for a medical procedure. Chen and his peers do believe the government has a role to play in AI safety. But the merest whiff of “doomerism” in Silicon Valley triggers a fear that the biggest AI developers are trying to cement their dominance in the field by obscuring present-day threats in favor of hypothetical ones.
Amba Kak of the AI Now Institute, one of the few representatives of civil society at last week’s summit, said at the event’s conclusion that “we are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions.” In her remarks, Kak acknowledged efforts by the Biden administration to encourage fair competition and redress bias in AI, but said future gatherings should include voices from across society, not just the biggest tech companies and governmental leaders.
Some groups see current-day AI safety and a competitive industry open to new players as inextricably paired. Mark Surman, president of the Mozilla Foundation, and the researcher Camille François said in a blog post yesterday that “competition is an antidote” for what they see as the undemocratic nature of current AI policy debates, dominated by industry giants.
They emphasize making AI development tools available to everybody, accusing major players like OpenAI of using “the fear of existential risk to propose approaches that would shut down open-source AI.” (A “joint statement” published on Halloween with signatures from Surman, François, and no less than Meta AI chief Yann LeCun called for making open-source AI development a “global priority.”)
As Kak alluded to, some leaders in the U.S. have spoken out about these issues. Vice President Kamala Harris was outspoken at last week’s summit in urging other global leaders and AI companies to prioritize the here-and-now risks of algorithmic discrimination. Federal Trade Commission Chair Lina Khan wrote in the New York Times in May that “The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms,” and urged anti-monopolistic policy choices along the lines of those called for by Mozilla.
The Biden administration’s executive order did address those topics, specifically directing Khan’s FTC to investigate monopolistic practices in AI, establishing privacy protections for government uses of AI, and ordering the Department of Housing and Urban Development to provide guidance for stopping discriminatory AI systems in lending. The Bletchley Declaration itself elaborates on both immediate human risk and the doomy predictions of apocalypse-by-frontier-model.
Still, some on the outside of industry and government who have studied the tech policy fights of the past epoch will believe that AI giants voluntarily accept accountability for their products’ potential harms when they see it.
“We… need a more holistic, human-centered vision of AI systems — their impact on workers, their extraction of data, their massive consumption of energy and water,” Chen said. “This was lacking at last week’s summit.”
Meta made another tweak to how it will police political ads and content across its platforms.
The company announced in a blog post today that it will disclose to users when content is created or altered by generative AI or other digital tools.
Starting “in the new year,” advertisers will be required to disclose when they use AI-generated content to depict events or speech that didn’t actually happen in the context of a real-life political debate. Meta will then notify viewers that the advertiser made one of these disclosures; if advertisers fail to disclose, Meta will remove the ad and impose “penalties” for repeated failures to disclose.
Researchers at the University of California, Berkeley published a report today that sets out sweeping standards for evaluating risks in AI systems.
The 118-page document lays out best practices meant to complement those set by the National Institute of Standards and Technology and the International Organization for Standardization.
“We intend this Profile document primarily for use by developers of large-scale, state-of-the-art GPAIS [general-purpose AI systems],” the authors write, saying it “aims to help key actors… achieve outcomes of maximizing benefits, and minimizing negative impacts, to individuals, communities, organizations, society, and the planet,” including “protection of human rights, minimization of negative environmental impacts, and prevention of adverse events with systemic or catastrophic consequences at societal scale.”
Key recommendations include establishing risk-tolerance thresholds for AI’s use, conducting red-teaming and adversarial testing, and involving outside actors and users of the system in the risk-identification process.
- A buzzy paper about room-temperature superconductors was retracted.
- Go inside the Manichean clash between the schools of thought on AI risk.
- Anthropic and Google are expanding their partnership to include microchips.
- A Sergey Brin-backed airship startup is preparing for launch in Silicon Valley.
- There is a yawning gap between Elon Musk and his inspiration, Douglas Adams.