The billionaire bucks shaping AI policy
Who influenced President Joe Biden’s new executive order on artificial intelligence?
Over the weekend, POLITICO’s Brendan Bordelon uncovered the fingerprints of the RAND Corporation, which has cultivated ties to a growing influence network backed by tech money.
For months, Brendan has been following the money shaping the regulatory debate over AI.
And much of that money is coming from wealthy backers of “effective altruism,” a prominent ideology in Silicon Valley circles that places a great deal of weight on the existential risks of AI.
Brendan’s latest reporting reveals that RAND, one of the nation’s oldest, most venerable think tanks, is facing a wave of internal concern. Some employees have voiced worry that after taking funding from Open Philanthropy, a foundation started by Facebook co-founder and effective altruist Dustin Moskovitz, RAND has become a vehicle for inserting the movement’s ideas into policy.
DFD caught up with Brendan to dig into the discontent at RAND and the factions fighting over the future of AI policy. Our conversation has been edited and condensed for clarity.
You reported that RAND staffers — some of whom joined the think tank from the Biden administration last year — shaped parts of Biden’s executive order. But you also obtained audio of a meeting where other RAND workers expressed concerns about the organization’s work on AI. What’s the problem? Aren’t think tanks supposed to be close to governments and shape policy?
I don’t think it’s necessarily uncommon that RAND CEO Jason Matheny and senior information scientist Jeff Alstott jumped straight from the National Security Council and the White House Office of Science and Technology Policy to this think tank that is still very much embedded in how the White House and the federal government approach AI.
The bigger issue is the links these folks have with an ideological movement. And it’s a movement that’s very much associated with the top AI companies driving the field at the moment. That’s where folks start to raise eyebrows.
That’s particularly supercharged when you see the money coming in from groups like Moskovitz’s foundation, Open Philanthropy, which are aligned with effective altruism and are building a much broader network than just RAND. In some ways, RAND is just one piece of a broader machine that effective altruists are building in Washington.
RAND’s position is that the funding from Open Philanthropy does not influence the substance of its policy recommendations. If funders don’t dictate policy recommendations, what’s the issue here?
The way Open Philanthropy throws money around in this space, it becomes difficult to say no. They are prolific funders, and my understanding is they often come in without a ton of strings attached beyond “Here’s the area we’d like you to focus on.”
And because of that money, it’s difficult to get thinkers in Washington who are worried about the explicit focus on existential risk, to the exclusion of many other problems in the AI space, to go on record with those concerns.
The funding has increasingly had a chilling effect on policy people who say, “Hey, look, I understand why you’re concerned about the AI apocalypse. But we need to base policymaking on evidence. And if there’s no real evidence at the moment, beyond sort of philosophical evidence, we need to spend our time and money elsewhere.”
The problem is, when you have a lot of money coming in with an explicit or implicit desire for research to focus on those questions, it starts to take on a life of its own. You have think tanks across the board all saying the same thing: “Existential AI risk is the key problem of our time, we have to spend all this time and attention on it.” And then it becomes an echo chamber.
Effective altruism is an idealistic worldview backed by savvy business moguls. Are these influence efforts really altruistic, or are they just a fresh guise for typical industry lobbying?
I’ve put this question to a ton of people. And I don’t think it’s an either/or.
There is a sense that a lot of the problems that effective altruists are raising are real questions to ask. Even critics believe that most of these people are true believers when it comes to existential AI risks.
The problem is, there’s just not enough evidence at the moment to back up some of the fears about frontier AI systems.
So if there’s not this evidence, why are people so fixated on these concerns — and to the point where tens of millions of dollars are being pumped into research and policymaking communities pushing in this direction, and away from the near-term harms of AI and biotech?
What other factions are out there exerting influence on AI policy?
You’re starting to see this come up on the internet, actually, and I don’t know how much money is behind it, but “effective accelerationists” are growing in prominence.
This group also believes in the transformative power of AI technology, but they just feel like it’s going to be good.
You see a lot of that from Andreessen Horowitz lately, and other Silicon Valley venture capitalist groups that are increasingly concerned that effective altruists are slowing down the technology’s development.
You see it in questions around access to computing power, or a potential licensing regime — and the big thing right now, the open-source versus closed-source debate. Should these open-source models be allowed to proliferate? Or should the government come in and lock them down?
So … another group of wealthy tech investors who just want to see less regulation of AI?
You hear this characterization from a lot of AI researchers on the ground. They say AI technology is not going to be overwhelmingly transformative, neither in the super positive nor in the super negative sense. It’s going to be like any other technology, where there’s going to be fits and starts in its development and people are going to have to muddle through unexpected setbacks that arise.
That argument’s not getting a lot of money or attention. And that’s where some think tankers are really frustrated by what’s happening right now.
On the one hand we’ve got a tight network of rich and powerful people that is being compared in some corners to a cult. On the other hand, the thing binding them together is a very nerdy set of beliefs about technology. Come midnight on the next full moon, are we more likely to find effective altruists performing “Eyes Wide Shut”-style rites or DM’ing with Eliezer Yudkowsky about thought experiments?
Obviously the latter.
A group of pro-crypto super PACs backed by Andreessen Horowitz, Coinbase, and the Winklevii is raising big money to influence the 2024 election.
POLITICO’s Jasper Goodman reported on the push this morning, which has so far raised $78 million to back crypto-friendly candidates. Their project coincides with major crypto legislation working its way through the House of Representatives, and desperation from crypto boosters to rehabilitate their political image in the wake of the FTX scandal.
It’s become “more apparent that the only way to counteract the lobbies of the big banks and big tech is to show that crypto and blockchain can be a force, too,” Andreessen Horowitz’s Chris Dixon wrote on X today, announcing the firm’s investment in the pro-crypto Fairshake PAC. He said the goal is “bringing together responsible actors in web3 and crypto to help advance clear rules of the road that will support American innovation while holding bad actors to account.” — Derek Robertson
OpenAI is adopting a framework to track and prepare for what it sees as potential “catastrophic risks” posed by artificial intelligence models.
The “Preparedness Framework,” unveiled in a blog post and 27-page document Monday and reported in today’s National Security Daily newsletter, details how the ChatGPT maker will “develop and deploy our frontier models safely.” Among the steps OpenAI will take are running evaluations to assess risk, searching for “unknown categories of catastrophic risk,” and limiting deployment of models deemed too high-risk.
“The study of frontier AI risks has fallen far short of what is possible and where we need to be,” the company wrote.
OpenAI launched the framework weeks after its board ousted CEO Sam Altman over reported safety concerns before he was reinstated days later. Most of the board members who worked to remove Altman have since resigned.
The effort comes as U.S. lawmakers have struggled to regulate artificial intelligence, while Europe leads the world in passing laws that place guardrails on the tech. Last week, Pope Francis called for a global treaty to regulate AI, a move that Sen. Mark Warner (D-Va.) said Washington is not ready for. — Matt Berg
- Pioneering digital artist Vera Molnar died at the age of 99.
- Google’s AI search tool could be a threat to news publishers.
- Could driverless “microtransit” be more profitable than automated cars?
- The future of online scams: fake medical documents for anti-vaxxers.
- AI-generated news anchors are living deep in the uncanny valley.