The global summit to solve the future
A funny thing has happened in the year-plus since this newsletter launched: A good deal of the “future” technology and policy we’ve set out to cover has made its way into the present.
At POLITICO’s Global Tech Day, held today in London, a global group of policymakers and thinkers gathered to hash out how governments might respond to dizzying developments in AI, digital payments, and competition with China. Some of the day’s highlights, issue by issue:
Artificial intelligence: Sen. Ted Cruz (R-Texas) kicked off the day with some incendiary comments about Congress’ ability to meaningfully regulate AI, saying with Texan tact that the body “doesn’t know what the hell it’s doing” on the topic.
“This is an institution [where] I think the median age in the Senate is about 142. This is not a tech savvy group,” Cruz said, before criticizing the European Union’s sweeping approach to AI regulation through the forthcoming AI Act. By his lights, however, U.S. lawmakers’ cluelessness might not be a bad thing: He compared America’s hands-off approach favorably to Europe’s sticky fingers, saying the latter was “far less concerned with creating an environment where innovation can flourish.”
Sen. Mark Warner (D-Va.) chimed in from the other side of the aisle on Senate Majority Leader Chuck Schumer’s recent push to legislate around AI, saying that from his perspective as chair of the Senate Intelligence Committee, it’s a national security issue as well as a tech one.
“Many of us believe that we are in an enormous technology competition, particularly with China, and that national security means winning the battle around AI,” Warner said.
A group of European regulators were, of course, on hand to (implicitly) defend and discuss their approach — particularly when it comes to generative AI, the popular rise of which occurred late into the writing of the AI Act. Regulators from the U.K., Italy, and Romania discussed the very practical, real-world regulatory problems AI poses already, especially around data and privacy.
“Privacy is one [AI concern]... but also beyond privacy, there are issues of bias and discrimination” around generative AI, said the European Commission’s Lucilla Sioli, by way of touting the AI Act’s “risk-based” regulatory approach. That structure places tighter restrictions on AI use depending on the sensitivity or potential for harm in certain tasks it might be used for.
That’s the stuff we know. The assembled regulators also addressed the inherently unknowable parts of AI risk, with Stephen Almond of the U.K.’s Information Commissioner’s Office saying the country already treats potentially existential AI risk as part and parcel of its overall policy approach.
Almond said he doesn’t think of that risk as separate from today’s policy issues, but that “the bigger risk is a progression in the growth of technology… by solving the immediate, here-and-now risk we get better and better, and we can put in place the institutions that we need.”
Digital payments: Jon Cunliffe, a deputy governor of the Bank of England, spoke with POLITICO’s Izabella Kaminska about the U.K.’s investigation of a potential “digital pound,” a central bank digital currency similar to one the U.S. is exploring. (And yes, China has already embraced its own.)
Cunliffe said the case for a British CBDC is that it would “ensure confidence in money, and the uniformity of money” in a sometimes-bewildering digital marketplace. “People won’t have to think, ‘Am I using a stablecoin? Am I using an HSBC deposit? What form of money is this, what is it worth?’”
He warned that “retrofitting legislation on them, once they become established, is hugely difficult,” and that the U.K.’s proactiveness is an attempt to “deal with likely futures before we’re surprised, and suddenly we’re running after them trying to catch up.” (DFD’s Ben Schreckinger reported Monday on how Europe sees an opportunity to potentially surpass the U.S. on the new technology, given pushback from conservatives in Washington and Silicon Valley.)
Global competition: Let’s be real — one of the main reasons we’re here, both reading (and writing) this newsletter and listening to the machers in London today, is that it really matters who gets to write the rules for these powerful new technologies. That self-awareness was markedly on display today, especially as POLITICO’s Mark Scott and Brendan Bordelon reported on the growing unease in Europe and among some Asian countries with the U.S.’ efforts to box out China on tech.
Officials from Singapore, the European Commission, and Malaysia all insisted that they would continue to engage with China. “Malaysia is a neutral country, we do adhere to a free market policy,” said Fahmi Fadzil, Malaysia’s communications and digital minister.
And it’s not just China that scrambles the calculus when it comes to how governments deal with these slippery, border-crossing digital technologies. Julie Brill, Microsoft’s chief privacy officer, sat down with POLITICO CEO Goli Sheikholeslami to argue for more transatlantic collaboration on tech. “We need to see regulators move forward starting to demand transparency” and “make companies live up to what they’re supposed to be doing.”
On the ongoing story of whether AI should be allowed to kill you: Kathleen Hicks, the Deputy Secretary of Defense, wrote in POLITICO Magazine today to explain how the Pentagon is deploying artificial intelligence.
Hicks first points out that the DOD has been working on this for quite some time: there’s the responsible use policy from 2012 (which was updated in January), a series of “ethical principles” published in 2021, and a “responsible AI strategy” from last year. But this field is moving fast. What does Hicks have to say about the Pentagon’s plan to keep up, especially as geopolitical competition with China engulfs the world of tech?
“Our commitment to values is one reason why the United States and its military have so many capable allies and partners around the world, and growing numbers of commercial technology innovators who want to work with us,” Hicks writes. It’s a line that continues the Pentagon’s efforts to frame the global tech competition as a philosophical one that will mirror military and diplomatic efforts to ensure it’s more in countries’ self-interest to align with U.S. principles instead of China’s.
Hicks adds a disclaimer: “Even as our use of AI reflects our ethics and our democratic values, we don’t seek to control innovation. … While that makes me choose our free-market system over China’s statist system any day of the week, it doesn’t mean the two systems cannot coexist.” She also drops a few tidbits for the safety-minded when it comes to automated weapons, including “a bright line when it comes to nuclear weapons” that would ensure they’re impossible to deploy without human involvement, a refusal to “use AI to censor, constrain, repress or disempower people,” and a hands-off approach to the industry itself.
A new academic pre-print posits there might be a limit to how much AI-generated content can fill up the internet before making AI systems themselves unusable.
A group of British and Canadian researchers found that once AI models start being trained on AI-generated content, they essentially… break.
“We find that use of model-generated content in training causes irreversible defects in the resulting models,” they write, calling the resulting phenomenon “model collapse.” When a model “collapses” they find that it becomes pretty much useless, forgetting the original, human-generated data on which it was trained and producing more and more errors and nonsensical output.
They propose that developers ensure that an AI-content-free, human-generated dataset is always available to retrain on or reintroduce to their AI models. As one of the researchers told VentureBeat: “Data needs to be backed up carefully, and cover all possible corner cases. … As progress drives you to retrain your models, make sure to include old data as well as new. This will push up the cost of training, yet will help you to counteract model collapse, at least to some degree.”
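To make the mechanism concrete, here’s a minimal, hypothetical sketch in Python (not the paper’s actual experiment): the “model” is just a Gaussian fit to its training data, and each generation trains only on samples drawn from the previous generation’s fit.

```python
# Toy illustration of "model collapse": a model retrained repeatedly on
# its own output drifts away from the original human-generated data.
# Hypothetical sketch for intuition only, not the researchers' setup.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 50       # small training sets make the effect visible
n_generations = 20

# Generation 0: the "human-generated" data.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(n_generations):
    # "Train": fit mean and standard deviation to the current data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Next generation trains purely on model-generated samples.
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)

# The researchers' proposed fix maps onto keeping the original human
# data in the training mix, e.g.:
#   data = np.concatenate([original_data, model_samples])
```

Run it and the fitted standard deviation tends to shrink generation after generation while the mean wanders: a crude analogue of the “forgetting” the researchers describe, and the kind of drift that retraining on preserved human data is meant to counteract.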
- IBM is touting a massive breakthrough in quantum computing.
- AI could create awkward, potentially dangerous power struggles in the hospital.
- Lawmakers and VCs are trying to correct recent history on AI regulation.
- The U.K. is warning businesses to tackle privacy risks before adopting AI.
- Oops: Twitter got evicted from its Boulder office over unpaid rent.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Source: https://www.politico.com/