The AI-sized holes in the UN cybercrime treaty
With help from Derek Robertson
Negotiators at the United Nations are grappling with how to address artificial intelligence and potential state surveillance of political dissidents in a new cybercrime treaty that’s in the works.
Like many tech policy discussions lately, the rapid emergence of AI as a dual-use tool for both carrying out and defending against cyberattacks has thrown a wrench into the proceedings in New York, where negotiators are sketching out how countries should cooperate on cybercrime investigations. The treaty would bind countries to common standards for sharing data and information, shaping how they handle criminal investigations in the digital realm for decades to come.
With the current session wrapping up on Sept. 1, negotiators from member states are duking it out over critical definitions in the treaty, with wide-reaching implications for what qualifies as a cybercrime and what safeguards need to be placed on the flow of information between countries.
One of the core tensions playing out is how much information the U.S. and its allies must provide to countries with less-than-democratic regimes, such as Russia and China — particularly on cybercrime investigations that could double as surveillance operations.
Some countries want the treaty to broadly cover the misuse of information and communication technologies, which would allow access to “everything that touches the flow of data,” said Deborah McCarthy, a retired ambassador who is the U.S.’ lead negotiator on the treaty. “That will include AI, in all aspects, in all its forms,” she said.
The United States wants more specific definitions, and for the treaty to focus instead on a narrow set of crimes, in order to limit the control a country can exert over its own or other nations’ information space.
Digital rights advocate Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation, said the broad scope of the current treaty could authorize sharing personal data with law enforcement in other countries — including biometric information and datasets used to train AI. Rodriguez said the treaty’s lack of precision about what kinds of data can be shared “could potentially lead to sharing of intrusive data without a specific assistance request.”
“In theory, the long arm of this treaty could access citizens in other countries who may express opinions counter to the government of the country that is requesting [information on the citizen],” McCarthy said. “And we’re saying no, it has to be for certain crimes, under certain conditions and safeguards would apply.”
Negotiators will hammer out safeguards this afternoon for the flow of information between law enforcement agencies, McCarthy said. The U.S. and its allies specifically want to lay the groundwork for denying information-gathering requests that could be used to target political dissidents.
Digital rights advocates are also worried that, in its current iteration, the treaty’s broad definitions of cybercrime might criminalize legitimate cybersecurity research on emerging technologies like AI, chilling work in the field.
Protections for private citizens carrying out cybersecurity research are still under debate on the global stage, even as the U.S. federal government turns to hackers to help it catch vulnerabilities in large language models. Raman Jit Singh Chima, Asia policy director and senior international counsel for the digital rights advocacy group Access Now, said the UN treaty does “not actually help those who are trying to make sure that AI does not result in an explosion in cybercrime.”
McCarthy noted that the need for built-in protections for cybersecurity researchers was a “consistent message” from industry, think tanks and human rights groups, and that proposals for such protections are “still being discussed.”
With the new school year here, educators are slowly learning to embrace ChatGPT and other AI tools in the classroom.
That’s the main takeaway from a report this morning by POLITICO’s Blake Jones, Madina Touré, and Juan Perez Jr., who write that after early bans and panic over the technology, it is now being consciously integrated into curricula across the country.
Olli-Pekka Heinonen, the director general of the International Baccalaureate program, told them that “AI will be affecting societies to a large extent, and they are so strongly influencing the basic ways of how we make sense of reality, how we know things, and how we create things, that it would be a mistake if we would leave schools out of that kind of development.”
Although individual schools and local and state governments are getting more ChatGPT-friendly, there still isn’t an education-focused regulatory response to the technology (with the exception of guidance issued in May by the Department of Education for personalized learning). The POLITICO team reports that nonprofits, unions, and educators are largely concerned with privacy, security, and job preparation. — Derek Robertson
What does one of the highest-profile champions of open technology think about Elon Musk’s efforts to crowdsource fact-checking on X?
Ethereum founder Vitalik Buterin offered his thoughts in a recent blog post, arguing that the “community notes” feature meant to provide Wikipedia-like, consensus-driven fact-checking on the platform formerly called Twitter is not only “informative and valuable” but highly aligned with the ethos of the crypto world.
“Community Notes are not written or curated by some centrally selected set of experts; rather, they can be written and voted on by anyone, and which notes are shown or not shown is decided entirely by an open source algorithm,” Buterin writes. “It’s not perfect, but it’s surprisingly close to satisfying the ideal of credible neutrality, all while being impressively useful, even under contentious conditions, at the same time.”
He writes that although it doesn’t quite add up to the vision of “decentralized” social media that many in the crypto world hold, it could play a big role in driving interest in, and preference for, the principles that the open-source world holds dear. — Derek Robertson
- A computer science student explains his breakup with an AI “agent.”
- A pioneer in the world of “feeling” prosthetic limbs has died at 49.
- British chip company Arm’s upcoming IPO will test the investment appetite for AI.
- What’s the best way to get your resume noticed in an AI-driven hiring world?
- Sam Altman’s ambitious Worldcoin project is already under serious threat.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.