The Pentagon’s endless struggle with AI
With help from Derek Robertson
War is changing — fast. And that dizzying pace extends to the world of military tech, as POLITICO’s Mohar Chatterjee reported in detail for a story today exploring how the Department of Defense is struggling to bolster its AI capabilities and keep up with the likes of Russia and China. Read the full story here, and an excerpt below.
Russia’s use of military drones in Ukraine has grown so aggressive that manufacturers have struggled to keep up. China’s strategy for a “world-class military” features cutting-edge artificial intelligence, according to Xi Jinping’s major party address last year.
The Pentagon, meanwhile, has struggled through a series of programs to boost its high-tech powers in recent years.
Now Congress is trying to put new pressure on the military, through bills and provisions in the coming National Defense Authorization Act, to get smarter, faster, about cutting-edge technology.
Defense experts widely believe the future competitiveness of the U.S. military depends on how quickly it can purchase and field AI and other cutting-edge software to improve intelligence gathering, autonomous weapons, surveillance platforms and robotic vehicles. Without it, rivals could cut into American dominance. And Congress agrees: At a Senate Armed Services hearing in April, Sen. Joe Manchin (D-W.Va.) said AI “changes the game” of war altogether.
But the military’s own requirements for purchasing and contracting have trapped it in a much slower-moving process geared to more traditional hardware.
To make sure the Pentagon is keeping pace with its adversaries, Sens. Mark Warner (D-Va.), Michael Bennet (D-Colo.) and Todd Young (R-Ind.) introduced a bill this month to analyze how the U.S. is faring on key technologies like AI relative to the competition.
The 2024 NDAA, currently being negotiated in Congress, includes several provisions that target AI specifically, including generative AI for information warfare, new autonomous systems and better training for an AI-driven future.
Other members of Congress have started to express their concerns publicly: Rep. Seth Moulton (D-Mass.), who sits on the House Armed Services Committee, told POLITICO that the military had fallen “way behind” on AI and that military chiefs had received “no guidance.”
Sen. Angus King (I-Maine), who sits on the Senate Armed Services Committee, called Gen. Mark Milley, chairman of the Joint Chiefs of Staff, in March, looking for answers on whether the DOD was adapting to the “changing nature of war.” In response, Milley said the military was in a “transition period” and acknowledged it urgently needed to adapt to the new demands of warfare.
As AI has quickly become more sophisticated, its potential uses in warfare have grown. Today, concrete uses for AI in defense range from piloting unmanned fighter jets to serving up tactical suggestions for military leaders based on real-time data from the battlefield. But it still accounts for only a sliver of defense spending: This year the Pentagon requested $1.8 billion to research, develop, test and evaluate artificial intelligence — a record, but still a small fraction of the nearly $900 billion defense budget. Separately, the Pentagon asked for $1.4 billion for a project to centralize data from all the military’s AI-enabled technologies and sensors into a single network.
For years, the Pentagon has struggled to adapt quickly to not just AI, but any new digital technology. Many of these new platforms and tools, particularly software, are developed by small, fast-moving startup companies that haven’t traditionally done business with the Pentagon. And the technology itself changes faster than the military can adapt its internal systems for buying and testing new products.
A particular challenge is generative AI, the fast-moving new class of systems that can generate humanlike text and reasoning, and that are growing in power almost month to month.
To get up to speed on generative AI, the Senate version of the 2024 NDAA would create a prize competition to detect and tag content produced by generative AI, a key DOD concern because of the potential for AI to generate misleading but convincing deep fakes. It also directs the Pentagon to develop AI tools to monitor and assess information campaigns, which could help the military track disinformation networks and better understand how information spreads in a population.
And in a more traditional use of AI for defense, the Senate wants to invest in R&D to counter unmanned aircraft systems.
Another proposed solution to rev up the Pentagon’s AI development pipeline is an entirely new office dedicated to autonomous systems. That’s the idea being pushed by Rep. Rob Wittman (R-Va.), vice chair of the House Armed Services Committee, who co-sponsored a bill to set up a new Joint Autonomy Office that would serve all the military branches. (It would operate within an existing central office of the Pentagon called the Chief Digital and Artificial Intelligence Office, or CDAO.)
The JAO would focus on the development, testing and delivery of the military’s biggest autonomy projects. Some are already under development, like a semi-autonomous tank and an unmanned combat aircraft, but are being managed in silos rather than in a coordinated way.
The House version of the 2024 NDAA contains some provisions like an analysis of human-machine interface technologies that would set the stage for Wittman’s proposed office, which would be the first to specifically target autonomous systems, including weaponry. Such systems have become a bigger part of the Pentagon’s future defense strategy, driven in part by the success of experimental killer drones and AI signal-jamming in the Ukraine war.
Read about the DoD’s previous efforts and more here.
Having trouble keeping track of what AI leaders think about the whole “civilizational risk” thing?
The Institute of Electrical and Electronics Engineers’ magazine, Spectrum, has compiled a handy scorecard breaking down in simple terms where figures like OpenAI’s Sam Altman and Oxford professor Nick Bostrom fall on the spectrum of AI doomerism. They use a few simple heuristics like “Is the success of GPT-4 and today’s other large language models a sign that an [artificial general intelligence] is likely?”, and “Is an AGI likely to cause civilizational disaster if we do nothing?” It includes some telling signature quotes like:
“We can imagine other futures, but to do so, we have to maintain independence from the narrative being pushed by those who believe that ‘AGI’ is desirable and that LLMs are the path to it.” — AGI skeptic and University of Washington professor Emily M. Bender.
“Variations of these A.I.s may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.” — Bostrom, one of the earliest public figures concerned about AI’s “existential” risk.
“Why do we need to create these? What are the collateral consequences of deploying these models in contexts where they’re going to be informing people’s decisions?” — Signal Foundation President Meredith Whittaker. — Derek Robertson
Ever since Congress approved rules back in 2021 to make it easier for the IRS to track digital currencies, the industry has been preparing for a crackdown.
It hasn’t come. That’s what POLITICO’s Brian Faler reported yesterday, as Washington scratches its collective head over radio silence from the Biden administration on what the actual, nitty-gritty requirements for crypto reporting will be.
“This is the single easiest thing they can do to improve compliance, and they’re not doing it,” said Lisa Zarlenga, a former Treasury tax official and now cryptocurrency tax expert at the law firm Steptoe & Johnson.
A Treasury spokesperson said the department is “working diligently to finalize these important and complicated regulations.” Meanwhile the industry continues to chug along apace, to the chagrin of crypto critics on the Hill: “The SEC has proved they’re not afraid of the crypto bros, I know you’re not afraid of the crypto bros, I hope the IRS is not afraid of them — when are we going to see these regulations?” asked Rep. Brad Sherman (D-Calif.). — Derek Robertson
- A judge fined the lawyers who cited imaginary ChatGPT-generated precedent.
- Japan is taking a more aggressive approach to the chip war.
- …Why, exactly, does Elon Musk want to fight Mark Zuckerberg?
- New York state is buying a supercomputer to help it understand AI.
- DeepMind’s CEO is touting a big leap to surpass ChatGPT.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Source: https://www.politico.com/