The future, in books
While the most zealously future-minded might proclaim the death of print, good old-fashioned dead-tree books had a lot to say this year about the forces shaping our tomorrow.
With that in mind, today’s edition of DFD will recap some of the best (or simply most notable) books of the year: those that chronicled the likes of Elon Musk and Sam Bankman-Fried, took stock of the biggest ideas driving modern innovation, and even peered at the future through the lens of historical fiction. Without further ado, broken up into a handful of sections:
The portraits. Arguably the year’s biggest publication was Walter Isaacson’s long-awaited biography “Elon Musk,” which we covered on its release — a tome that, typical of Isaacson’s biographies of figures like Steve Jobs, trades a more critical eye for the up-close-and-personal access that gives readers insight into what makes its subject tick. The answer, naturally, is AI. As I wrote in September, “The portrait that emerges is one that resembles a hard-charging, frequently alienating Gilded Age-style captain of industry with a particular fixation on AI that ties everything together, from his vision for a driverless planet to his personal relationships with figures like Larry Page and Sam Altman.”
Another ballyhooed personal chronicle was Michael Lewis’ “Going Infinite: The Rise and Fall of a New Tycoon,” which told the story of Sam Bankman-Fried as his crypto empire suddenly collapsed. The book’s perspective is unusual for Lewis, whose previous works like “The Big Short” and “Moneyball” told stories of likable genius underdogs… the exact kind of person Bankman-Fried appeared to be. Lewis’ book, its protagonist-driven storytelling coupled with the author’s growing awareness of SBF’s glaring flaws, is a perfect reflection of how rapidly modern tech “heroes” emerge and then self-destruct.
The ideas. Ideas deserve profiles, too. And this year didn’t feature any shortage of them: “Invention and Innovation: A Brief History of Hype and Failure,” by Vaclav Smil, put a sharp focus on the hype cycle that drives so many tech innovations, like AI and biotech. In one example, he turns his lens on the often-promised microreactor revolution in nuclear power, scoffing that as yet “no nation has announced any specific, detailed, and binding commitment to what would have to be a multidecadal program of reactor construction.” In other words, as we’ve noted here in DFD, the future might be coming, but it’s often hard to build.
Another notable book this year came from the Stanford Institute for Human-Centered Artificial Intelligence’s Fei-Fei Li. Li’s “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI” combines memoir and commentary, describing how her personal experience as an immigrant inspired her to put the human experience first in her AI research and development. As she told me in November: “The arc of the book is a scientist eventually turning into a humanist, because in the future we will need scientists, social scientists, humanists, and everyone on the planet to have a moral understanding of the world as powerful technologies like AI shape our civilization.”
The rest. A handful of books published this year defy neat categorization while sparking plenty of imagination. Naomi Klein’s “Doppelganger” is an investigation of an experience that’s less sui generis than it might seem at first: Klein, a veteran left-wing writer and activist, finds herself increasingly mistaken online for Naomi Wolf, the author of the 1991 feminist bestseller “The Beauty Myth” who became a right-wing anti-vaccination activist. Klein’s lesson is not about her own nominative misfortune, but about what she calls a “doppelganger culture” where digital media distorts individuals and their political views into amorphous masses of “others” whom one can endlessly villainize and oppose.
Brian Merchant’s “Blood in the Machine: The Origins of the Rebellion Against Big Tech” uses a historical example to show what happens when average people are economically squeezed or alienated by strange new technology — namely, they revolt. The book is a history of 19th-century Luddite opposition to the mechanized workplace, explicitly framed as an allegory for modern tech-driven displacement, from gig economy apps like Uber to AI agents potentially replacing human workers. The Luddites, of course, didn’t exactly “win,” but their activism reveals the extent to which governments and employers often take sides in debates over tech.
Finally, a novel. “The MANIAC” by Chilean writer Benjamín Labatut imagines the life of Hungarian-American scientist John von Neumann, a pioneer in mathematics and quantum physics who contributed to the Manhattan Project. Von Neumann’s constant search for objective, quantitative answers to his heady questions about the functioning of the universe and even his own inner life mirrors the modern societal compulsion to “optimize” our economies and even our personal lives via omnipresent information technology and increasingly sophisticated AI. Spoiler alert: Humans, as it turns out, are inevitably a little bit messier than all that.
Americans largely favor European-style AI regulation, according to a poll from the Artificial Intelligence Policy Institute published today.
Surveying 1,222 adults on Dec. 13, the AIPI found that respondents support the passage of the European Union’s AI Act by nearly a four-to-one margin, and that nearly two-thirds of respondents support similar legislation in the United States. More than half of respondents also said that Stability AI, the company that developed Stable Diffusion, an image-generating AI model that has been linked to increasingly sophisticated deepfakes, should be held legally liable when its software is used to create nonconsensual pornographic images.
AIPI executive director Daniel Colson said in a statement that the findings make “abundantly clear that the American public isn’t convinced by what the tech companies have been selling, and that they much prefer a slower, more controlled approach to AI than one that entails high levels of risk.” Europe’s AI Act imposes guardrails and transparency requirements on AI models according to a hierarchy of “risk” based on where and how they’re used in society (in home lending, welfare system management, employment and other critical systems, for example).
The pollsters also asked respondents about AI-generated media, and found little tolerance for it: 84 percent called Sports Illustrated’s recent practice of publishing AI-generated stories under fake bylines unethical, and 80 percent (!) said the practice should be outright illegal.
Another day, another quantum breakthrough.
The website SciTechDaily reported Friday on a recent paper published by a group of Caltech researchers who say they’ve found an effective method for “erasure” of the errors that plague current quantum computing systems. The researchers used lasers to identify and remove atoms that aren’t “behaving” in a way conducive to the system’s functioning — a key step in preserving the fragile quantum state that makes quantum computers function.
“It’s normally very hard to detect errors in quantum computers, because just the act of looking for errors causes more to occur,” Adam Shaw, one of the study’s co-lead authors, told SciTechDaily. “But we show that with some careful control, we can precisely locate and erase certain errors without consequence, which is where the name erasure comes from.”
For more on the extent to which promising research like this interacts with the legislative R&D ecosystem — like, for example, the reauthorization of the National Quantum Initiative Act making its way through Congress at the moment — read last Thursday’s DFD.
- A group in California is planning to build an environmentally sustainable future city.
- Meta’s new smart glasses might come with serious privacy concerns.
- Learn how the chips that power AI actually work.
- A photonic computing startup is now valued at $1.2 billion.
- Spike Jonze’s “Her” turns 10 amid the AI transformation it imagined.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.