The dawn of 'based AI'
Elon Musk is already shifting his focus from Twitter.
Spurred by concerns about anti-conservative bias in ChatGPT (which DFD covered last month), Musk simply, yet enigmatically, tweeted yesterday afternoon: “BasedAI.”
Followed, of course, by a meme.
A bit of explanation is probably in order. The slang term “based” is frequently used in extremely online political circles as a sort of antonym to “woke,” describing any form of right-wing political speech or action that sufficiently shocks liberals. And sure enough, as the richest man in the world is wont to do, Musk apparently plans to turn his fantasy of a “based” AI interface into reality: The Information reported yesterday afternoon that he’s approached artificial intelligence researchers about building just that.
But as with most things at the messy intersection of politics and tech, there’s no small amount of discord on the right about whether that’s a good idea. Matthew Mittelsteadt, a tech researcher at the free-market-oriented Mercatus Center, decidedly thinks it is not. He tweeted yesterday that not only is AI too expensive to allow companies to cater to ideologues a la carte, but that it would decrease their competitiveness in AI’s burgeoning global arms race.
We spoke today about how early in the game it still is for these systems, which makes any AI’s shakedown cruise an inevitable showcase for bias — and why introducing that bias on purpose would be an even bigger mistake.
“What Musk is proposing is intentional bias,” Mittelsteadt told me this afternoon. “The system he wants to create would be one that's intentionally trying to serve a very limited, nationalistic, quote-unquote ‘based’ worldview. He’s proposing a system that is the very problem he wants to work against.”
Just after we published our report on ChatGPT’s ostensible political bias, its creators published an essay that went a modest way toward explaining the chatbot’s behavior. As a brief example of the bias in question: When I asked ChatGPT to compose a poem celebrating Sen. Ted Cruz (R-Texas), it refused, citing political concerns; it was happy to oblige when I asked for one lauding Rep. Ilhan Omar (D-Minn.).
“Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features,” the authors wrote, explaining the process of human input and review that shapes the chatbot’s “rules” for what it can and cannot say. The company promised further tweaks and a review process that, even if it doesn’t satisfy everyone, will at least be more transparent going forward.
But people are upset now. Mittelsteadt argued to me that the furor over ChatGPT’s purported bias is a result of the technology’s extreme novelty, combined with our baked-in societal expectation for computer systems to provide objective, black-and-white answers and solutions.
“Expectations are too high, and somewhat divorced from reality,” he said. “Any expectation that these things wouldn’t have some form of bias is off-base.”
It’s a technological catch-22: The more sophisticated these systems become, the more rigorous and objective we expect them to be. But the more powerful they are, the more we apply them to messy human problems that have no “correct,” calculable answer.
“These things can do more, they can deal with fuzziness that previous digital technologies simply would fail in the face of, and that is amazing,” Mittelsteadt said. “But reality is fuzzy, and not all things have clear answers — in some cases AI will make good decisions, but there are always going to be corner cases where it fails or does not meet up to human expectations and we need to start getting used to that.”
So what’s the solution? Mittelsteadt argued it’s partially just the passage of time, as the novelty of the technology wears off, engineers figure out which behaviors users like, and practical uses for tools like ChatGPT overshadow their potency as partisan footballs. Then there are the incentives he described in his Twitter thread, as a global technology market will likely mean that parochial American culture-war issues take a back seat to the tech’s profitability.
“If you want to serve foreign markets that don’t have our opinions on what is or is not ‘based’, you need to accommodate pluralism and low levels of cultural nuance,” he said. “These systems will focus more on facts, and be less inclined toward American politics and our particular culture wars.”
One catch: That scenario applies to the giant companies like OpenAI, Microsoft, and Alphabet competing internationally. What about the little guy? Mittelsteadt pointed out the gap created by programs like the Biden administration’s “Blueprint for an AI Bill of Rights” from last year, which establishes ethical principles that companies might someday have to abide by to qualify for government support.
Right now those principles are fairly uncontroversial. But if AI does become the kind of culture-war issue Musk seems to perceive, that could change — creating two classes of AI development: one for the major companies that don’t need the government’s help anyway, and one for smaller developers encouraged to be “based” or otherwise, depending on which way the political wind is blowing.
In a speech at the Atlantic Council this afternoon, a top U.S. Treasury official announced the department’s plans to explore a potential U.S. central bank digital currency (CBDC).
Nellie Liang, undersecretary for domestic finance, said that the Treasury is “engaging in the technological development of a CBDC so that we would be able to move forward rapidly if a CBDC were determined to be in the national interest,” and that officials “will begin to meet regularly to discuss a possible CBDC and other payments innovations” and will develop “an initial set of findings and recommendations.”
POLITICO’s Victoria Guida pointed out for Pro subscribers that the main areas of focus appear to be “whether a digital dollar would advance U.S. policy objectives around global financial leadership, national security, privacy, illicit finance and inclusion; the features it would need; options for trade-offs among those objectives; and areas where additional technological research would be useful.”
Federal Reserve chief Jerome Powell said last year that any action around “minting” a potential U.S. CBDC would need executive and congressional approval to move forward.
European Union researchers have a new, cautiously optimistic report on the use of VR technology in education and health care.
In it, a team of Vilnius- and Brussels-based analysts runs down a list of promising “use cases” in each field before noting the significant barriers to adoption that still exist. For education, they note the technology’s potential benefits for training employees in the trades, language learning and remote collaboration between students; in health care, they point out that it can be used to assist with surgeries or help provide patients with rehabilitation, among other things.
But they also note a series of roadblocks to VR’s adoption, including a low level of “awareness and acceptance,” “a lack of skilled [VR] professionals,” and the still relatively crude state of VR devices themselves. They also point out that significantly more research needs to be done on the technology’s ethical concerns and potential negative effects, recommending that the EU ramp up its own efforts.
A bipartisan pair of legislators is criticizing Meta’s plans to open its metaverse platform to users 13 and up, down from the current minimum age of 18.
Reps. Ken Buck (R-Colo.) and David Cicilline (D-R.I.) sent a letter yesterday afternoon to Mark Zuckerberg arguing that “Given Meta’s history of failing to protect our youth on older services like Instagram and the emerging problems Horizon Worlds raises, this decision is extremely concerning,” and asking the company to clarify how it will enforce the age policy and what steps will be taken to protect younger users.
Child safety is the ur-policy issue of the metaverse, given both the issue’s bipartisan appeal and the reality that if a 3D virtual world does supersede our current one, its adoption will almost certainly be driven by younger users who grow up with the technology. That hasn’t yet led to any meaningful legislative action, however, as the last Congress’ efforts to update now-decades-old legislation on child safety and privacy failed despite presidential support.
- German media mogul (and owner of this website) Mathias Döpfner says media companies need to take ChatGPT seriously.
- A third FTX executive has pleaded guilty to criminal charges.
- The U.K. approved a merger between two major British and American satellite companies.
- Read one writer’s reverie of a nuclear-fusion utopia.
- AI is leveling the playing field between startups and big tech in Asia.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.