The very strange ways AI might be coming for you
With help from Derek Robertson
A disconcerting window into just how deeply AI could interfere with human livelihoods opened in Washington yesterday.
The setting was, of all places, the Federal Trade Commission.
The FTC’s virtual roundtable on the creative economy and generative AI had immediate interest for the tech industry, since FTC Chair Lina Khan has been pushing the agency forward as a potential AI regulator while the rest of Washington largely spins its wheels on the issue.
On the regulatory front, there weren’t many surprises from the hearing; it was a listening session, not a press conference.
But for anyone trying to get a grip on just how broadly generative AI might impact human striving, some of the testimony was eye-opening, even for those of us who’ve been watching it for a while.
The lineup included models, writers, musicians and voice actors, who offered some unsettling twists on the fast-evolving picture of what, exactly, generative AI models are doing to creative work.
- In the modeling industry, Sara Ziff, founder and executive director of the Model Alliance, brought up concerns about models being asked to undergo 3D body scans without much transparency about how those scans would be used. She also raised the alarm about companies turning to AI-generated models to fill diversity quotas instead of hiring people of color.
“Earlier this year, Levi’s announced that the company is creating AI generated models to increase the number and diversity of their models,” Ziff said. “There is a real risk that AI may be used to deceive investors and consumers into believing that a company engages in fair and equitable hiring practices and is diverse and inclusive when they are not.”
- In the music industry, “the increasing scale of machine-generated music dilutes the market and makes it more difficult for consumers to find the artists they want to hear,” said Jen Jacobsen, executive director of the Artists Rights Alliance. “Musicians’ work is being stolen from them and then used to create AI generated tracks that directly compete with them,” she said. That competition concern has already become familiar in the AI debate and has echoes in other industries (including mainstream media, where AI-filled junk sites are already pulling in advertising dollars).
- The hearing had some real-world examples of how AI was being used to deliberately confuse consumers: Jacobsen pointed to a hacked podcast episode in which AI-generated voices purporting to be from the band Avenged Sevenfold told fans that its upcoming performances would be canceled — and an AI version of Tom Hanks promoting a dental plan without the actor’s consent.
- Creators raised concerns over needing to constantly “opt out” of having their work caught in the digital dragnet of AI development. Compound that with the very obvious worry that AI-generated voices, books, illustrations and music could elbow aside human artists in the long run, and you can see why the creative industry is freaked out.
So who’s going to solve the problem? The FTC has already said it wants to discourage tools that would allow the kind of “deepfake” deception that fools people into thinking they’re listening to a famous actor or favorite band without the artist’s consent.
Outside the regulatory world, some of the burden of negotiating fair terms for artists in the AI era has fallen on unions. The Writers Guild of America recently concluded a 148-day strike upon reaching a historic agreement with Hollywood studios over the use of AI.
But “the solution cannot merely be the bargaining of replacement and remuneration, if the job opportunities are replaced wholesale,” said John Painting from the American Federation of Musicians of the United States and Canada. “The solution needs to be wider than the traditional paths we’ve all taken, owing to the cultural damage that this problem yields.”
On Capitol Hill, there’s already a piece of legislation in the House, sponsored by Rep. Deborah Ross, that would grant more bargaining power to artists. The bill would create an antitrust exemption to allow artists to band together to negotiate licensing terms with major streaming platforms and generative AI developers.
Whatever happens next, the solution is likely to be as multi-pronged as the worries that are arising.
FTC commissioner Alvaro Bedoya — who has been an outspoken advocate for data privacy and digital rights in tech policy circles for years — said that the FTC’s mandate was always meant to be flexible enough to deal with innovation in unfair methods of competition.
“When I hear about new writers, young writers, worried that ‘The moment I arrive, I’m gonna be asked to feed my scripts in to train a new AI’; when I hear about background actors, young actors — how lots of future actors are discovered but who are the least powerful, least experienced, least savvy of all actors — being forced to get scanned, in the nude sometimes, or other really uncomfortable situations, it strikes me as more than innovative,” Bedoya said. “It fills me with concern.”
POLITICO’s Mark Scott poured cold water this morning on warnings that AI is going to supercharge the world of disinformation.
Writing in the Digital Bridge newsletter, Mark argued that AI-generated text and video are no more or less likely to warp the media landscape than their human-generated predecessors — and, at that, they’re symptomatic, not necessarily the cause, of a political landscape that leaves voters susceptible to such “false information” in the first place.
“No one is saying that AI-generated falsehoods (be they videos, text or images) aren’t a problem,” Scott writes. “It’s just that, given the state of social media, they are fringe issues to the main event: a decade-long polarization that has left the online world segmented along party lines; increasingly fragmented between multiple social networks; and where politicians remain the main purveyors of falsehoods.”
“Into that complex mix, artificial intelligence just isn’t going to move the needle beyond making existing problems worse,” he continues. (If you missed it, earlier this week Mark took an in-depth look at exactly what those existing problems are ahead of a sure-to-be-intense 2024 election cycle.) — Derek Robertson
More bad news for NFT holders: Even the most powerful power users are vulnerable to having their tokens stolen.
Fred Wilson, a venture capitalist with deep ties to the world of Web3, wrote on his blog recently about his experience getting scammed out of 46 NFTs through a link to a fake NFT drop. Wilson explains how he was tricked, with a few words to the wise: “The fact that I was signing transactions in the same wallet where I keep my NFTs is also bad practice and I knew it… Signing transactions is risky business and needs to be done carefully.”
Wilson said he eventually recovered most of his NFTs, but the story has greater implications than that. As fellow VC and analyst Benedict Evans wrote in his weekly newsletter yesterday, “this is supposed to be better than US banking infrastructure, and if someone who’s been deep in the weeds of this can get robbed so easily, this isn’t ready yet.” — Derek Robertson
- Wall Street is determined to use AI to beat the market.
- One writer argues that over-regulating AI could make it more dangerous.
- An AI chatbot could help you pick your next favorite movie or book.
- Sales teams are happy to surrender (at least parts of) their jobs to AI.
- An increasingly popular conspiracy theory posits the NSA invented Bitcoin.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Source: https://www.politico.com/