Women and the dark side of AI
Happy Friday! Glad to be back as your host this week. Today, we’re talking about the hottest tech story right now, AI, and some of its consequences that disproportionately affect women. Send me your feedback: [email protected].
Sen. Richard Blumenthal (D-Conn.) made headlines on Tuesday when he used ChatGPT to write his opening statements for a Senate hearing on generative AI tools, to prove just how advanced the technology has become. The hearing was thorough, with eager witnesses, including OpenAI CEO Sam Altman, offering three hours of testimony on an array of hot topics: compensation for artists that AI replicates, chatbot language inclusivity and how to protect minors using the technology.
But the hearing was less thorough on one issue: the dangers that AI poses specifically to women. Blumenthal plans to rectify that omission in upcoming hearings (Tuesday’s was the first of several, according to the senator): “We’re going to have witnesses who focus on harassment of women,” and bias problems associated with the technology, he told Women Rule.
The problems that Blumenthal mentioned have been gaining traction in the media in recent months. And it’s a complex topic, because there are many ways that AI could prove harmful to women — and the risks are expanding with the technology.
For one, the introduction of more advanced and widely available AI has worsened the threat of women’s faces being falsely imposed onto photos or videos of other people — known as deepfakes — and depicted in compromising or violent situations. This form of harassment is particularly potent for women in the public eye, whose photos are widely available online.
“We’re already seeing this coordinated, chronic online abuse that’s experienced by women in politics and women in journalism,” said Gina Neff, executive director of the Minderoo Centre for Technology & Democracy at the University of Cambridge.
“What the new AI tools allow for is an amping up at unprecedented speed, scope and scale of that chronic abuse that women are already facing.”
That “unprecedented speed” is what’s worsening deepfake abuse, according to Kristen Lorene Zaleski, director of forensic mental health at the Keck Human Rights Clinic. While deepfakes are not new, they were, until recently, time-consuming and technologically complex to create. “Now, you can do it for a couple of dollars, or even on some apps and programs for free,” Zaleski told Women Rule.
Zaleski is also a licensed clinical social worker, and she’s worked with several clients who have been targets of deepfake porn. She’s noticed a societal lack of knowledge about the issue, which can have devastating effects for victims. One of her clients, who was the target of a deepfake video that was spread online, lost her job as a result.
“It’s not talked about, so people that don’t specialize in this don’t necessarily see it as an issue or understand it,” Zaleski said.
Past reports have shown that the vast majority of deepfakes online are pornographic videos that target women.
Other countries, like Australia and the United Kingdom, have passed or are working to pass legislation that would criminalize this type of abuse.
Last year, President Joe Biden signed into law the Violence Against Women Reauthorization Act, which made it possible for victims to sue for civil penalties over the nonconsensual disclosure of their intimate images, but did not extend those same protections to those affected by deepfakes, according to Rep. Joe Morelle (D-N.Y.). Morelle introduced legislation intended to mitigate the spread of harmful deepfakes earlier this month, but it has yet to see any action.
“The vastly increasing number of deepfakes is one of the reasons why we need oversight and regulation,” Blumenthal told Women Rule. “There are ways to stop people putting public figures’ faces on pornographic stars. There are ways to restrict images that are clearly intended for harassment.”
While deepfakes pose an increasingly prevalent problem, other AI-related risks also face women. Because AI relies on the endless wasteland of internet content to learn from and replicate, it runs the risk of reproducing and perpetuating societal biases.
One real-world example? The over-sexualization of run-of-the-mill photos of women.
In their research for a Guardian article, Gianluca Mauro, co-author of “Zero to AI: A Non-technical, Hype-free Guide to Prospering in the AI Era,” and New York University journalism professor Hilke Schellmann saw this bias problem firsthand.
They found that AI algorithms used by social media platforms, including Instagram and LinkedIn, decide what content to promote and what content to suppress partially based on how sexually suggestive they deem a photo to be. (The more sexually suggestive, the less likely it will be seen.)
But when the program analyzed comparable pictures of men and women in underwear, it identified the photos of women to be much more sexually suggestive. It also flagged photos of pregnant women as very sexually suggestive. As a result, posts featuring pictures of women may be more likely to be shadowbanned.
“Philosophically, it means that women’s bodies are considered sexual and therefore should not be seen,” Mauro told Women Rule.
AI systems have also reflected societal bias in hiring — Amazon had to scrap a tool intended to streamline its recruiting process after it became clear that the AI preferred male candidates. And disparities in available health data could make medical apps that use AI less accurate for women.
“It’s kind of a cop out for a lot of companies to just say, ‘well, we have societal bias against women in society, so of course that will be reflected in the algorithm,’” Schellmann said. “It’s like, ‘No, no, no. We’re trying to build technology to make the world better, not worse, and not to replicate the problems that we already have.’”
ANALYSIS: “Will Kamala Harris Harm Joe Biden’s Reelection Chances?,” by Jeff Greenfield for POLITICO Magazine: “It’s the specter of Biden’s age — the actuarial data that looms over his candidacy — that throws the ‘Harris’ question onto center stage. There’s no question the vice president will face serious scrutiny in 2024, and fairly or not, she’s struggled to win over Washington and much of the public. Particularly if the GOP sees Harris as a weaker figure than Biden, the attacks on her as a potential president will only increase.”
“Anti-abortion leaders worry they may have to oppose Trump if he doesn’t back national ban,” by Meridith McGraw and Natalie Allison for POLITICO: “Top anti-abortion leaders are continuing to lobby Donald Trump on a 15-week ban they believe should be the standard for the Republican Party.
“Their efforts come even as Trump has not only refused to embrace a ban but has framed some abortion legislation as electorally toxic. And it is being driven by a desire to avoid the politically uncomfortable spectacle of having to rebuke the man who not only delivered their movement its greatest win, but is likely to be the GOP’s presidential nominee.”
Read more here.
“In a first, women poised to become mayors of Philadelphia and Jacksonville,” by Brendan O’Brien for Reuters: “A Philadelphia Democrat who promised to hire more police has won her party’s nomination for mayor, while a former TV news anchor has ousted a Republican from the mayor’s office in Jacksonville, Florida, the largest city in the country in which the party had control.”
“‘Systematically erased’: Middle Eastern and North African women and LGBTQ+ Americans don’t see themselves in U.S. data,” by Jasmine Mithani for the 19th News.
“Beyond the ‘abortion pill': Real-life experiences of individuals taking mifepristone,” by Becky Sullivan and Selena Simmons-Duffin for NPR: “The stories illustrate how mifepristone is indeed an ‘abortion pill’ — but it also plays other important roles in people’s lives.
“Many people wrote about how they took the medicine in treatment of a miscarriage. Others used it as part of their fertility journeys. Physically, taking mifepristone and misoprostol was a seriously painful experience for some and caused few symptoms for others. Some said they had never been more sure of any decision, others wrote that they still weren’t sure if they made the right call.”
“Many Women Have an Intense Fear of Childbirth, Survey Suggests,” by Roni Caryn Rabin for the New York Times.
Rachel Lyngaas has been named the Treasury Department’s first-ever chief sanctions economist. She will join the newly created Sanctions Economic Analysis division. She last served at the International Monetary Fund. (h/t Nat Sec Daily)
Jamie Hill is joining Shallot Communications advising on strategic comms and public affairs issues. She most recently was head of comms for mental health startup Real and is an Obama HHS and Google alum. … Liza Pluto is joining Mars Inc. as media relations and issues manager on the corporate affairs team. She previously was on the MSNBC PR team.
Imani Bentham is now director for new membership development at the U.S. Chamber of Commerce. She most recently was director for engagement and membership strategy at National Journal. … Gayatri Patel is now an independent consultant for foreign policy and international development. She previously was VP of advocacy and external relations at the Women’s Refugee Commission. (h/t Playbook)
Source: https://www.politico.com/