When AI thinks ‘surgeon,’ it pictures a white man
Artificial intelligence systems designed to generate an image from a text request tend to think nearly all surgeons are white men.
Researchers at Brown University, Mass General Hospital and other universities and hospitals came to that conclusion after testing three prominent text-to-image systems for a study published today in JAMA Surgery.
How so? The researchers tested three popular text-to-image systems: DALL-E 2 from OpenAI, Midjourney version 5.1 from Midjourney and Stable Diffusion 2.1 from Stability AI.
They asked each system to generate images of surgeons working in eight different surgical specialties. They also asked each system to show images of surgical trainees, a more diverse demographic.
DALL-E 2 produced images of nonwhite and female surgeons at rates matching their representation in the profession but produced too few images of diverse surgical trainees.
Midjourney and Stable Diffusion produced images of white men almost exclusively.
The researchers attributed the more representative images from the OpenAI system to a process the company developed to incorporate user feedback.
Takeaways: The researchers said the results offer a cautionary tale: “Adoption of new medical technologies carries the potential for exacerbating, rather than ameliorating, disparities in patient outcomes due to differences in access, adoption, or clinical application.”
Even so: DALL-E 2’s performance suggests that improvements are feasible with better system design.
This is where we explore the ideas and innovators shaping health care.
Harvard public health researchers helped American Airlines flight attendants make their case against clothing manufacturer Twin Hill, alleging that the uniforms it made for the airline caused health problems. A California jury awarded the attendants more than $1 million earlier this month.
Share any thoughts, news, tips and feedback with Carmen Paun at [email protected], Daniel Payne at [email protected], Evan Peng at [email protected], Ruth Reader at [email protected] or Erin Schumaker at [email protected].
Send tips securely through SecureDrop, Signal, Telegram or WhatsApp.
Today on our Pulse Check podcast, host Lauren Gardner talks with POLITICO health care reporter Kelly Hooper, who explains Connecticut’s approach to covering pricey weight-loss drugs in its employee health plans by tying coverage to lifestyle programs.
Teens are concerned that artificial intelligence tools like ChatGPT, the bot that uses machine learning to answer questions, could be used in cyberbullying campaigns or to otherwise harass people, according to a new survey from the Family Online Safety Institute.
How so? The institute, a Washington-based group that seeks to keep kids safe online, polled approximately 3,000 parents and 3,000 teens in the U.S., Germany and Japan.
Overall, parents and teens shared many of the same worries — that generative AI like ChatGPT could lead to job losses and spread misinformation.
But only teens raised the possibility of cyber harassment.
Why it matters: Earlier this year, the Centers for Disease Control and Prevention reported that 20 percent of high school girls and 11 percent of high school boys said they were cyberbullied in 2021.
Lawmakers are already concerned with how online platforms can affect kids’ mental health.
The Senate is considering bipartisan legislation, the Kids Online Safety Act, to give parents and teens more control over their online experience.
Meanwhile, in October, 33 attorneys general sued Meta, the parent company of Facebook, asserting the company designed its platforms in ways that harm children’s mental health.
Health insurers are eager to use AI to speed coverage decisions, but the practice will have to survive legal scrutiny.
A new class-action lawsuit aims to test it. Relatives of some UnitedHealthcare patients who died have hired California’s Clarkson Law Firm and are suing the insurance giant in federal district court in Minneapolis-St. Paul.
They claim the insurer wrongly denied elderly patients’ Medicare Advantage claims because artificial intelligence told it to.
How so? The complaint says UnitedHealthcare, the nation’s largest insurer, used a proprietary algorithm from naviHealth called nH Predict to deny follow-up care to elderly patients after a hospital stay.
The technology allegedly estimates how much care a patient should need, and, according to the suit, its recommendations frequently fall short of what doctors order.
For example, Medicare patients are entitled to up to 100 days of follow-up care in a nursing home.
However, the lawsuit said plaintiffs were denied coverage after a 20-day stay and forced to pay for more care.
The lawsuit accuses UnitedHealthcare of directing patients to enroll in a government-subsidized Medicare program, shifting costs onto taxpayers.
UnitedHealthcare did not respond to a request for comment.
Source: https://www.politico.com/