The FBI, DoD and the shock of (facial) recognition
With help from Mohar Chatterjee and Derek Robertson
In Beijing, some street corners have dozens of cameras to track citizens. Moscow has facial recognition payment systems at metro stations. London police use that software to “tackle serious crime.”
For most of the world, it’s clear: Facial recognition technology is here, it’s being used and it doesn’t matter that some nations – including our very own – scoff at the tech as an invasion of privacy.
What’s perhaps more surprising, though, is that some of that scoffing might be … a little rich, if not downright hypocritical. A trove of documents obtained by The Washington Post uncovered government efforts to do exactly what the U.S. said it wouldn’t: spy on citizens. For years, the FBI and Defense Department were actively involved in research and development of facial recognition tech, hoping it could be used in public places like subway stations and street corners to identify or track citizens without their consent.
And now lawmakers are taking note, and putting the technology squarely in their sights. While it’s hard to predict what Congress will do on any given topic, let alone cutting-edge tech, it’s the kind of bombshell news that could make waves in the policymaking realm for years to come.
Currently, there are no federal laws saying you can’t track citizens using facial recognition software. But several lawmakers — including Democratic Sen. Ed Markey of Massachusetts — want to change that. He’s planning to renew a push for legislation that would restrict the use of such software by federal agencies, a measure first introduced three years ago.
“We cannot allow the federal government to weave a web of surveillance that invades Americans’ privacy with facial recognition and biometric technology, treating every one of us like suspects in an unbridled investigation,” Markey told Digital Future Daily in a statement.
Co-sponsors include a roll call of liberal icons, like Sens. Bernie Sanders (I-Vt.) and Elizabeth Warren (D-Mass.), along with House Democratic stalwarts including Reps. Pramila Jayapal (D-Wash.), Earl Blumenauer (D-Ore.) and Barbara Lee (D-Calif.), to name a few. But, so far, no Republicans have signed onto the bill.
The Facial Recognition and Biometric Technology Moratorium Act specifically targets facial recognition and other technologies as they pose “significant privacy and civil liberties issues and disproportionately harm marginalized communities,” the lawmakers wrote, citing police misconduct reports.
Longtime advocates of facial recognition limits agree it’s time to take action.
Rep. Yvette Clarke (D-N.Y.) called reports of the government’s facial recognition efforts “deeply disturbing,” particularly as people of color and women have often been misidentified by the software in the past. She co-sponsored the Facial Recognition Act of 2022, which sought to limit or prohibit police from using facial recognition software.
“Without a comprehensive regulatory network in place, trusting this technology with law enforcement where the risk for misidentification can be extremely harmful is concerning,” Clarke told Digital Future Daily in a statement. “Our communities must not be subjugated to live under this often-incorrect microscope.”
It’s early days in the debate. And, as we noted above, it’s always a guessing game trying to triangulate what Capitol Hill will end up doing. But Democrats and Republicans have come together in the past to blast facial recognition technology. So, it seems possible that, as details emerge about the United States’ growing interest in developing it, Congress finally has a shot at enacting some guidelines.
Meanwhile, outside experts have been sounding the regulatory alarm for years.
Key guardrails are needed to make sure facial recognition software is used appropriately, as the Center for Strategic and International Studies’ James Andrew Lewis and William Crumpler outlined in a report. Those must address concerns about autonomous technology running amok, transparency with the public on how the software is used and proper oversight, among several other topics.
While drawing a line at using it on the public, federal officials have long argued that facial recognition is critical to combating terrorism and crime. There’s also the potential for it to spread in the private sector, where researchers are developing surveillance that can deduce who you’re friends with based on facial recognition.
Grassroots movements to enact limits on facial recognition use, like the ACLU’s lawsuit to obtain the documents, might take a while to produce results, but they’re “more likely to lead to policy outcomes in the long run that balance the interests of those across society” than initial government policy, the Center for a New American Security’s Paul Scharre writes in his new book Four Battlegrounds.
“Good governance is not always quick governance,” he writes.
Salesforce unveiled EinsteinGPT today — an AI product developed in collaboration with OpenAI, aimed at helping corporate salespeople talk to their existing customers and potentially find new ones (so-called "customer relationship management"). A group of tech reporters (your correspondent included) saw EinsteinGPT in action at a live demo Monday morning.
EinsteinGPT is able to source information about a company (including key contacts) from the internet or from a company’s own databases and generate text for emails and Slack messages. At each stage, Salesforce’s developers emphasized that a human needs to sign off on any action that EinsteinGPT took — like sending a customer outreach email.
I spoke to Dr. Brendan Keegan, a lecturer at Ireland’s Maynooth University who researches AI applications in marketing, about how Salesforce’s generative AI debut will change how businesses talk to each other about business things they’re doing (the proverbial B2B space).
In short: It’s not a slam dunk.
“B2B is about relationships,” said Dr. Keegan. Corporate buyers and suppliers have long-term contracts, and many of those relationships are built on human interaction, Dr. Keegan said. He referenced Hank Hill, the propane salesman from “King of the Hill,” to explain why a sales rep with decades of experience and expertise selling a particular product might find generative AI a difficult pill to swallow.
Being told that a new system is “basically going to do your job for you — it's going to communicate with customers, it will identify leads, it will target them, it will keep track of all the interactions” can cause a “disenchantment” with the technology among intended users, said Dr. Keegan.
And there’s the data privacy specter. “Trust is power,” said Dr. Keegan. Salesforce needs their own clients to trust the technology behind EinsteinGPT before they’re willing to use it.
The Salesforce developers at TrailblazerDX seemed aware of this particular hurdle, referencing a certain arachnid-empowered superhero from Queens multiple times during their EinsteinGPT demo: “With great power comes great responsibility,” said Clara Shih, the CEO of Salesforce Service Cloud. While EinsteinGPT is connected to Salesforce’s data cloud, Salesforce developers emphasized that their cloud architecture would allow their customers to mark their data as “private.” — Mohar Chatterjee
- China’s black market for AI-powered chatbots is causing some unexpected tension.
- FTX’s implosion revealed an “insurance black hole” in the crypto market.
- The auto market’s thirst for EV batteries is moving its geographical center south.
- Hey, why not: Let ChatGPT help write your wedding vows.
- How much better could GPT-4 really be than its predecessor?
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Source: https://www.politico.com/