The IRS/ID.me debacle: A teaching moment for tech



Last year, when the Internal Revenue Service (IRS) signed an $86 million contract with identity verification provider ID.me for biometric identity verification services, it was a monumental vote of confidence in the technology. Taxpayers could now verify their identities online using facial biometrics, a move intended to make the handling of federal tax affairs by American taxpayers more secure.

However, after strong opposition from privacy groups and bipartisan lawmakers who raised privacy concerns, the IRS in February took a radical turn and abandoned the plan. Critics took issue with the requirement that taxpayers submit their biometric data, in the form of a selfie, as part of the new identity verification program. Since then, both the IRS and ID.me have provided additional options: taxpayers can choose to use the ID.me service or authenticate their identity through a live virtual video interview with an agent. While this measure may appease those who raised concerns, including Sen. Jeff Merkley (D-OR), who proposed the IRS No Facial Recognition Act (S. Bill 3668) at the height of the debate, public misunderstanding of the IRS's arrangement with ID.me has marred public opinion of biometric authentication technology and raised larger questions for the cybersecurity industry as a whole.

Although the IRS has since agreed to continue offering ID.me's facial matching biometric technology as an identity verification method for taxpayers, with an opt-out option, confusion persists. The high-profile complaints against the IRS arrangement have, at least for now, unnecessarily weakened public confidence in biometric authentication technology and given fraudsters cause for relief. Still, there are lessons for both government agencies and technology providers to consider as the ID.me debacle fades in the rearview mirror.

Do not underestimate the political value of a controversy

This recent controversy highlights the need for better education and understanding of the nuances of biometric technology: the types of content that are potentially subject to facial recognition versus facial matching, the use cases, the privacy issues these technologies can raise, and the regulations needed to better protect the rights and interests of consumers.

For example, there is a wide gulf between using biometric data with the user's explicit informed consent for a single, defined purpose that benefits the user (such as identity verification and authentication that protect the user's identity from fraud) and extracting biometric data from every identity verification transaction without permission, or using it for unauthorized purposes such as surveillance or even marketing. Most consumers do not understand that their facial images on social media or other internet sites can be collected into biometric databases without their explicit consent. When platforms like Facebook or Instagram do expressly disclose such activity, the disclosure tends to be buried in a privacy policy, described in terms incomprehensible to the average user. As in the case of ID.me, companies implementing this “scraping” technology should be required to educate users and obtain explicit informed consent for the use case they are enabling.

In other cases, different biometric technologies that appear to perform the same function are not necessarily created equal. Benchmarks such as NIST's Face Recognition Vendor Test (FRVT) provide a rigorous evaluation of biometric matching technologies and a standardized means of comparing their accuracy and their ability to avoid problematic demographic performance biases across attributes such as skin tone, age, or gender. Biometric technology companies must be held accountable not only for the ethical use of biometrics, but also for equitable performance that works well for the entire population they serve.
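To make the demographic-bias point concrete, here is a minimal, illustrative sketch (not NIST FRVT itself) of the kind of per-group metric such benchmarks report: the false match rate, i.e., how often a matcher wrongly accepts two different people as the same person, computed separately for each demographic group. The trial data, group names, and threshold below are invented for illustration.

```python
# Hypothetical trials: (group, similarity_score, is_same_person).
# A trial where is_same_person is False is an "impostor" pair.
trials = [
    ("group_a", 0.91, True), ("group_a", 0.42, False), ("group_a", 0.55, False),
    ("group_b", 0.88, True), ("group_b", 0.61, False), ("group_b", 0.58, False),
]

THRESHOLD = 0.60  # scores at or above this count as a "match" (invented value)

def false_match_rate(trials, group):
    """Fraction of impostor pairs (different people) wrongly accepted."""
    impostors = [score for g, score, same in trials if g == group and not same]
    if not impostors:
        return 0.0
    return sum(score >= THRESHOLD for score in impostors) / len(impostors)

for g in ("group_a", "group_b"):
    print(g, false_match_rate(trials, g))
```

A matcher whose false match rate differs sharply between groups at the same threshold is exactly the kind of demographic performance bias the paragraph above describes.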

Politicians and privacy activists demand that biometric technology providers meet a high standard. And they should: the stakes are high and privacy matters. As such, these companies need to be transparent, clear and, perhaps most importantly, proactive in communicating the nuances of their technology to those audiences. A fiery, uninformed speech from a politician trying to win hearts during a campaign can undermine an otherwise coherent and focused consumer education effort. When Senator Ron Wyden, a member of the Senate Finance Committee, proclaimed, “No one should be forced to submit to facial recognition to access critical government services,” he mischaracterized facial matching as facial recognition, and the damage was done.

Perhaps Senator Wyden did not realize that millions of Americans undergo facial recognition every day when using critical services: at the airport, at government facilities, and in many workplaces. But by failing to engage with this misunderstanding from the start, ID.me and the IRS allowed the public to be openly misinformed and allowed the agency's use of facial matching technology to be painted as unusual and nefarious.

Honesty is a business imperative

Against an avalanche of third-party misinformation, ID.me's response was belated and convoluted, if not misleading. In January, CEO Blake Hall said in a statement that ID.me does not use 1:many facial recognition technology, in which one face is compared against others stored in a central repository. Less than a week later, in the latest in a series of inconsistencies, Hall backtracked, stating that ID.me does use 1:many matching, but only once, during enrollment. An ID.me engineer referenced this inconsistency in a prescient post on the company's Slack channel:

“We could disable 1:many-face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on the use of 1:many-face search. But it seems we can’t keep doing one thing and saying another, as that will lead us into hot water.”
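The distinction at the heart of this dispute is easy to show in code. Below is a hedged, simplified sketch of 1:1 facial matching (verification: does this selfie match the one identity the user claims?) versus 1:many facial recognition (identification: which person in a whole database does this selfie resemble?). The embedding vectors and threshold are invented; real systems derive them from a trained face-embedding model.

```python
import math

def similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.95  # invented acceptance threshold

def verify_1_to_1(selfie, claimed_id_photo):
    """Verification (facial matching): compare against ONE claimed identity."""
    return similarity(selfie, claimed_id_photo) >= THRESHOLD

def identify_1_to_many(selfie, gallery):
    """Identification (facial recognition): search a WHOLE database of faces."""
    best_id, best_score = None, -1.0
    for person_id, embedding in gallery.items():
        score = similarity(selfie, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None
```

The privacy implications differ accordingly: 1:1 matching needs only the user's own enrolled photo, while 1:many search requires maintaining and querying a central gallery of many people's faces, which is what drew the scrutiny described above.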

Transparent and consistent communication with the public and key influencers, through print, digital, and other creative channels, will help counter misinformation and establish that facial biometric technology, when used with explicit informed consent, protects consumers and is safer than legacy alternatives.

Prepare for regulation

Rampant cybercrime has prompted more aggressive state and federal legislation, and legislators now find themselves at the center of the tug-of-war between privacy and security, forced to act from there. Agency heads may claim their legislative efforts are driven by a commitment to voter safety and privacy, but Congress and the White House must decide which sweeping regulations will protect all Americans from the current cyberthreat landscape.

There is no shortage of regulatory precedent to draw on. The California Consumer Privacy Act (CCPA) and its older European cousin, the General Data Protection Regulation (GDPR), model how to ensure that users understand the types of data organizations collect from them, how that data is used, how to monitor and manage it, and how to opt out of data collection. To date, officials in Washington have left the data protection infrastructure to the states. The Biometric Information Privacy Act (BIPA) in Illinois, along with similar laws in Texas and Washington, regulates the collection and use of biometric data. These rules stipulate that organizations must obtain consent before collecting or disclosing a person's likeness or biometric data. They must also store biometric data securely and destroy it in a timely manner. BIPA imposes fines for violations of these rules.

If lawmakers were to write and pass legislation that combined the principles of the CCPA and GDPR regulations with the specific biometric rules outlined in BIPA, greater credibility could be established around the security and convenience of biometric authentication technology.

The future of biometrics

Biometric authentication providers, and the government agencies that acquire their technology, must be good stewards of what they offer, most importantly when it comes to educating the public. Some hide behind the apparent fear of giving cybercriminals too much information about how the technology works. But the fortunes of these companies rest on the success of each implementation, and wherever there is a lack of communication and transparency, opportunistic critics will be eager to publicly misrepresent facial biometric matching technology to advance their own agendas.

While several lawmakers have portrayed biometrics and facial recognition companies as bad actors, they have missed an opportunity to weed out the real criminals: cybercriminals and identity thieves.

Tom Thimot is CEO of authID.ai.
