Just last week, researchers at Robust Intelligence were able to manipulate NVIDIA’s artificial intelligence software, the “NeMo Framework,” into ignoring its safety restraints and revealing private information. According to reports, it took the Robust Intelligence researchers only hours to get the NeMo Framework to release personally identifiable information from a database. Since these vulnerabilities were exposed, NVIDIA has stated that it fixed one of the root causes of the flaw. The issues with NVIDIA’s AI software follow a significant data security incident involving ChatGPT in March 2023. After that incident, OpenAI published a report explaining that a bug had caused ChatGPT to expose users’ chat queries and personal information. Like NVIDIA, OpenAI patched the bug once it was discovered. Relatedly, in April 2023, OpenAI announced a “bug bounty” program inviting “ethical hackers” to report vulnerabilities, bugs, or security flaws discovered in its systems.
These data security incidents occur against the backdrop of what has been termed an “AI Arms Race.” This “race” kicked off at the beginning of the year, when the popularity of ChatGPT spurred an influx of consumer-facing artificial intelligence products. Popular examples of these emerging AI products include Microsoft’s Bing chatbot and Google’s Bard chatbot. The urgency to release these products to the public is exemplified by Microsoft CEO Satya Nadella, who declared in February 2023 that “[a] race starts today. . . . We’re going to move, and move fast.” In light of the race to get artificial intelligence products on the market, one could argue that technology companies have opted for a “release first, fix later” approach to data security for these products.
Of course, bugs are inevitable in complex software. In fact, “bug bounty” programs are not new: many prominent technology companies run them, and the FTC often requires that entities establish similar programs for fixing known security vulnerabilities. But are these programs sufficient to protect AI companies from liability for data security incidents? The ease with which hackers can exploit vulnerabilities in these artificial intelligence products, and the speed at which the products hit the market, raise the question of whether the NVIDIA and OpenAI data leaks were preventable at the outset. Indeed, AI developers know that today’s AI products pose consumer privacy risks. This reality is underscored by testimony from Gary Marcus, a leader in the field, who stated that existing AI systems do not adequately protect our privacy. Given how widespread and widely acknowledged these security shortcomings are, AI companies could potentially be held liable for the vulnerabilities in their products, regardless of how they respond after those vulnerabilities are exploited.
In the past, companies have faced enforcement based at least in part on their failure to take reasonable measures to prevent security vulnerabilities. Enforcement on these grounds is often brought by the FTC. For example, in a 2023 complaint against Ring, the FTC admonished Ring’s “lax attitude toward privacy and security.” And in a 2013 enforcement action, the FTC alleged that HTC America “introduced numerous security vulnerabilities … which, if exploited, provide third-party applications with unauthorized access to sensitive information and sensitive device functionality.”
Enforcement actions brought on similar grounds have already been mentioned in connection with today’s artificial intelligence products. FTC Commissioner Alvaro M. Bedoya warned in his April 2023 “Early Thoughts on Generative AI” remarks before the International Association of Privacy Professionals that “[the FTC has] frequently brought actions against companies for the failure to take reasonable measures to prevent reasonably foreseeable risks.” This warning echoes the FTC’s urging, in a blog post about deceptive AI, that AI companies “take all reasonable precautions before [the generative AI product] hits the market.” While not directly addressing data privacy concerns, the blog post further provides that “deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.” Commissioner Bedoya’s May 31, 2023 statement on the FTC’s recent settlement with Amazon emphasizes that “[m]achine learning is no excuse to break the law,” sending an unequivocal message to technology companies. The FTC’s public statements about how AI companies should safeguard consumers from the potential harms of AI products may be a harbinger of privacy enforcement actions to come.
While there has yet to be a privacy enforcement action against a technology company over its AI product, it is likely only a matter of time. Indeed, in March, the Center for AI and Digital Policy filed a complaint with the FTC asking it to investigate OpenAI, asserting that its GPT-4 product poses privacy and public safety risks. Amid calls for regulation from industry leaders, the data security practices (or lack thereof) of the developers of these technologies will only face greater scrutiny in the future.