The New Jersey attorney general made headlines on January 24, 2020, when he directed prosecutors to immediately stop using a facial recognition app produced by Clearview AI (https://clearview.ai/). Clearview AI markets its app as a tool to help stop criminals. The Clearview AI website states: “Clearview helps to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.” While the company purports to make the app available only to law enforcement agencies and select security professionals as an investigative tool, and asserts that its results contain only public information, the NJ attorney general cited concerns. Law360 reported that the AG’s office stated: “[W]e need to have a sound understanding of the practices of any company whose technology we use, as well as any privacy issues associated with their technology.”
Clearview AI’s database appears to be broader than those of many of its competitors. While most facial recognition programs allow law enforcement to compare images of suspects against databases composed of mug shots, driver’s license photographs, and other government-issued or government-owned photos (and are usually confined to the state in which they operate), Clearview’s data appears to be national in scope and to include images from social media sites such as Facebook, Twitter, Venmo, and YouTube, as well as from elsewhere on the Internet. All told, the database contains more than three billion photos. And it is used by more than 600 law enforcement agencies, ranging from local police departments to the F.B.I. and the Department of Homeland Security.
Clearview’s broader set of data raises a number of questions. On the privacy front are questions as to whether the data was obtained legally. For example, we recently discussed the Facebook settlement of the BIPA class action. If Facebook biometric information was illegally obtained/used (i.e., without proper written consent under BIPA), then are all uses of that same illegally obtained information also violations of BIPA? Or is the recovery for the BIPA violation exhausted by virtue of the recent $550 million Facebook settlement? It appears, based on the plain language of the statute, that there would be no exhaustion here, i.e., that each unlawful collection of biometric identifiers/information would need written consent. Of course, it also appears that Clearview may be exempt under BIPA. See BIPA, Sec. 25(e) (“Nothing in this Act shall be construed to apply to a contractor, subcontractor, or agent of a State agency or local unit of government when working for that State or local unit of government.”).
Another possible Clearview privacy issue concerns data scraping. While it is unknown exactly how Clearview obtained its social media data, it appears that at least some of it was obtained via web scraping. For example, Twitter sent Clearview a cease-and-desist letter on January 21, 2020, demanding that Clearview stop collecting images from Twitter and delete any data it previously collected. In the press, as support for its cease-and-desist letter, Twitter has pointed to its terms of service, which state that “scraping the services without the prior consent of Twitter is expressly prohibited.”
Of course, it is unclear whether Twitter’s terms of service will hold water as a legal basis for stopping Clearview’s web scraping. For example, in a decision issued last fall, the Ninth Circuit in hiQ v. LinkedIn ruled in favor of a data scraper (albeit on different grounds).
On the flip side of the possible privacy violations – and the desire of many (e.g., the NJ attorney general, BIPA plaintiffs, etc.) to know how Clearview is getting the data it has amassed and whether users have consented – are Clearview AI’s intellectual property rights. Is its data collection proprietary? Do its software and data collection processes contain trade secrets? And if so, who has a right to know what Clearview is doing, and how far does that right extend?