The European Commission recently presented strategies for data and Artificial Intelligence (AI) focused on promoting excellence in AI and building trust.  The Commission’s White Paper, “On Artificial Intelligence – A European approach to excellence and trust,” addresses the balance between promoting AI and regulating its risks.  “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.”  In addition to the benefits AI affords to individuals, the White Paper notes the significant roles AI will play at a societal level, including in achieving the Sustainable Development Goals, supporting the democratic process, and upholding social rights.  “Over the past three years, EU funding for research and innovation for AI has risen to €1.5 billion, i.e. a 70% increase compared to the previous period” (n.b. as compared to €12.1 billion in North America and €6.5 billion in Asia in 2016).  The White Paper also addresses recent advances in quantum computing and how Europe can be at the forefront of this technology.

“An Ecosystem of Excellence.”  The White Paper outlines several areas for building an ecosystem of excellence, including (a) working with member states, including a revision of the 2018 Coordinated Plan to foster the development and use of AI in Europe, to be adopted by the end of 2020; (b) focusing the efforts of the research and innovation community through the creation of excellence and testing centers; (c) building skills, including establishing and supporting, through the advanced skills pillar of the Digital Europe Programme, networks of leading universities and higher education institutes to attract the best professors and scientists and offer world-leading master’s programs in AI; (d) focusing on SMEs, including ensuring that at least one digital innovation hub per member state has a high degree of specialization in AI and a pilot scheme of €100 million to provide equity financing for innovative developments in AI; (e) partnering with the private sector; (f) promoting the adoption of AI by the public sector; (g) securing access to data and computing infrastructures; and (h) cooperating with international players.

“An Ecosystem of Trust: Regulatory Framework for AI.”  “The main risks related to the use of AI concern the application of rules designed to protect fundamental rights (including personal data and privacy protection and non-discrimination), as well as safety and liability-related issues.”  The White Paper notes that developers and deployers of AI are already subject to European legislation on fundamental rights (data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules, but that certain features of AI may make the application and enforcement of such legislation more difficult.  With respect to a new regulatory framework, the White Paper proposes a “risk-based approach,” and sets forth two cumulative criteria to determine whether an AI application is “high-risk”: (a) the AI application is employed in a sector where significant risks can be expected to occur (e.g., healthcare, transport, energy and parts of the public sector); and (b) the AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise (e.g., uses that produce legal or similarly significant effects for the rights of an individual or a company, that pose risk of injury, death, or significant damage, or that produce effects that cannot reasonably be avoided by individuals or legal entities).

Does Europe’s risk-based approach adequately balance regulation against AI innovation?  It is a difficult question that jurisdictions around the world have been grappling with.  The concept of a risk-based approach, however, should sound familiar to the private sector, particularly to large enterprises that have already embedded AI and other emerging technologies into their internal risk management frameworks.  For smaller scale experiments, perhaps there is room for more regulatory flexibility to encourage innovation in AI.  For example, Arizona was early to welcome autonomous vehicle testing on its roads in 2015, and is experimenting with a “regulatory sandbox” for FinTech.

Last week, 23andMe, the direct-to-consumer genetic testing service, announced a strategic license agreement with Almirall, a leading global pharmaceutical company, for the rights to a bispecific monoclonal antibody designed to block IL-36 cytokines implicated in autoimmune and inflammatory diseases. The antibody was discovered by 23andMe’s Therapeutics team using genetic information from more than 8 million personal testing kits from customers who consented to the use of their data for genetic research. While the financial terms of this particular license agreement have not been disclosed, 23andMe’s $300M deal with GlaxoSmithKline in 2018 for a four-year drug research and development partnership gives us some idea of the potential profits from such arrangements.

Putting aside the obvious health and research benefits of such a massive data pool, the use of millions of consumer saliva samples naturally raises the ethical question of who should profit from what is arguably your most intimate and valuable data – your DNA. Despite the lucrative partnerships with big pharma, consumers who provide their genetic blueprint to testing services aren’t getting a piece of the pie. A review of the Terms of Service for 23andMe, for instance, along with those of other industry giants like Ancestry.com and FamilyTreeDNA, reveals that consenting users waive all potential claims to a share of the profits arising from the research or commercial products that may be developed.[1]

A larger issue, however, relates to the privacy and data security concerns surrounding these large databases of genetic data. In addition to finding out that you have a fourth cousin living just down the road or that your Italian grandmother is anything but, your DNA may also be used to identify you specifically – more reliably than your name or even your SSN. And depending on who has access to these databases – be it big pharma, insurance companies, hackers, law enforcement, or otherwise – even if the data is aggregated or de-identified, because your genetic makeup is itself a direct personal identifier, there is always the potential for your identity and health data to be compromised or used against you. For instance, California’s notorious Golden State Killer was caught and arrested in 2018 after investigators matched a DNA sample from the crime scene to the results of a genetic testing kit uploaded to a public genealogy website by a relative of his. Notably, the Contra Costa County District Attorney’s office was able to obtain that genetic match without any subpoena, warrant, or similar criminal legal process, which raises additional privacy and security concerns. And last December, the Pentagon issued a memo advising members of the military not to use consumer DNA kits, citing security risks.

Consumers seem to be getting wind of these privacy concerns, as evidenced by declining sales of genetic testing kits, with genealogy service providers, including 23andMe, struggling to stay afloat.

[1] This arrangement tends to evoke an ethical inquiry similar to that surrounding Henrietta Lacks, who received no compensation for the cells taken from her body (without her knowledge or consent) and which have been used for decades in medical research and in the development of countless vaccines and drug formulations.

The number of actions to enforce the European Union’s General Data Protection Regulation (GDPR) against a wide range of companies continues to rise.  Germany, a country where privacy enjoys strong legal protection, is establishing itself as a favorite jurisdiction for enforcement of the GDPR.  And, not surprisingly, Facebook is one of the companies in the crosshairs.

Last February, Germany’s Federal Cartel Office held that Facebook’s practice of combining data it collects across its suite of products, which includes WhatsApp and Instagram, is anticompetitive, and ordered Facebook to stop.  That ruling was later overturned on appeal.  Last month (Jan. 2020), a state court in Berlin, assessing Facebook’s terms of service, determined that Facebook had violated the GDPR’s requirement that “informed consent” be given by a data subject before his or her personal information is collected.

Interestingly, this latest action against Facebook was brought by a consumer group, the Federation of German Consumer Organizations.  While this regional interpretation of the GDPR’s informed-consent provisions is itself noteworthy, the real impact may be reliance on this decision by consumer groups and other organizations to establish standing to seek legal enforcement of the GDPR without the involvement of an injured or affected data subject.

We will see how this state court ruling fares on review in Germany, and how, ultimately, other jurisdictions in the EU come out on this important issue of standing.  The standing provision of the CCPA has already been a challenge and the subject of debate.  As more states in the U.S. craft and pass privacy legislation, we can expect much debate and, most likely, litigation around this important issue.

The California Attorney General recently released modified CCPA guidance.  While the modified guidance offers additional examples for CCPA compliance and clarifies certain obligations, several open issues and ambiguities remain.  Below are highlights of the changes; note that written comments are due by February 25, 2020.

Definitions: The modified guidance specifies the definition of “household” to include a person or group of people who: (1) reside at the same address, (2) share a common device or the same service provided by a business, and (3) are identified by the business as sharing the same group account or unique identifier.

Interpretation of CCPA Definition of Personal Information: The modified guidance explains that the definition of personal information “depends on whether the business maintains information in a manner that ‘identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.’”  As an example, “if a business collects the IP addresses of visitors to its website but does not link the IP address to any particular consumer or household, and could not reasonably link the IP address with a particular consumer or household, then the IP address would not be ‘personal information.’”
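As a purely illustrative (and hypothetical) sketch of the IP-address example above, a business might coarsen visitor IP addresses before storing them so that they cannot reasonably be linked back to a particular consumer or household.  The helper below assumes only Python’s standard ipaddress module and simply zeroes out the host portion of each address; it is not a safe harbor or an endorsed compliance technique.

    import ipaddress

    def truncate_ip(raw_ip: str) -> str:
        """Coarsen an IP address before storage by zeroing the host portion,
        so the stored value cannot reasonably be linked to one visitor."""
        ip = ipaddress.ip_address(raw_ip)
        if ip.version == 4:
            # Keep only the /24 network (drop the last octet).
            return str(ipaddress.ip_network(f"{raw_ip}/24", strict=False).network_address)
        # For IPv6, keep only the /48 prefix.
        return str(ipaddress.ip_network(f"{raw_ip}/48", strict=False).network_address)

    print(truncate_ip("203.0.113.42"))  # -> 203.0.113.0
    print(truncate_ip("2001:db8::1"))   # -> 2001:db8::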

Notice to Consumers: The guidance helpfully summarizes four scenarios where notice is required.

(1) Every business that must comply with the CCPA shall provide a privacy policy;

(2) A business that collects personal information from a consumer shall provide a notice at collection;

  • When collecting PI from a mobile device for a purpose that the customer “would not reasonably expect,” the business must provide a “just-in-time” notice explaining the categories of personal information being collected and a link to the full notice.

(3) A business that sells personal information shall provide a notice of right to opt-out; and

  • The guidance provides that an opt-out button may be used in addition to posting the notice of right to opt-out, and when it is used it shall appear to the left of the “Do Not Sell My Personal Information” or “Do Not Sell My Info” link and should be approximately the same size as other buttons on the business’s webpage.

(4) A business that offers a financial incentive or price or service difference shall provide a notice of financial incentive.

All notices must be reasonably accessible to consumers with disabilities.

Consumer Requests:  The modified guidance provides that businesses may – rather than “shall” – use a two-step process for online requests to delete.  In addition to government-issued identification numbers, financial account numbers, health insurance or medical identification numbers, account passwords, and security questions and answers, a business shall not at any time disclose, in response to a request to know, “unique biometric data generated from measurements or technical analysis of human characteristics.”

Service Providers:  A service provider shall not retain, use, or disclose personal information obtained in the course of providing services except: (1) to perform the services specified in the written contract with the business that provided the personal information; (2) to retain and employ another service provider as a subcontractor; (3) for internal use by the service provider to build or improve the quality of its services, provided that the use does not include building or modifying household or consumer profiles, or cleaning or augmenting data acquired from another source; (4) to detect data security incidents or protect against fraudulent or illegal activity; or (5) for purposes enumerated in Civil Code section 1798.145(a)(1)-(a)(4).  If the service provider receives a request to know or a request to delete from a consumer, the service provider shall either act on behalf of the business in responding to the request or inform the consumer that the request cannot be acted upon because it has been sent to a service provider.

Requests to Opt-Out:  The modified guidance provides that the methods for submitting requests to opt-out shall be easy for consumers to execute, that a user-enabled privacy control shall clearly communicate or signal that the consumer intends to opt out of the sale of personal information, and that a business must respect the privacy control but may notify the consumer if there is a conflict between the privacy control setting and a business-specific privacy setting or participation in a financial incentive program.
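To make the privacy-control point concrete, here is a minimal, hypothetical sketch (in Python, using Flask) of a web backend that treats a user-enabled browser signal as a request to opt out of sale.  The DNT header is used only as a stand-in for whatever control a business actually supports, and the route, cookie name, and in-memory registry are invented for illustration.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Hypothetical preference store; a real business would persist this durably.
    opt_out_registry = set()

    @app.before_request
    def honor_browser_privacy_control():
        """Treat a user-enabled browser privacy signal as an opt-out of sale."""
        user_id = request.cookies.get("user_id")
        if user_id and request.headers.get("DNT") == "1":
            if user_id not in opt_out_registry:
                opt_out_registry.add(user_id)
                # If this conflicts with a business-specific setting or a
                # financial incentive program the user joined, the business
                # may notify the consumer of the conflict at this point.

    @app.route("/privacy/status")
    def status():
        user_id = request.cookies.get("user_id")
        return jsonify({"opted_out_of_sale": user_id in opt_out_registry})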

Requests to Access or Delete Household Information:  The modified guidance clarifies the conditions under which a business must honor a request to access or delete household information, including that the business must individually verify all members of the household and verify that each member making the request is currently a member of the household.

Verification:  A business cannot require the consumer to pay a fee for verification of a request to know or delete (e.g., by requiring a notarized affidavit at the consumer’s expense).

Discriminatory Practices: If the business is unable to calculate a good-faith estimate of the value of the consumer’s data or cannot show that the financial incentive or price or service difference is reasonably related to the value of the consumer’s data, that business shall not offer the financial incentive or price or service difference.

See any issues?  Get your comments in by February 25!

 

Another BIPA class action was filed this week – this time against Google.  Again.  Google has been sued under BIPA before, and for seemingly the same violations as here, i.e., creating “face prints” from photos stored in Google Photos without having obtained prior, informed written consent.  The Complaint filed this week alleges: “Google created, collected, and stored, in conjunction with its cloud-based ‘Google Photos’ service, millions of ‘face templates’ (or ‘face prints’) – highly detailed geometric maps of the face – from millions of Google Photos users.  Google creates the templates using sophisticated facial recognition technology that extracts and analyzes data from the points and contours of the faces that appear in photos taken on Google Android devices and uploaded to the cloud-based Google Photos service.  Each face template that Google extracts is unique to a particular individual, in the same way that a fingerprint or voiceprint uniquely identifies one and only one person.”  Like those that came before it, this BIPA case turns on Google’s alleged failure to obtain informed written consent prior to creating, collecting, and storing the “face prints.”  “The Google Photos app, which comes pre-installed on all Google Android devices, is [set by] default to automatically upload all photos taken by the Android device user to the cloud-based Google Photos service.”  Google uses the “face prints” it creates to locate and group together photos for organizational purposes.
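For readers curious what a “face template” looks like in practice, the sketch below is a rough, hypothetical illustration (it is not Google’s actual pipeline, and the file names are invented) of how a numeric face encoding can be derived from one photo and matched against faces in later uploads, using the open-source face_recognition library.  It is that reusability across photos that makes such a template function like a biometric identifier.

    import face_recognition

    # Derive a "template" (a 128-dimensional encoding) from a stored photo.
    image = face_recognition.load_image_file("uploaded_photo.jpg")
    encodings = face_recognition.face_encodings(image)  # one encoding per detected face

    if encodings:
        template = encodings[0]
        # The same template can later be matched against faces in new uploads.
        new_image = face_recognition.load_image_file("new_upload.jpg")
        for candidate in face_recognition.face_encodings(new_image):
            match = face_recognition.compare_faces([template], candidate)[0]
            print("Same person detected:", bool(match))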

Time will tell, but it’s possible that this case could trump the recent $550 million Facebook BIPA settlement as the biggest BIPA settlement of all time.

Christopher A. Ott, CIPP/US, former Supervisory Counterintelligence and Cyber Counsel to the National Security Division of the U.S. Department of Justice (DOJ), and most recently a partner in private practice at Davis Wright Tremaine LLP, where he advised clients on litigation and business strategy involving data and privacy issues, has joined Rothwell Figg’s Privacy, Data Protection, and Cybersecurity practice as a partner in Washington, D.C.

At the DOJ for more than 13 years, Mr. Ott won more than 30 jury trials, conducted hundreds of sensitive investigations, including leading some of the largest white-collar investigations in DOJ history, and won dozens of oral arguments in federal appellate and trial courts throughout the U.S. In particular, he handled the hack of Yahoo by Russian intelligence operatives, the largest data breach in history, investigated and charged the largest known computer hacking and securities fraud scheme, and led the longest white-collar trial in the history of the Eastern District of New York, which ended with convictions on all counts. In his most recent role as Supervisory Counterintelligence and Cyber Counsel, Mr. Ott acted as lead counsel on multi-district and international cyber investigations involving state actors. He also served as Assistant U.S. Attorney in the Southern District of California, Assistant U.S. Attorney in the Eastern District of New York, and Senior Trial Counsel in the Business and Securities Fraud Unit in the Eastern District of New York.

Mr. Ott entered private practice a few years ago, bringing with him his investigative skills and unique background in successfully litigating complex data security matters. He handles business disputes related to data security, privacy, blockchain, and AI issues. In private practice, Chris has successfully handled civil litigation and appeals, which has further bolstered his ability to handle complex data security and privacy litigation for technology and media clients.

“Chris’ impressive litigation experience in data and privacy perfectly bolsters the advising strength of our current privacy, data protection, and cybersecurity practice,” stated E. Anthony Figg, co-founder and member of the firm. “Combined, we are able to confidently offer clients the full package, from designing and implementing policies, best practices, compliance programs, and incident response plans, to litigation and enforcement, in the unfortunate circumstance that it arises.”

“Experience with litigation in privacy and data is rare, and not only does Chris have the experience, but he has years of successful – and notable – litigation and investigations under his belt. Adding Chris and his knowledge from his government work to our practice is an incredible asset to our clients,” said Steven Lieberman, a shareholder in the firm’s practice.

“I am very excited to join the attorneys at Rothwell Figg, where they live and breathe technology and media litigation. I know that the team of eminent attorneys here will help me to bring an active, problem-solving mindset to data protection and privacy matters where clients could previously only find compliance and counselling advice,” stated Mr. Ott.

To learn more about Mr. Ott, his background, and his practice, please visit: https://www.rothwellfigg.com/professionals/ottc.

Earlier this week we wrote about the NJ AG’s ban of Clearview AI’s facial recognition app, which is marketed to law enforcement agencies as a tool to help catch criminals.  We hypothesized about a BIPA suit against Clearview AI and whether, for example, Facebook’s settlement of a BIPA class action would exhaust remedies, at least with respect to biometric information that Clearview scraped from Facebook.  We concluded that there would likely not be exhaustion, based on the plain language of the statute (i.e., each unlawful collection of biometric information/identifiers would require written consent), but we raised the issue of whether Clearview could argue that it is exempt under BIPA because Section 25(e) provides: “Nothing in this Act shall be construed to apply to a contractor, subcontractor, or agent of a State agency or local unit of government when working for that State or local unit of government.”

We may not need to hypothesize any longer as to whether such claims will be filed.  Two days after our blog post, Anthony Hall filed a class action lawsuit against Clearview, alleging violation of BIPA.  The Complaint also claims violation of the Illinois Consumer Fraud and Deceptive Practices Act (“ICFA”) and civil conversion under Illinois common law.

We look forward to tracking this suit and seeing how the BIPA action plays out.  This lawsuit is also consistent with our prediction, and that of others, that BIPA lawsuits will be on the rise in 2020, following Facebook’s $550 million BIPA settlement.

“Reasonable” appears several times in the California Consumer Privacy Act (CCPA), and most notably in the section on the private right of action for a data breach resulting from “a business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information.”

But what is “reasonable”?  While the term is not defined in the CCPA, there are a few benchmarks to follow.  In 2016, the California Office of the Attorney General issued a Data Breach Report that lists safeguards the former Attorney General viewed as constituting reasonable security practices, including the set of twenty data security controls published by the Center for Internet Security, multi-factor authentication, and encryption of data in transit.  Compliance with an information security framework, such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework or the International Organization for Standardization (ISO) 27001 series, may also support a finding of reasonable security.  Guidance may also be taken from enforcement actions in other jurisdictions.  For example, the Federal Trade Commission, which enforces Section 5 of the FTC Act against unfair or deceptive acts or practices, routinely publishes resources and guidance on practical security measures an organization can take.  Additionally, the 2019 amendments to New York’s breach notification law offer certain benchmarks for defining reasonable security requirements.

In a data breach lawsuit, the question of “reasonableness” will likely play out among plaintiffs’ and defendants’ experts before a jury.  Accordingly, it is important to have appropriate (and justifiable) data security practices in place that are routinely updated and monitored for compliance, as well as an incident response plan, particularly in view of the private right of action for a data breach under the CCPA.  At the very least, keeping personal information encrypted or redacted whenever possible is a great first step toward avoiding a civil suit under the CCPA, because the private right of action applies only to breaches of nonencrypted and nonredacted personal information.
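As a minimal, hypothetical sketch of what field-level encryption of stored personal information can look like in practice (assuming the widely used Python cryptography package, and with the caveat that key management, not shown here, is where real deployments succeed or fail), consider the following.  It illustrates the kind of safeguard discussed above; it is not a compliance checklist.

    from cryptography.fernet import Fernet

    # In practice the key would live in a key-management service, not in code.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = {"name": "Jane Doe", "email": "jane@example.com"}

    # Encrypt personal fields before they are written to storage.
    encrypted = {field: cipher.encrypt(value.encode()) for field, value in record.items()}

    # Decrypt only when an authorized process needs the plaintext.
    decrypted = {field: cipher.decrypt(token).decode() for field, token in encrypted.items()}
    assert decrypted == record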

On February 3, 2020, Bernadette Barnes, a California resident (on behalf of herself and others similarly situated), filed the first data breach suit ever to cite the CCPA.  The suit named Hanna Andersson (a company specializing in high-end children’s apparel) and Salesforce.com (a cloud-based software-as-a-service (“SaaS”) company) as defendants, and asserted claims for negligence, declaratory relief, and violation of the California Unfair Competition Law in connection with the widespread data breach that Hanna Andersson disclosed to customers and state Attorneys General on January 15, 2020, in which hackers gained access (via Salesforce’s Commerce Cloud platform) to customers’ personal information, scraped data including names, addresses, payment card numbers, CVV codes, and expiration dates, and then made the information available for sale on the dark web.

The citations to the CCPA were in Plaintiff’s negligence and state unfair competition law claims.

The negligence claim cited both the CCPA and Section 5 of the FTC Act to establish Defendants’ duty of care.  Specifically, the CCPA requires that companies take reasonable steps and employ reasonable methods of safeguarding personally identifiable information (Cal. Civ. Code Sec. 1798.81.5), and Section 5 of the FTC Act prohibits “unfair…practices in or affecting commerce” (15 U.S.C. Sec. 45(a)), which the FTC has interpreted to include the failure to use reasonable measures to protect personally identifiable information.

The state unfair competition claim alleged violation of Cal. Bus. & Prof. Code Sec. 17200 by engaging in unlawful acts and practices under the CCPA, specifically (1) “by establishing the sub-standard security practices and procedures described herein; by soliciting and collecting Plaintiffs’ and California Class members’ PII with knowledge that the information would not be adequately protected; and by storing Plaintiffs’ and California Class members’ PII in an unsecure electronic environment in violation of Cal. Civ. Code Sec. 1798.81.5”; and (2) by failing to disclose the data breach to California class members in a timely and accurate manner, contrary to the duties imposed by Cal. Civ. Code 1798.82.

The final paragraph of the complaint contained a reservation of plaintiffs’ rights to amend the Complaint to seek damages and relief under Cal. Civ. Code Sec. 1798.150 (which provides California residents with the right to seek up to $750 per consumer, per incident, for security breach incidents).

This Complaint is noteworthy because it is the first to cite the CCPA.  However, because the security breach at issue appears to have occurred in 2019 – not 2020 (the CCPA went into effect on January 1, 2020) – it is unclear how big a role the citations to the CCPA will play in the case (or whether the final paragraph’s reservation would be upheld).  But additional cases like this one – based on breaches occurring after the CCPA’s effective date – will undoubtedly follow.

Of note, while the CCPA went into effect on January 1, 2020, and the private right of action (related to security breaches) became available on that date, the California attorney general will not begin enforcing CCPA until July 1, 2020.

 

ZDNet.com, relying on research by Forrester Research, recently reported that “GDPR enforcement is on fire!”  This likely foreshadows a rise in US privacy enforcement proceedings in the near future.  Indeed, it appears that where the FTC and state AG offices cannot keep up, plaintiffs in the United States are more than happy to file lawsuits.

While the US still does not have a national privacy law, nor do most states, unfair and deceptive practices law will likely fill at least some of these gaps until additional statutes and regulations are passed.  Moreover, the growing body of privacy law, including the CCPA, the GDPR, and various sector-specific federal statutes, will likely serve increasingly as a “de facto” standard—even where these laws do not technically apply.

ZDNet.com, relying on research by Forrester Research, reported the following statistics regarding GDPR enforcement as of February 3, 2020:

  • DPAs have levied 190 fines and penalties to date (GDPR went into effect in May 2018)
  • Failures of data governance (rather than security breaches) have triggered the most fines and penalties: issues with data accuracy and quality, and with the fairness of processing (such as whether firms collect and process more than the minimum amount of data necessary for a specific purpose).
  • The biggest fines come not just from security breaches, but from the identification of “poor security arrangements,” including the lack of adequate authentication procedures, during investigations.
  • Big fines have resulted from compromised data of a single user.  For example, Spain’s data protection regulator fined two telco providers over issues involving a single customer: one erroneously disclosed a third party’s credentials to a customer, allowing the customer to access sensitive third-party data (resulting in a fine of €60,000), and the other processed a customer’s data without consent (resulting in a fine of almost €40,000).  A hospital in Germany was also fined €150,000 for GDPR violations associated with the misuse of a single patient’s data.
  • Forrester expects that the next enforcement wave will come from failures to address individuals’ privacy rights—such as data access and deletion requests.  For example, a German company that archived customer data in a way that did not allow for deletion was fined €14.5 million.  Forrester also reported that while most of these enforcement actions have resulted from customer requests, there is also an increase in such requests from employees (with respect to delayed or incomplete responses by employers to employee access requests).
  • Another big upcoming enforcement area for the GDPR is expected to be third-party (e.g., vendor) management and due diligence.