On February 3, 2020, Bernadette Barnes, a California resident (on behalf of herself and others similarly situated), filed the first data breach suit ever to cite the CCPA.  The suit named Hanna Andersson (a company specializing in high-end children’s apparel) and Salesforce.com (a software-as-a-service (“SaaS”) company) as defendants, and brought claims of negligence, declaratory relief, and violation of the California Unfair Competition Law in connection with the widespread data breach that Hanna Andersson disclosed to customers and state Attorneys General on January 15, 2020.  According to the complaint, hackers obtained access to customers’ personal information via Salesforce’s Commerce Cloud platform, scraped names, addresses, payment card numbers, CVV codes, and expiration dates, and then made the information available for sale on the dark web.

The citations to the CCPA were in Plaintiff’s negligence and state unfair competition law claims.

The negligence claim cited both the CCPA and Section 5 of the FTC Act to establish Defendants’ duty of care.  Specifically, the CCPA requires that companies take reasonable steps and employ reasonable methods to safeguard personally identifiable information (Cal. Civ. Code Sec. 1798.81.5), and Section 5 of the FTC Act prohibits “unfair…practices in or affecting commerce” (15 U.S.C. Sec. 45(a)), which the FTC has enforced as encompassing the failure to use reasonable measures to protect personally identifiable information.

The state unfair competition claim alleged violation of Cal. Bus. & Prof. Code Sec. 17200 through unlawful acts and practices under the CCPA, specifically (1) “by establishing the sub-standard security practices and procedures described herein; by soliciting and collecting Plaintiffs’ and California Class members’ PII with knowledge that the information would not be adequately protected; and by storing Plaintiffs’ and California Class members’ PII in an unsecure electronic environment in violation of Cal. Civ. Code Sec. 1798.81.5”; and (2) by failing to disclose the data breach to California Class members in a timely and accurate manner, contrary to the duties imposed by Cal. Civ. Code Sec. 1798.82.

The final paragraph of the complaint contained a reservation of plaintiffs’ rights to amend the Complaint to seek damages and relief under Cal. Civ. Code Sec. 1798.100 (which provides California residents with the right to seek up to $750 per consumer, per incident, in connection with certain security breaches).

This Complaint is noteworthy because it is the first to cite the CCPA.  However, because the security breach at issue appears to have occurred in 2019 – not 2020 (the CCPA went into effect on January 1, 2020) – it is unclear how big a role the CCPA citations will play in the case, or whether the final paragraph’s reservation would be upheld.  But additional cases like this one – based on breaches occurring after the CCPA’s effective date – will undoubtedly follow.

Of note, while the CCPA went into effect on January 1, 2020, and the private right of action (related to security breaches) became available on that date, the California attorney general will not begin enforcing CCPA until July 1, 2020.

 

ZDNet.com, relying on research by Forrester Research, recently reported that “GDPR enforcement is on fire!”  This likely foreshadows a similar wave of US privacy enforcement proceedings in the near future.  Indeed, it appears that where the FTC and state attorney general offices cannot keep up, plaintiffs in the United States are more than happy to file lawsuits.

While the US still does not have a national privacy law, and many states lack comprehensive privacy statutes of their own, unfair and deceptive practices law will likely fill at least some of these gaps until additional statutes and regulations are passed.  Moreover, the growing body of privacy laws, such as the CCPA, the GDPR, and numerous federal privacy laws, will likely increasingly serve as “de facto” standards—even where these laws do not technically apply.

The ZDNet.com article, relying on Forrester’s research, reported the following statistics regarding GDPR enforcement as of February 3, 2020:

  • Data protection authorities (DPAs) have levied 190 fines and penalties to date (the GDPR went into effect in May 2018).
  • Failures of data governance, rather than security breaches, have triggered the most fines and penalties; most have resulted from issues with data accuracy and quality and with the fairness of processing (such as whether firms collect and process more than the minimum amount of data necessary for a specific purpose).
  • The biggest fines come not just from security breaches, but from the identification of “poor security arrangements,” including the lack of adequate authentication procedures, during investigations.
  • Big fines have resulted from the compromised data of a single user.  For example, Spain’s data protection regulator fined two telco providers over issues involving a single customer: one erroneously disclosed a third party’s credentials to a customer, allowing the customer to access sensitive third-party data (resulting in a fine of 60K Euros), and the other processed a customer’s data without consent (resulting in a fine of almost 40K Euros).  A hospital in Germany was also fined 150K Euros for GDPR violations associated with the misuse of a single patient’s data.
  • Forrester expects that the next enforcement wave will come from failures to address individuals’ privacy rights—such as data access and deletion requests.  For example, a German company that archived customer data in a way that did not allow for deletion was fined 14.5 million Euros.  Forrester also reported that while most of these enforcement actions have resulted from customer requests, requests from employees are on the rise as well (particularly regarding delayed or incomplete employer responses to employee access requests).
  • Another major upcoming GDPR enforcement area is expected to be third-party (e.g., vendor) management and due diligence.

The Internet of Things (IoT) is often defined as a network of interconnected devices, such as sensors, smartphones, and wearables, or the transfer of data between everyday objects with computing capabilities. It’s where physical infrastructure meets the digital universe, and where machines can “talk” to one another. The IoT creates a connected world, and it’s growing exponentially. One forecast from The Economist predicts that there will be “a trillion connected computers by 2035, built into everything from food packaging to bridges and clothes.”

But what exactly will our smart cities look like, and how will IoT affect how we live and work? Here’s a glance at some recent technologies that may shape how we relate to technology – and to one another – in the not-so-distant future:

  • Modern Medicine – like the new Stanford Hospital, which opened its doors back in November, touting futuristic features including a fleet of robots programmed to deliver linens, take out the trash, and prepare individual-dose prescriptions for medication-dispensing systems throughout the hospital; the ability for patients to adjust the room temperature or order a meal via a bedside tablet; and its MyHealth app, which provides updated electronic health records and allows patients and their families to view test results, pay medical bills, or send a message to a physician.
  • Tender-less Bars – like Sestra Systems’ TapWise platform, which allows for automated self-service beer stations with controlled access (e.g., via an app, RFID, or pin code); precision pouring to avoid waste and deliver just the right amount of foam; and real-time data on inventory, event schedules, weather, and point-of-sale records.
  • Smart Farming – like Microsoft Azure’s FarmBeats, a platform that utilizes a system of sensors, drones, TV white spaces, and cloud solutions to provide robust data analytics in the agricultural sector, with features ranging from soil moisture maps to livestock tracking, ultimately increasing crop yield and reducing waste.
  • Next-Generation Fashion – like Levi’s new jean jacket incorporating Google’s Jacquard™ technology, which allows wearers to connect the jacket to their smartphone and perform actions (e.g., play music, take a photo, or send a message) through the interactive jacket cuff and receive LED-light notifications without looking at their screen.
  • Automated Waste Management – like Songdo, South Korea’s truck-free waste management system, composed of pneumatic pipes and automated waste bins, which automatically sorts, recycles, buries, or burns waste through a network of underground pipes leading to the city’s processing center.

While the world may be setting the stage to embrace 5G and the benefits that come with it, including more efficient energy usage, concerns about cybersecurity and the hacking of connected devices loom on the horizon. With greater connectivity comes greater responsibility for privacy and data security, and tech companies should take heed to protect consumers.

The New Jersey attorney general made headlines on January 24, 2020, when he directed prosecutors to immediately stop using a facial recognition app produced by Clearview AI (https://clearview.ai/).  Clearview AI markets its app as a tool for stopping criminals; the Clearview AI website states: “Clearview helps to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.”  While Clearview asserts that the app is available only to law enforcement agencies and select security professionals as an investigative tool, and that its results contain only public information, the NJ attorney general cited concerns.  Law360 reported that the AG’s office stated: “[W]e need to have a sound understanding of the practices of any company whose technology we use, as well as any privacy issues associated with their technology.”

Clearview AI’s database appears to contain broader data than that of many of its competitors.  While most facial recognition programs allow law enforcement to compare images of suspects against databases composed of mug shots, driver’s license photographs, and other government-issued or government-owned photos (and are usually confined to the state in which they operate), Clearview’s data appears to be national in scope and to include information from social media sites such as Facebook, Twitter, Venmo, and YouTube, as well as elsewhere on the Internet.  All told, the database contains more than three billion photos.  And it is used by more than 600 law enforcement agencies, ranging from local police departments to the F.B.I. and the Department of Homeland Security.

Clearview’s broader set of data raises a number of questions.  On the privacy front are questions as to whether the data was obtained legally.  For example, we recently discussed the Facebook settlement of the BIPA class action.  If Facebook biometric information was illegally obtained/used (i.e., without proper written consent under BIPA), then are all uses of that same illegally obtained information also violations of BIPA?  Or is the recovery for the BIPA violation exhausted by virtue of the recent $550 million Facebook settlement?  It appears, based on the plain language of the statute, that there would be no exhaustion here, i.e., that each unlawful collection of biometric identifiers/information would need written consent.  Of course, it also appears that Clearview may be exempt under BIPA.  See BIPA, Sec. 25(e) (“Nothing in this Act shall be construed to apply to a contractor, subcontractor, or agent of a State agency or local unit of government when working for that State or local unit of government.”).

Another possible Clearview privacy issue concerns data scraping.  While it is unknown exactly how Clearview obtained its social media data, it appears that at least some of it was obtained via web scraping.  For example, Twitter sent Clearview a cease-and-desist letter on January 21, 2020, demanding that Clearview stop collecting images from Twitter and delete any data that it previously collected.  As support for the letter, Twitter has pointed in the press to its terms of service, which state that “scraping the services without the prior consent of Twitter is expressly prohibited.”

Of course, it is unclear whether Twitter’s terms of service will hold water in providing a legal basis for stopping Clearview’s web scraping.  For example, in a decision issued last fall, the Ninth Circuit in hiQ v. LinkedIn ruled in favor of a data scraper (albeit on different grounds).

On the flip side of the possible privacy violations – and the desire of many (e.g., the NJ attorney general and BIPA plaintiffs) to know how Clearview is amassing its data and whether users have consented – are Clearview AI’s intellectual property rights.  Is its data collection proprietary?  Do its software and data collection processes contain trade secrets?  And if so, who has a right to know what Clearview is doing, and how far does that right extend?

Last month, the UK Information Commissioner’s Office released a comprehensive Age Appropriate Design Code that seeks to protect children within the digital world.

The code sets out 15 flexible standards of age appropriate design for online services. These standards focus on providing “high privacy” default settings, transparency to children with regard to active location tracking and parental controls, and minimized data collection and use, while prohibiting nudge techniques that could encourage children to reveal more personal details and profiling that automatically recommends sexual or violent content to children based on their searches.

While there is some overlap between the UK’s code and our own Children’s Online Privacy Protection Act of 1998 (COPPA), the UK’s proposed regulation is far more extensive in scope. For instance, while COPPA only covers children under the age of 13, the UK code expands its protection to teenagers, covering all children under the age of 18. With regard to application, COPPA is directed toward websites or online services that knowingly collect and use data from children, whereas the Age Appropriate Design Code applies to “information society services likely to be accessed by children” in the UK. Such services include not just search engines, apps, online games, and social media platforms, but also connected toys and devices – a sweeping application likely to affect a number of big tech companies, some of whom think the code conflates the issue of inappropriate content with data protection.

The proposed code is issued under Britain’s Data Protection Act 2018 and is set for Parliament’s approval. Once the code takes effect, online services within its scope would have 12 months to become compliant. If implemented, violators could face fines of up to 4% of their global annual revenue.

Following up on our post of January 22, 2020 (“Big News in Biometrics – Supreme Court Declines to Weigh in on What Plaintiffs Must Show to Bring Biometric Privacy Suit”), Facebook has now agreed to pay $550 million to settle the BIPA class action lawsuit.  This is the largest BIPA settlement ever, and it will likely serve to encourage the filing of additional class action lawsuits.  If you are a business that engages with biometric information in Illinois, beware… and revisit your written consent policies.

The National Institute of Standards and Technology (NIST) released version 1.0 of its Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, which follows the structure of the Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework).  The Privacy Framework acknowledges that failure to manage privacy risks can have direct adverse consequences at both the individual and societal levels, with follow-on effects on organizations’ brands, bottom lines, and future prospects for growth.  As the Framework puts it: “Finding ways to continue to derive benefits from data processing while simultaneously protecting individuals’ privacy is challenging, and not well-suited to one-size-fits-all solutions.”

The Framework includes three parts: Core, Profiles, and Implementation Tiers.

The “Core” is designed to enable a dialogue among the various stakeholders – from the executive level to the implementation/operations level – and sets forth activities and outcomes, including:

  • Functions: organize foundational privacy activities, including Identify-P, Govern-P, Control-P, Communicate-P, and Protect-P
  • Categories: subdivision of a Function into groups of privacy outcomes
  • Subcategories: subdivision of a Category into specific outcomes of technical and/or management activities

Profiles can be used to describe the current state and the desired target state of specific privacy activities.  They are designed to enable the prioritization of the outcomes and activities that best meet organizational privacy values, mission or business needs, and risk.

Implementation Tiers support organizational decision-making and communication about how to manage privacy risk by taking into account the nature of the privacy risks engendered by an organization and the sufficiency of the organization’s processes and resources to manage such risks.  The Framework specifies four Tiers that recognize a progression in managing privacy risks: (1) Partial, (2) Risk Informed, (3) Repeatable, and (4) Adaptive.
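For readers who think in code, the relationship among Functions, Categories, Subcategories, and Profiles can be sketched as a simple data model.  The TypeScript snippet below is an illustrative sketch only: the five Function names come from the Framework itself, but the category and subcategory text, the state labels, and the gap-analysis helper are hypothetical placeholders rather than NIST’s actual taxonomy.

```typescript
// Illustrative sketch of the Privacy Framework's Core/Profile structure.
// The five Function names come from the Framework; the category/subcategory
// text, state labels, and gap-analysis logic are hypothetical placeholders.

type PrivacyFunction =
  | "Identify-P"
  | "Govern-P"
  | "Control-P"
  | "Communicate-P"
  | "Protect-P";

interface Outcome {
  privacyFunction: PrivacyFunction;
  category: string;    // a grouping of privacy outcomes (placeholder text)
  subcategory: string; // a specific technical/management outcome (placeholder)
}

// A Profile selects Core outcomes and records how well they are achieved
// today versus where the organization wants to be.
interface ProfileEntry {
  outcome: Outcome;
  currentState: "not started" | "partial" | "achieved";
  targetState: "partial" | "achieved";
}

type Profile = ProfileEntry[];

// Simple gap analysis: which selected outcomes still fall short of the target?
function gaps(profile: Profile): ProfileEntry[] {
  return profile.filter((entry) => entry.currentState !== entry.targetState);
}

// Example usage with placeholder outcomes.
const orgProfile: Profile = [
  {
    outcome: {
      privacyFunction: "Identify-P",
      category: "Data processing inventory (placeholder)",
      subcategory: "Systems that process PII are catalogued (placeholder)",
    },
    currentState: "partial",
    targetState: "achieved",
  },
];

console.log(gaps(orgProfile));
```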

The NIST Framework offers flexible and useful practices that can be adopted as appropriate by entities engaging in personal data processing activities.  In its accompanying Roadmap for advancing the Privacy Framework, NIST seeks continued collaboration with its stakeholders from government, academia, and industry on privacy risk management, including in the following priority areas for development, alignment, and collaboration: (1) Privacy Risk Assessment, (2) Mechanisms to Provide Confidence, (3) Emerging Technologies (IoT and AI), (4) De-Identification Techniques and Re-identification Risks, (5) Inventory and Mapping, (6) Technical Standards, (7) Privacy Workforce, and (8) International and Regulatory Aspects, Impacts and Alignment.

Take note that while privacy standards are still in their infancy, they can be useful tools for showing that an entity is committed to privacy and has engaged in industry best practices.  Additional privacy management standards include ISO/IEC 27701 (Security techniques for privacy information management), ISO/PC 317 (Consumer protection: privacy by design for consumer goods and services), and IEEE P7002 (Data Privacy Practices); the International Association of Privacy Professionals (IAPP) also has a Privacy Engineering Section. Which one(s) will you follow?

Since the GDPR went into effect in May 2018, there’s been a noticeable surge in the display of cookie consent notices (or cookie banners), with large variations in text, the choices presented to users, and even the position of the notice on the webpage.

But how are users actually interacting with these cookie banners, if at all? And more importantly, which cookie notice format ensures that users can make free and informed choices under the law?

A study conducted last fall by the University of Michigan and Ruhr-University Bochum in Germany examined how EU users interact with and respond to cookie consent notices. In summary, the study found the following:

  • Position: Users are most likely to interact with a cookie consent notice when it is in the form of a cookie banner displayed on the bottom of the screen (for mobile devices) or bottom left of the screen (for desktop computers).
  • Choice Mechanism: Users are more willing to accept cookie tracking given a binary choice (accept or decline), as opposed to selecting or deselecting the types of cookies in checkboxes (e.g., necessary, personalization & design, analytics, social media, marketing) or the types of vendors who may use the tracked cookies (e.g., FB, YouTube, Google Analytics, Google Fonts, Ionic, Google Ads).
  • Nudging: The use of pre-checked boxes or highlighted “accept” buttons has a large effect on the choices users make and increases acceptance rates.
  • Language: The use of technical language such as “cookies” in the notice (as opposed to “your data” or other) is more likely to result in a user declining cookie tracking.

To obtain valid consent for the processing of personal data, Recital 32 of the GDPR requires “a clear affirmative act” establishing a “freely given, [purpose-]specific, informed and unambiguous indication of […] agreement to the processing of personal data.” It further specifies that “pre-ticked boxes or inactivity should not…constitute consent,” that “the request must be clear, concise, and not unnecessarily disruptive to the use of the service for which it is provided,” and that “[w]hen the processing has multiple purposes, consent should be given for all of them.”

According to the study, and pending further regulation or guidance on how to obtain clear, freely-given, and purpose-specific consent, “opt-in” cookie banners presenting users with a number of informed and unambiguous choices (which are not pre-selected) are predicted to be the trend for both multinational and US companies.
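As a rough illustration of what such an opt-in approach can look like in practice, the TypeScript sketch below models a consent object in which no optional cookie category is pre-selected and nothing is recorded until the user takes a clear affirmative action. The category names echo those referenced in the study; the interface, function names, and logic are hypothetical assumptions and are not offered as a compliance recipe.

```typescript
// Hypothetical sketch of an opt-in cookie consent model: every non-essential
// category defaults to "false" (no pre-ticked boxes), and nothing is recorded
// until the user takes a clear affirmative action.

interface CookieConsent {
  necessary: true;        // strictly necessary cookies need no opt-in
  personalization: boolean;
  analytics: boolean;
  socialMedia: boolean;
  marketing: boolean;
  timestamp?: string;     // recorded only once the user actively submits
}

// GDPR-style default: no optional category is pre-selected.
const defaultConsent: CookieConsent = {
  necessary: true,
  personalization: false,
  analytics: false,
  socialMedia: false,
  marketing: false,
};

// Called only from an explicit user action (e.g., a "Save choices" button),
// never on page load or from inactivity.
function recordConsent(choices: Partial<CookieConsent>): CookieConsent {
  return {
    ...defaultConsent,
    ...choices,
    necessary: true,
    timestamp: new Date().toISOString(),
  };
}

// Example: the user ticks only the analytics box.
const consent = recordConsent({ analytics: true });
console.log(consent);
```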

Thus, if you give a user a cookie notice, make sure it presents a clear and meaningful choice. If the cookie notice presents a clear and meaningful choice, is the user more likely to accept cookies? Possibly. But that’s another story.

One notable difference between the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) is that only the latter provides individuals with the right not to be subject to automated decision-making, including profiling, that has legal or similarly significant effects on the individual.

But the CCPA still creates issues for covered entities operating in the artificial intelligence (AI) and machine learning (ML) space.  For example, how does one comply with an individual’s request to delete their data (the so-called right to be forgotten) with respect to a “black box” ML model that used that individual’s personal information as training data?  When is a consumer’s data sufficiently “aggregated” or “deidentified” such that its use in an ML model escapes the CCPA’s scope?
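To make the deletion question concrete, the hypothetical TypeScript sketch below shows the comparatively easy half of the problem: purging a consumer’s records from stored training data after a verified request.  The record fields and function names are illustrative assumptions; how (or whether) to remove that consumer’s influence from a model that has already been trained on the data remains the open question flagged above.

```typescript
// Hypothetical sketch: honoring a verified CCPA deletion request against
// stored training data. Record fields and function names are illustrative.
// Purging rows is the easy part; removing the consumer's influence from an
// already-trained "black box" model (e.g., by retraining) is the open question.

interface TrainingRecord {
  consumerId: string;
  features: number[];
  label: number;
}

function deleteConsumerData(
  dataset: TrainingRecord[],
  consumerId: string
): TrainingRecord[] {
  // Remove every record tied to the requesting consumer.
  return dataset.filter((record) => record.consumerId !== consumerId);
}

// Example usage: purge the consumer's rows, then flag the model for retraining
// (or another remediation step) under whatever policy the business adopts.
const dataset: TrainingRecord[] = [
  { consumerId: "c-123", features: [0.1, 0.9], label: 1 },
  { consumerId: "c-456", features: [0.4, 0.2], label: 0 },
];

const scrubbed = deleteConsumerData(dataset, "c-123");
console.log(scrubbed.length); // 1
```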

If one thing is certain, it is far better to take a proactive approach and address these questions early in the design and development of new products and services.  Be sure to invite the appropriate stakeholders to that conversation, including your attorney!

The California Consumer Privacy Act (CCPA) went into effect on January 1, 2020.  The CCPA is front-page news, and rightfully so.  But while the major focus has been, and continues to be, on the CCPA, another piece of legislation that deserves attention also went into effect on January 1, 2020.

Bill SB-327 – “Information Privacy: Connected Devices” – is California’s new Internet of Things (IoT) security law, and it requires “manufacturers” of “connected devices” to equip those devices with “reasonable security features.”  While the language of SB-327 gives some guidance on what may be deemed “reasonable,” there is plenty of gray area and likely much more in the way of interpretation up ahead.
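By way of illustration only: one feature the statute’s own language points to is requiring a user to generate a new means of authentication before a device can be accessed for the first time.  The TypeScript sketch below is a hypothetical, simplified illustration of that idea; the type and function names are invented, and nothing here should be read as guidance on what the statute ultimately deems “reasonable.”

```typescript
// Hypothetical sketch of one approach to SB-327's "reasonable security
// features": block access to a connected device until the user replaces the
// factory credential with one of their own. Names are illustrative, not
// drawn from the statute.

interface DeviceState {
  deviceId: string;
  factoryCredential: string; // unique per device, never shared across units
  userCredential?: string;   // must be set before first use
}

function canGrantAccess(device: DeviceState): boolean {
  // No access until the user has generated their own means of authentication.
  return device.userCredential !== undefined;
}

function setUserCredential(device: DeviceState, credential: string): DeviceState {
  if (credential.length < 8 || credential === device.factoryCredential) {
    throw new Error(
      "Credential must be at least 8 characters and differ from the factory default."
    );
  }
  return { ...device, userCredential: credential };
}

// Example: first-boot flow.
let thermostat: DeviceState = { deviceId: "t-001", factoryCredential: "A9f3-Kq7x" };
console.log(canGrantAccess(thermostat)); // false
thermostat = setUserCredential(thermostat, "my-new-passphrase");
console.log(canGrantAccess(thermostat)); // true
```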

The applicability of CA’s IoT law is massive.  The definition of “connected device” is “any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”  Use your imagination… just think about all of those connected devices that you have on you right now, in your office, waiting for you at home.  And the law applies to all connected devices sold or offered for sale in California, regardless of where manufactured.

The law is clear that there is no private right of action and that the CA Attorney General, a city attorney, a county counsel, or a district attorney has exclusive authority to enforce the law.  Good.  What is absent from the legislation, and to date unknown, is the penalty for violating the law.  Bad.

Stay tuned.