Coronavirus disease, also known as COVID-19, poses significant challenges to business organizations. Many businesses plan to address these challenges by encouraging or even requiring remote work through telecommuting or work-from-home (WFH) programs. Organizations as disparate as hotel chains, major universities, and even the Internet Corporation for Assigned Names and Numbers (ICANN), which helps coordinate the domain name system that underpins the web, are decentralizing their activities and requiring people to work remotely. For many organizations, this will be the first time they have deployed widespread telecommuting or WFH programs, and doing so will present a significant technical challenge. In the rush to make such remote work programs operate at scale, however, organizations should also consider the significant privacy, data protection, and cybersecurity risks they will face.

Baked-in online vulnerabilities could expose data and communications

The primary remote work privacy, data protection, and cybersecurity risks flow from the fact that the organization’s entire data and communications infrastructure is now being accessed remotely by numerous people. In a normal workday, even if most of an organization’s proprietary data is hosted in the cloud, that data is usually accessed and used within the secure confines of the business itself. In the crisis telecommuting circumstances in which we find ourselves, this same information is being pulled in hundreds or thousands of directions.

At each worker’s home, the organization’s data is only as secure as that worker’s own setup. The more individualized, personal networks that employees rush into service, the more likely it is that some of those networks will be insecure. For example, the security settings on an employee’s home wifi network may be nonexistent, or the employee may piggyback onto a public wifi system. Any of these insecure options could allow a third party who has compromised that network to reach the organization’s supposedly bullet-proof network, exposing data to opportunistic cyber eavesdroppers who exfiltrate data, steal passwords, and engage in far worse mischief. These strained, varied security circumstances expand an organization’s attack surface and concomitantly increase the risk of accidental breach. Moreover, these risks extend beyond data at rest to businesses’ internal communications.

In a conventional office, workers communicate in onsite meetings and brainstorm in the break room. If an organization uses a work-focused electronic collaboration platform (such as Slack or Microsoft Teams), that messaging will generally be viewed only within the physical and conceptual space of the company itself. In a remote work world, all of those communications will be read and transmitted outside the business. Suddenly, the kinds of candid, unpolished conversations that make work move faster may be at risk of exposure.

Unfamiliar new online tools could also confound privacy, data protection, and cybersecurity efforts

Remote workers use a variety of software and hardware that can differ from the standardized equipment they use when physically in the office. In many cases, this means there are no set policies and procedures for the new equipment. Even where there are well-established policies, newly minted remote workers have no experience applying those policies, if they have ever read them at all. And even if remote workers have read and understood all of the relevant policies, they are simply not accustomed to the equipment. This unfamiliarity can lead to enormous, if innocent, operational security failures.

Mixing business with pleasure can multiply security concerns

Remote workers often use the same software and hardware to manage their personal and work lives. This can multiply the risks posed by workers’ personal online activity; all of a sudden, online dating and clicking on memes can carry heightened operational data risks. Moreover, while the company may be able to push patches and updates remotely, it will often have to rely on the remote worker to apply necessary security tweaks and upgrades. Remote workers may not make those updates a priority. Sometimes this failure results from simple inattentiveness, but other times the remote worker may feel that their personal privacy is threatened by the process.

Different expectations of privacy between work and home

Remote work can blur the lines between home and work privacy. Employees generally have more limited privacy rights “at the office” – in workspaces that belong to the employer – than in their private homes. These workplace limitations on privacy may also extend to employer-owned computers and devices. Companies routinely monitor or search work computers and phones, consistent with their policies. Remote workers may not appreciate that they are bringing their employer and its policies home with them. Similarly, employers’ policies and practices may implicate the privacy interests of their employees, as well as their families and cohabitants, when newly applied in remote personal spaces.

Privacy, data protection, and cybersecurity litigation risks

The worst-case scenario will involve novel litigation over WFH privacy, data protection, and cybersecurity risks. While that potential litigation is worth contemplating, the next pragmatic step for organizations is actually minimizing those risks.

Best practices and hacks

Remote workers must take the following steps to minimize privacy and cybersecurity risks:

  • First, always keep work and leisure separate. Where possible, use separate machines; for example, it makes a lot of sense to do all of one’s social media and web browsing on a mobile device and all of one’s work on a laptop. People often use the same device for both, which can work but requires intellectual discipline, training, and clear employer policies. This work/play separation should also extend to passwords and logins: always keep personal passwords separate and distinct from work passwords.
  • Second, do not use unsecured wifi or Bluetooth. A public wifi network or Bluetooth connection is inherently less secure because you don’t know who set it up or who else is connecting to it. In a perfect world, one would never use public wifi. For the times that is not practical, you can still limit the potential damage by: (1) sticking to well-known public networks that are more likely to adhere to certain standards, (2) looking for password-protected networks, (3) limiting your data use across those networks, (4) sticking to “HTTPS” connections rather than unsecure HTTP connections, and (5) using a VPN, which will encrypt your data traffic.
  • Third, take steps to minimize the impact of a stolen or misplaced laptop. Are there features that can remotely brick the device or render its data unreadable? Is the cloud being used strategically to reduce the impact of a lost device?
  • Fourth, engage in regular information hygiene. What this looks like varies by industry but, for example, after completing a project, ensure a client’s data has been encrypted, backed up to a secure location, and completely erased from local storage (a minimal sketch of this step follows this list). Consider a policy that client data cannot be sent to or from a mobile device (including an offsite laptop) unless it is encrypted.
  • Fifth, religiously install all security updates. To ensure this, companies should consider requiring that devices be brought into the office periodically for security hygiene “check-ups.”
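
To make the fourth bullet concrete, below is a minimal, illustrative sketch of an “encrypt, back up, erase” routine for a client file. It is a sketch under stated assumptions, not a prescription: it assumes the third-party Python cryptography package is installed, and the file paths, backup location, and key handling are hypothetical placeholders that a real deployment would replace with its own key management.

```python
# Illustrative "information hygiene" sketch: encrypt a client file, copy the
# ciphertext to a backup location, then remove the local plaintext copy.
# Assumes: pip install cryptography. Paths and key handling are placeholders.
from pathlib import Path
import shutil

from cryptography.fernet import Fernet


def archive_client_file(plaintext_path: Path, backup_dir: Path, key: bytes) -> Path:
    """Encrypt a local client file, back up the ciphertext, and delete the plaintext."""
    cipher = Fernet(key)
    ciphertext = cipher.encrypt(plaintext_path.read_bytes())

    encrypted_path = plaintext_path.with_name(plaintext_path.name + ".enc")
    encrypted_path.write_bytes(ciphertext)

    backup_dir.mkdir(parents=True, exist_ok=True)
    backup_copy = backup_dir / encrypted_path.name
    shutil.copy2(encrypted_path, backup_copy)  # the "secure location" could also be an encrypted cloud bucket

    plaintext_path.unlink()  # ordinary deletion; truly wiping data may require dedicated secure-erase tooling
    return backup_copy


# Example usage (illustrative values):
# key = Fernet.generate_key()  # keep the key in a password manager or KMS, never next to the data
# archive_client_file(Path("client_report.docx"), Path("/mnt/secure_backup"), key)
```

The design point is simply that the plaintext does not linger on the laptop: only the encrypted copy is backed up, and the key is stored separately from the data.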

From an organizational standpoint, there are plenty of best practices for security and remote working:

  • Develop and scrutinize your remote working policies and procedures, including how to use specific types of devices. Also consider developing separate policies for remote working versus employer-owned workspaces.
  • Only allow approved devices to connect to company networks.
  • Consider encryption policies for all data transfers.
  • Evaluate Mobile Device Management (MDM) and Mobile Application Management (MAM) platforms that can help to secure remote workers’ data and enforce the company’s security policies.
  • Scrutinize your access protocols. Best practices dictate use of two-factor authentication for accessing the organization’s networks, electronic mail, and data (a simple sketch follows this list).
  • Consider drafting physical security protocols for remote workers. Many workers have long relied on the business’s physical security safeguards and may now unwittingly place company assets at risk.
  • In light of the potential for extended office closures, partially abandoned offices may face a new kind of risk. Organizations should ensure that proper security measures and access controls remain in place to protect the physical and information technology assets left in largely empty buildings.
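
As a companion to the access-protocol bullet, here is a minimal sketch of one common way to implement a second authentication factor: time-based one-time passwords (TOTP). It assumes the third-party pyotp library; the account name, issuer, and enrollment flow are hypothetical placeholders rather than a statement of how any particular organization does or should configure two-factor authentication.

```python
# Illustrative TOTP two-factor sketch. Assumes: pip install pyotp.
# The account name and issuer below are hypothetical placeholders.
import pyotp


def enroll_user() -> tuple[str, str]:
    """Create a per-user TOTP secret and a provisioning URI for an authenticator app."""
    secret = pyotp.random_base32()  # store server-side, per user, in a secrets store
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="jane.doe@example.com", issuer_name="ExampleCo VPN"
    )
    return secret, uri  # the URI is typically rendered as a QR code for the user to scan


def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code the remote worker enters after their password."""
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)


# Example usage (illustrative):
# secret, uri = enroll_user()
# print(pyotp.TOTP(secret).now())  # what the authenticator app would display right now
# assert verify_second_factor(secret, pyotp.TOTP(secret).now())
```

The point of the second factor is that a stolen or phished password alone no longer opens the organization’s networks, email, or data.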

The Federal Trade Commission (“FTC”) released its 2019 Privacy and Data Security Update, highlighting its 2019 enforcement actions directed to the protection of consumer privacy and data security.

In its roundup of 2019 privacy cases, the Update highlights the FTC’s and the Department of Justice’s record $5 billion penalty imposed on Facebook – the largest ever imposed on any company for violating consumers’ privacy – based on allegations that Facebook violated the FTC’s 2012 order against the company. Other notable privacy cases include the enforcement action against data analytics company Cambridge Analytica, its former CEO, Alexander Nix, and app developer Aleksandr Kogan, alleging that the defendants used false and deceptive tactics to harvest personal information from millions of Facebook users for voter profiling and targeting, as well as the FTC’s first action against a developer of a “stalking app,” Retina-X, alleging that the company’s practices enabled use of its apps for stalking and other illegitimate purposes. The FTC’s enforcement activities spanned several other alleged abuses concerning spam, deception as to the use of personal emails, the purchase and collection of counterfeit and phantom debts, bogus credit repair services and student loan debt relief schemes, and deceptive lead generators.

With respect to the 2019 data security and identity theft cases, the Update highlights the settlement with Equifax, totaling between $575 million and $700 million, as well as several enforcement actions alleging failure to store sensitive personal information, including Social Security numbers, in an encrypted format and failure to use reasonable, low-cost, and readily available security protections to safeguard clients’ personal information.

The Update also highlights two actions involving violations of the Gramm-Leach-Bliley Safeguards Rule; thirteen actions involving false claims of participation in Privacy Shield; four actions involving violations of the Children’s Online Privacy Protection Act (COPPA), including a $170 million judgment against Google and its subsidiary YouTube, the largest civil penalty ever obtained under COPPA; and eight actions enforcing the Do Not Call provisions against telemarketers.

As reflected in the Update, 2019 was a banner year for the FTC, with its record-setting $5 billion penalty against Facebook and the $170 million COPPA fine against Google and YouTube. Will 2020 top it?

The European Commission recently presented strategies for data and Artificial Intelligence (AI), focusing on promoting excellence in AI and building trust. The Commission’s White Paper, “On Artificial Intelligence – A European approach to excellence and trust,” addresses the balance between promoting AI and regulating its risks. “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.” In addition to the benefits AI affords to individuals, the White Paper notes the significant roles AI will play at a societal level, including in achieving Europe’s Sustainable Development Goals and in supporting democratic processes and social rights. “Over the past three years, EU funding for research and innovation for AI has risen to €1.5 billion, i.e. a 70% increase compared to the previous period” (n.b. as compared to €12.1 billion in North America and €6.5 billion in Asia in 2016). The White Paper also addresses recent advances in quantum computing and how Europe can be at the forefront of this technology.

“An Ecosystem of Excellence.”  The White Paper outlines several areas for building an ecosystem of excellence: (a) working with member states, including revising the 2018 Coordinated Plan to foster the development and use of AI in Europe, to be adopted by the end of 2020; (b) focusing the efforts of the research and innovation community through the creation of excellence and testing centers; (c) building skills, including establishing and supporting, through the advanced skills pillar of the Digital Europe Programme, networks of leading universities and higher education institutes to attract the best professors and scientists and to offer world-leading master’s programs in AI; (d) focusing on SMEs, including ensuring that at least one digital innovation hub per member state has a high degree of specialization in AI and creating a €100 million pilot scheme to provide equity financing for innovative developments in AI; (e) partnering with the private sector; (f) promoting the adoption of AI by the public sector; (g) securing access to data and computing infrastructures; and (h) cooperating with international players.

“An Ecosystem of Trust: Regulatory Framework for AI.”  “The main risks related to the use of AI concern the application of rules designed to protect fundamental rights (including personal data and privacy protection and non-discrimination), as well as safety and liability-related issues.”  The White Paper notes that developers and deployers of AI are already subject to European legislation on fundamental rights (data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules, but that certain features of AI may make the application and enforcement of such legislation more difficult. With respect to a new regulatory framework, the White Paper proposes a “risk-based approach” and sets forth two cumulative criteria for determining whether an AI application is “high-risk”: (a) the AI application is employed in a sector where significant risks can be expected to occur (e.g., healthcare, transport, energy, and parts of the public sector); and (b) the AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise (e.g., it produces legal or similarly significant effects for the rights of an individual or a company, poses a risk of injury, death, or significant damage, or produces effects that cannot reasonably be avoided by individuals or legal entities).

Does Europe’s risk-based approach adequately balance regulation and AI innovation? It is a difficult question that jurisdictions around the globe have been grappling with. The concept of a risk-based approach, however, should sound familiar to the private sector, particularly to large enterprises that have already embedded AI and other emerging technologies into their internal risk management frameworks. For smaller-scale experiments, perhaps there is room for more regulatory flexibility to encourage innovation in AI. For example, Arizona was early to welcome autonomous vehicle testing on its roads in 2015 and is experimenting with a “regulatory sandbox” for FinTech.

Last week, 23andMe, the direct-to-consumer genetic testing service, announced a strategic license agreement with Almirall, a leading global pharmaceutical company, for the rights to a bispecific monoclonal antibody designed to block IL-36 cytokines implicated in autoimmune and inflammatory diseases. The antibody was discovered by 23andMe’s Therapeutics team using genetic information from more than 8 million personal testing kits from customers who consented to the use of their data for genetic research. While the financial terms of this particular license agreement have not been disclosed, 23andMe’s $300 million deal with GlaxoSmithKline in 2018 for a four-year drug research and development partnership gives us some idea of the potential profit from this kind of venture.

Putting aside the obvious health and research benefits of such a massive data pool, the use of millions of consumer saliva samples naturally raises the ethical question of who should profit from what is arguably your most intimate and valuable data – your DNA. Despite these lucrative partnerships with big pharma, consumers who provide their genetic blueprint to testing services are not getting a piece of the pie. A review of the Terms of Service for 23andMe, for instance, along with those of other industry giants like Ancestry.com and FamilyTreeDNA, reveals that consenting users waive all potential claims to a share of the profits arising from the research or commercial products that may be developed.[1]

A larger issue, however, relates to the privacy and data security concerns surrounding these large databases of genetic data. In addition to revealing that you have a fourth cousin living just down the road or that your Italian grandmother is anything but, your DNA may also be used to identify you specifically, more reliably than your name or even your Social Security number. And depending on who has access to these databases – be it big pharma, insurance companies, hackers, law enforcement, or others – there is always the potential for your identity and health data to be compromised or used against you, even if the data is aggregated or de-identified, because your genetic makeup is itself a direct personal identifier. For instance, California’s notorious Golden State Killer was caught and arrested in 2018 after investigators matched a DNA sample from the crime scene to genetic testing results uploaded to a public genealogy website by one of his relatives. Notably, the Contra Costa County District Attorney’s office was able to obtain that genetic match without any subpoena, warrant, or similar criminal legal process, which raises additional privacy and security concerns. And last December, the Pentagon issued a memo advising members of the military not to use consumer DNA kits, citing security risks.

Consumers seem to be getting wind of these privacy concerns, as evidenced by declining sales of genetic testing kits and by genealogy service providers, including 23andMe, struggling to stay afloat.

[1] This arrangement evokes an ethical inquiry similar to that surrounding Henrietta Lacks, who received no compensation for the cells taken from her without her consent, which have been used for decades in the development of countless vaccines, medical research projects, and drug formulations.

The number of actions to enforce the European Union’s General Data Protection Regulation (GDPR) against a wide range of companies continues to rise.  Germany, a country where privacy enjoys strong legal protection, is establishing itself as a favorite jurisdiction for enforcement of the GDPR.  And, not surprisingly, Facebook is one of the companies in the crosshairs.

Last February, Germany’s Federal Cartel Office held that Facebook’s practice of combining data it collects across its suite of products, which includes WhatsApp and Instagram, is anticompetitive, and ordered Facebook to stop. That ruling was later overturned on appeal. Last month (January 2020), a state court in Berlin, assessing Facebook’s terms of service, determined that Facebook had violated the GDPR’s requirement that a data subject give “informed consent” before his or her personal information is collected.

Interestingly, this latest action against Facebook was brought by a consumer group, the Federation of German Consumer Organizations. While this regional interpretation of the GDPR’s informed consent provisions is itself noteworthy, the real impact may be that consumer groups and other organizations rely on the decision to establish standing to seek enforcement of the GDPR without the involvement of an injured or affected data subject.

We will see how this state court ruling fares on review in Germany, and how, ultimately, other jurisdictions in the EU come out on this important issue of standing.  The standing provision of the CCPA has already been a challenge and the subject of debate.  As more states in the U.S. craft and pass privacy legislation, we can expect much debate and, most likely, litigation around this important issue.

The California Attorney General recently released modified CCPA guidance. While the modified guidance offers additional examples for CCPA compliance and clarifies certain obligations, several open issues and ambiguities remain. Below are highlights of the changes; note that written comments are due by February 25, 2020.

Definitions: The modified guidance clarifies the definition of “household” to mean a person or group of people who: (1) reside at the same address, (2) share a common device or the same service provided by a business, and (3) are identified by the business as sharing the same group account or unique identifier.

Interpretation of CCPA Definition of Personal Information: The modified guidance explains that the definition of personal information “depends on whether the business maintains information in a manner that ‘identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.’” As an example, “if a business collects the IP addresses of visitors to its website but does not link the IP address to any particular consumer or household, and could not reasonably link the IP address with a particular consumer or household, then the IP address would not be ‘personal information.’”

Notice to Consumers: The guidance helpfully summarizes four scenarios where notice is required.

(1) Every business that must comply with the CCPA shall provide a privacy policy;

(2) A business that collects personal information from a consumer shall provide a notice at collection;

  • When collecting PI from a mobile device for a purpose that the consumer “would not reasonably expect,” the business must provide a “just-in-time” notice explaining the categories of personal information being collected and a link to the full notice.

(3) A business that sells personal information shall provide a notice of right to opt-out; and

  • The guidance provides that an opt-out button may be used in addition to posting the notice of right to opt-out, and when it is used it shall appear to the left of the “Do Not Sell My Personal Information” or “Do Not Sell My Info” link and should be approximately the same size as other buttons on the business’s webpage.

(4) A business that offers a financial incentive or price or service difference shall provide a notice of financial incentive.

All notices must be reasonably accessible to consumers with disabilities.

Consumer Requests:  The modified guidance provides that businesses may (rather than “shall”) use a two-step process for online requests to delete. In addition to government-issued identification numbers, financial account numbers, health insurance or medical identification numbers, account passwords, and security questions and answers, a business shall not at any time disclose, in response to a request to know, “unique biometric data generated from measurements or technical analysis of human characteristics.”

Service Providers:  A service provider shall not retain, use, or disclose personal information obtained in the course of providing services except: (1) to perform the services specified in the written contract with the business that provided the personal information; (2) to retain and employ another service provider as a subcontractor; (3) for internal use by the service provider to build or improve the quality of its services, provided that the use does not include building or modifying household or consumer profiles, or cleaning or augmenting data acquired from another source; (4) to detect data security incidents or protect against fraudulent or illegal activity; or (5) for the purposes enumerated in Civil Code section 1798.145(a)(1)-(a)(4). If a service provider receives a request to know or a request to delete from a consumer, the service provider shall either act on behalf of the business in responding to the request or inform the consumer that the request cannot be acted upon because it has been sent to a service provider.

Requests to Opt-Out:  The modified guidance provides that the methods for submitting requests to opt out shall be easy for consumers to execute and shall clearly communicate or signal that the consumer intends to opt out of the sale of personal information. It also provides that a business must respect a user-enabled privacy control but may notify the consumer if there is a conflict between the privacy control setting and a business-specific privacy setting or participation in a financial incentive program.

Requests to Access or Delete Household Information:  The modified guidance clarifies what conditions are required to honor a request to access or delete household information, including that the business must individually verify all members of the household and that each member making the request is currently a member of the household.

Verification:  A business cannot require the consumer to pay a fee for the verification of their request to know or request to delete (e.g., by requiring the consumer to provide a notarized affidavit at their own expense).

Discriminatory Practices: If the business is unable to calculate a good-faith estimate of the value of the consumer’s data or cannot show that the financial incentive or price or service difference is reasonably related to the value of the consumer’s data, that business shall not offer the financial incentive or price or service difference.

See any issues?  Get your comments in by February 25!


Another BIPA class action was filed this week – this time against Google.  Again.  Google has been sued under BIPA before, and for seemingly the same violations as here, i.e., creating “face prints” from photos stored in Google Photos without first obtaining informed written consent.  The Complaint filed this week alleges: “Google created, collected, and stored, in conjunction with its cloud-based ‘Google Photos’ service, millions of ‘face templates’ (or ‘face prints’) – highly detailed geometric maps of the face – from millions of Google Photos users.  Google creates the templates using sophisticated facial recognition technology that extracts and analyzes data from the points and contours of the faces that appear in photos taken on Google Android devices and uploaded to the cloud-based Google Photos service.  Each face template that Google extracts is unique to a particular individual, in the same way that a fingerprint or voiceprint uniquely identifies one and only one person.”  Like those that came before it, this BIPA case turns on Google’s alleged failure to obtain informed written consent before creating, collecting, and storing the “face prints.”  According to the Complaint, the Google Photos app, which comes pre-installed on all Google Android devices, is configured by default to automatically upload all photos taken by the Android device user to the cloud-based Google Photos service.  Google uses the “face prints” it creates to locate and group together photos for organizational purposes.

Time will tell, but it’s possible that this case could trump the recent $550 million Facebook BIPA settlement as the biggest BIPA settlement of all time.

Christopher A. Ott, CIPP/US, has joined Rothwell Figg’s Privacy, Data Protection, and Cybersecurity practice as a partner in Washington, D.C. Mr. Ott is a former Supervisory Counterintelligence and Cyber Counsel to the National Security Division of the U.S. Department of Justice (DOJ) and, most recently, was a partner in private practice at Davis Wright Tremaine LLP, where he advised clients on litigation and business strategy when facing data and privacy issues.

During more than 13 years at the DOJ, Mr. Ott won more than 30 jury trials, conducted hundreds of sensitive investigations, including leading some of the largest white-collar investigations in DOJ history, and won dozens of oral arguments in federal appellate and trial courts throughout the U.S. In particular, he handled the case arising from the hack of Yahoo by Russian intelligence operatives, the largest data breach in history; investigated and charged the largest known computer hacking and securities fraud scheme; and led the longest white-collar trial in the history of the Eastern District of New York, which ended with convictions on all counts. In his most recent role as Supervisory Counterintelligence and Cyber Counsel, Mr. Ott acted as lead counsel on multi-district and international cyber investigations involving state actors. He also served as an Assistant U.S. Attorney in the Southern District of California, an Assistant U.S. Attorney in the Eastern District of New York, and Senior Trial Counsel in the Business and Securities Fraud Unit in the Eastern District of New York.

Mr. Ott entered private practice a few years ago, bringing with him his investigative skills and unique background in successfully litigating complex data security matters. He handles business disputes related to data security, privacy, blockchain, and AI issues. In private practice, Chris has successfully handled civil litigation and appeals, which has further bolstered his ability to handle complex data security and privacy litigation for technology and media clients.

“Chris’ impressive litigation experience in data and privacy perfectly bolsters the advising strength of our current privacy, data protection, and cybersecurity practice,” stated E. Anthony Figg, co-founder and member of the firm. “Combined, we are able to confidently offer clients the full package, from designing and implementing policies, best practices, compliance programs, and incident response plans, to litigation and enforcement, in the unfortunate circumstance that it arises.”

“Experience with litigation in privacy and data is rare, and not only does Chris have the experience, but he has years of successful – and notable – litigation and investigations under his belt. Adding Chris and his knowledge from his government work to our practice is an incredible asset to our clients,” said Steven Lieberman, a shareholder in the firm’s practice.

“I am very excited to join the attorneys at Rothwell Figg, where they live and breathe technology and media litigation. I know that the team of eminent attorneys here will help me to bring an active, problem-solving mindset to data protection and privacy matters where clients could previously only find compliance and counselling advice,” stated Mr. Ott.

To learn more about Mr. Ott, his background, and his practice, please visit: https://www.rothwellfigg.com/professionals/ottc.

Earlier this week we wrote about the NJ AG’s ban of Clearview AI’s facial recognition app, which is marketed to law enforcement agencies to help identify criminal suspects.  We hypothesized about a BIPA suit against Clearview AI and whether, for example, Facebook’s settlement of a BIPA class action would exhaust remedies, at least with respect to Clearview’s biometric information that was scraped from Facebook.  We concluded that there would likely not be exhaustion, based on the plain language of the statute (i.e., each unlawful collection of biometric information/identifiers would require written consent), but we raised the issue of whether Clearview could argue that it is exempt under BIPA because Section 25(e) provides: “Nothing in this Act shall be construed to apply to a contractor, subcontractor, or agent of a State agency or local unit of government when working for that State or local unit of government.”

We may not need to hypothesize any longer as to whether such claims will be filed.  Two days after our blog post, Anthony Hall filed a class action lawsuit against Clearview alleging violations of BIPA.  The Complaint also claims violations of the Illinois Consumer Fraud and Deceptive Practices Act (“ICFA”) and civil conversion under Illinois common law.

We look forward to tracking this suit and seeing how the BIPA claims play out.  This lawsuit is also consistent with our prediction, and that of others, that BIPA lawsuits will be on the rise in 2020, following Facebook’s $550 million BIPA settlement.

“Reasonable” appears several times in the California Consumer Privacy Act (CCPA), and most notably in the section on the private right of action for a data breach resulting from “a business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information.”

But what is “reasonable”?  While the term is not defined in the CCPA, there are a few benchmarks to follow.  In 2016, the California Office of the Attorney General issued a Data Breach Report listing safeguards that the former Attorney General viewed as constituting reasonable security practices, including the set of twenty data security controls published by the Center for Internet Security, multi-factor authentication, and encryption of data in transit.  Compliance with an established information security framework, such as the National Institute of Standards and Technology Cybersecurity Framework or the International Organization for Standardization 27001 series, may also support a finding of reasonable security.  Guidance may also be drawn from enforcement actions in other jurisdictions.  For example, the Federal Trade Commission, which enforces Section 5 of the FTC Act against unfair or deceptive acts or practices, routinely publishes resources and guidance on practical security measures an organization can take.  Additionally, the 2019 amendments to New York’s breach notification law offer certain benchmarks for defining reasonable security requirements.

In a data breach lawsuit, the question of “reasonableness” will likely play out among plaintiffs’ and defendants’ experts before a jury.  Accordingly, particularly in view of the CCPA’s private right of action for a data breach, it is important to have appropriate (and justifiable) data security practices in place that are routinely updated and monitored for compliance, as well as an incident response plan.  At the very least, keeping personal information encrypted or redacted whenever possible is a great first step toward avoiding a civil suit under the CCPA.