A week ago, major press headlines reported that a number of countries, including China, Israel, Singapore, and South Korea, were using surveillance to track COVID-19 within their borders.  The surveillance efforts varied by country.  Techniques included everything from drones, cameras, and smartphone location data to apps (e.g., the “TraceTogether” application being used in Singapore) and tracking devices (e.g., wristbands linked to a smartphone app being used in Hong Kong) to ensure that people were not violating quarantine orders.

Meanwhile, there was a general feeling among many in the United States that such surveillance techniques would be “un-American” and would not fly in this country.

Now, a week later, the government has announced that it is using location data from millions of cell phones in the United States to better understand the movements of Americans during the coronavirus pandemic.  The federal government (through the CDC) and state and local governments have started to receive data from the mobile advertising industry, with the goal of creating a portal comprising data from as many as 500 U.S. cities, which government officials at all levels can use to help plan the epidemic response.

Is this legal?

It depends.  It depends on what the data shows, whether the data may legally be shared, and what it is being used for.  If the data is truly anonymized, may legally be shared, and is being used solely to show trends and locations where people are gathering (without connecting individuals to those locations), then it could very well be legal under current U.S. privacy laws and the privacy laws of most states.  But there are several hiccups.

First, is it possible to truly anonymize the data?

A paper published on March 25, 2013 in Scientific Reports, “Unique in the Crowd: The privacy bounds of human mobility,” authored by Yves-Alexandre de Montjoye, César A. Hidalgo, Michel Verleysen, and Vincent D. Blondel (https://www.nature.com/articles/srep01376), while dated, is on-point.  In this study, the researchers examined fifteen months of human mobility data for one and a half million individuals and found that human mobility traces are so highly unique (four spatio-temporal points were enough to uniquely identify 95% of the individuals studied) that, using outside information, one can link anonymized location data back to specific people (i.e., re-identification).

Another issue with anonymization is that, as technologies continue to improve (consider, for example, the development of quantum computers), truly anonymizing data becomes more and more difficult.  Thus, data that is sufficiently anonymized today may be re-identifiable in ten years.

The limitations on the degree to which location data can be anonymized can be mitigated in other ways.  For example, privacy concerns can be greatly reduced (or eliminated?) if the location data is aggregated in such a manner that an individual’s data cannot reasonably be separated from the aggregated data.
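
To make the aggregation point concrete, the minimal Python sketch below shows one way a data provider might release only group-level counts and suppress any group small enough to risk singling out an individual.  The field names (“device_id,” “cell_area,” “hour”) and the threshold are hypothetical; this is an illustration of the concept, not a description of how any particular provider actually processes its data.

    # Illustrative only: aggregate location records so that only groups meeting
    # a minimum size threshold are ever reported.
    from collections import defaultdict

    MIN_GROUP_SIZE = 50  # hypothetical threshold; a real program would choose this carefully

    def aggregate_counts(records):
        """Count distinct devices per (area, hour) and suppress small groups."""
        groups = defaultdict(set)
        for r in records:
            groups[(r["cell_area"], r["hour"])].add(r["device_id"])
        # Release only counts, never device identifiers, and drop any group small
        # enough to risk re-identifying an individual.
        return {key: len(devices)
                for key, devices in groups.items()
                if len(devices) >= MIN_GROUP_SIZE}

Even a safeguard like this is not a complete answer; as the research discussed above shows, thresholds and aggregation levels must be chosen with re-identification risk in mind.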

Second, are there other legal requirements or restrictions in place regarding that data?  These requirements or restrictions could come from several sources, such as federal or state legislation, a company’s privacy policy, or contractual terms.  For example, a statute may require user consent (opt-in) to share location data.  A privacy policy or contract may guarantee that location data will never be shared unless certain safeguards are in place.  A user may have requested deletion of their personal information, and thus, the entity sharing the information should not even have the data (let alone be sharing it).

Third, there is the question of what the data is being used for.  In a number of countries, surveillance and location data is being used to “police” specific individuals to determine if they are violating quarantine orders.  So far the United States appears to be using the data for a more general purpose—i.e., to assess trends and whether there are gatherings of people at specific locations.  The implication, at least so far, is that nobody is going to go after the individuals who are gathering.  Instead, the data is being aggregated and used merely to help inform orders and for health-planning purposes.

But the question on many people’s minds is not what the data is being used for now, but rather, what the data will be used for down the road.  For example, currently the government does not have access to location data maintained by third parties, like cell providers, ad tech companies, and social media operators.  And in order for the government to obtain that data, it needs a warrant.  See Carpenter v. U.S., 138 S. Ct. 2206 (2018) (holding that the Fourth Amendment of the U.S. Constitution protects privacy interests and expectations of privacy in one’s time-stamped cell-site location information (CSLI), notwithstanding that the information has been shared with a third party (i.e., one’s cellular provider), and thus, for the government to acquire such information, it must obtain a search warrant supported by probable cause).  Is this going to change once the coronavirus pandemic is over, at least with respect to the location data to which the government has already been provided access?  What requires the government to delete the information later?  Or to not use the data for other purposes?  Presumably there are contracts in place between the government and the third party companies that are sharing their location data – where are these contracts and who has a right to see them?  Are we all third party beneficiaries of those contracts, in that we all stand to benefit from the coronavirus response efforts that result?  And if so, to the extent those contracts limit the government’s ability to use the shared data for other purposes, should individuals have a right to later enforce those limitations (as third party beneficiaries)?


By now, most of us have participated in at least one videoconference from the comfort of our homes, be it for a work meeting, a fitness class, or a virtual happy hour with friends across the country. Easing the transition from business-as-usual to social distancing and sheltering-in-place, these video communications platforms and apps have no doubt helped us stay connected and productive as we settle into the new normal of staying indoors indefinitely. But just as more and more people are turning to videoconferencing, more hackers and cybercriminals are exploiting the surge in teleworking, and the privacy practices of videoconferencing platforms are quickly coming under scrutiny, with pressure for increased transparency and data security.

Zoom, one of the most popular and prosperous video platforms, has seen an exponential increase in global active users since the start of the year. The number of users continued to soar after Zoom CEO Eric Yuan announced in early March that he was removing the time limit from video chats in regions affected by the outbreak and was offering free services to K-12 schools around the world. Yet at the height of its popularity, Zoom has become one of the most targeted apps for cyberattacks and cybercrime (including dozens of new fake Zoom-themed domain registrations and phishing websites intended to lure users into providing credit card details and other sensitive data and/or to install malware), ultimately illuminating holes in the platform’s data protection and privacy policies and inviting a firestorm of criticism and challenges.

A rising phenomenon referred to as “zoom-bombing,” where hackers hijack Zoom meetings and use the screen-sharing feature to disseminate disruptive and often obscene or inappropriate material to the meeting attendees, has been particularly concerning to the community. There have been a number of reported Zoom hijackings of virtual conferences hosted by schools, churches, and political groups. Particularly disheartening are the virtual classrooms that have been interrupted with pornographic images and racial slurs, and the religious services attacked with uploads of anti-Semitic propaganda. Many school districts have prohibited educators from using Zoom for distance learning, citing concerns about child data privacy (for a full discussion of how FERPA applies to videoconferencing, stay tuned for our next post). The FBI is making efforts to curtail zoom-bombing, and advises zoom-bombing victims to report such incidents via the FBI’s Internet Crime Complaint Center.

Zoom has also been forced to make changes to its privacy policy and sign-in configuration after it was discovered that the app was sending some analytics data to Facebook (e.g., when the user opened the app, their timezone, city, and device details). Privacy activists noticed that there was nothing in the Zoom privacy policy that addressed this transfer of data. With regard to data security, consumers are also questioning whether Zoom actually implements end-to-end (“E2E”) encryption, as it claims, potentially teeing up a claim for unfair or deceptive trade practices before the FTC. With E2E encryption, the video and audio content can be decrypted only by the meeting participants, such that the service provider does not have the technical ability to listen in on your private meetings and use the content for ad targeting.

Due to the lack of clarity regarding exactly what data Zoom is collecting from its users and what it does with that data, on March 18th, human rights group Access Now published an open letter calling on Zoom to release a transparency report to help users better understand the company’s data protection efforts. On March 30th, New York Attorney General Letitia James sent a similar letter to Zoom, requesting information about its security measures in light of user concerns about data privacy and zoom-bombing. While Zoom stated that it would readily comply with the AG’s request, this will not be the last fire to put out. Just yesterday, a class action (case no. 5:20-cv-02155) was filed against Zoom in the Northern District of California, citing violations of California’s Unfair Competition Law, Consumers Legal Remedies Act, and the CCPA by using inadequate security measures, permitting the unauthorized disclosure of personal data to third parties like Facebook, and failing to provide adequate notice before collecting and using personal data.

While the repercussions of Zoom’s privacy and data security transgressions remain to be seen, users of the videoconferencing platform can take actions to minimize the risks of zoom-bombing and data breaches by disabling certain features of the conference and abiding by the company’s best practices for securing a virtual classroom.

Right now, the world wrestles with a colossal viral outbreak. In response to the crisis, hundreds of millions of people are staying home to reduce their personal risks and to flatten the curve for society overall.

From this mass sheltering, businesses face inverted demand curves so steep and transformative that they confront a similar scenario: close their doors and stay home. However, businesses cannot isolate without consequences, and the consequences can be devastating.

When a business chooses to close its doors, its obligations remain. A crisis-closed restaurant retains its immediate and upcoming obligations to suppliers and employees. Businesses throughout all sectors will face challenges similar to those posed to this hypothetical restaurant: (1) pending and accruing bills from suppliers and contractors; (2) pending and accruing employee costs; (3) pending and future real estate costs; as well as (4) ongoing credit costs.

Because no further income is being generated for the near future, someone from that cost matrix will likely not get paid. For many businesses, these types of catastrophic events have an intuitive fix: This is why we invest in insurance.

However, a different type of virality — large-scale cyberinfections — reveals why this type of business hedge is rife with litigation risk. Sophisticated cyberactors and nation states exploit cybervulnerabilities to steal money, corrupt information and otherwise covertly disrupt business services. In many instances, the risks include the shutdown of entire industries. All of this disruption is, in a word, expensive.

Cyberinsurance could provide one avenue toward the reduction of these costs, just as existing insurance coverage would hopefully cover the current COVID-19 crisis. However, recent history suggests that rather than result in insurance payouts, gigantic cyberinfections lead to equally enormous litigation.

NotPetya and Digital Attacks by Nation States

Businesses hoping to understand their COVID-19 litigation risks can learn from recent complicated privacy and data litigation. Often, this litigation, as with COVID-19, involves massive disruptions to industries. Indeed, certain malware attacks have halted entire industries and crippled supply chains, which causes problems that should be familiar to all COVID-19-affected businesses.

Insurance policies typically exclude coverage for extraordinary events, including but not limited to invasion, revolution and acts of terrorism. Theoretically, a state-sponsored hack could be considered either an attack, consistent with the much-maligned Gerasimov doctrine,[1] or a criminal act. Given the amount of money involved, it appears inevitable that insurance carriers will invoke the extraordinary event exclusions. Indeed, they already have.

NotPetya was a ransomware attack that, beginning in 2017, caused more than $10 billion in global damages. In February 2018, the U.S. and other Western nations issued coordinated statements publicly attributing the NotPetya malware to the Russian government. Nontraditional warfare targets from many industries throughout the world suffered enormous NotPetya-related losses.

The NotPetya cybervirus victims were diverse and often were not traditional warfare targets. For example, Mondelez International Inc., a global snack company, claimed that it suffered more than $100 million in damages to its computers and disrupted supply chains.[2] Mondelez had purchased cyberinsurance. This foresight appears to have provided little comfort, however.

In Mondelez International Inc. v. Zurich American Insurance Co., the plaintiff asked an Illinois state court to determine whether the hostile or warlike action exception in its Zurich cyberinsurance policy affected its claim for NotPetya-related losses.[3] Apparently, Zurich was reluctant to pay because experts attributed NotPetya to the Russian government. Thus, the very sophistication of the cybercriminal — a nation state hacker — actually counseled against invocation of the insurance coverage.

How Outbreaks of War and Viruses Similarly Reference Principles of Impracticability and Fairness

Regardless of whether it ultimately succeeds, Zurich’s theory, which combines the ancient principles of war clauses with bleeding-edge technology and turbulent international politics, has much to teach us. The COVID-19 crisis provides a similarly potent blend of complex disciplines.

Most contracts contain a force majeure provision or somehow internalize the concept of impracticability.[4] These principles incorporate centuries of business practices and hundreds of cases but all orbit around the concept that some occurrences are so big and so unlikely that it would be unfair to enforce a contract.

Thus, while the war interpretation is unlikely to appear in post-COVID-19 litigation, the core struggle of emergent impracticability remains the same. In either case, in the short term, those disputes will be resolved by pitched litigation.

The litigation will be intense specifically because the stakes will be so high. Outbreaks of war and viruses both involve complete shutdowns of industries. Thus, the costs of these crises are astronomical. They are also notable because they involve responses to very quickly developing crises; indeed, the growth curve of both threats can rightly be described as viral.

Also, their core mechanisms are eerily similar: Both involve the unwanted injection of code (either the genetic payload of an infectious agent or the malignant delivery of computer code) into a healthy system (either a living cell or an otherwise functional computer system). It is therefore unsurprising that the two threats bear so many litigation similarities.

Courts Could First Gravitate to the Simplest Interpretation

The problems caused by these events are too expensive and complex to submit to easy fixes. The courts will likely face these issues before contractual, regulatory or legislative fixes can be addressed. The cyberthreat landscape is much broader and deeper than NotPetya, which cost billions of dollars by itself.

When it comes to COVID-19, the losses appear to be in the trillions of dollars. Accordingly, courts will be faced with high-stakes disputes and little in the way of legislative or regulatory guidance. Still, lessons can be learned from the high-stakes cybercases.

Typically, these data security and privacy disputes present courts with misleadingly straightforward questions:

  1. Was a state-sponsored cyberattack directed by a nation state?
  2. Was COVID-19 an unavoidable force majeure event?

These questions superficially appear to present binary choices, which is to say they are simple yes or no propositions. History teaches that this is a trap.

Based upon a surface-level analysis, some case law suggests that cyberwar must be military in character. NotPetya escaped into the cyberwilderness and wreaked massive damages of dubious military value. Approaching from the other direction, and reaching the opposite outcome, a court may look at NotPetya and determine that the action was definitely an act of war because it constituted an act of aggression by a sovereign state.

Courts interpreting the impracticability of contracts due to COVID-19 shutdowns will also be faced with simple binary choices. All superficial approaches, however, could result in bad outcomes.

The Integrative Path Forward in Interpreting Viral Impracticability in Contracts

In the absence of any specifically negotiated definitions for impracticability, these viral cyberdisputes involve three inquiries: (1) the factual details of the mechanics of the event; (2) the evidentiary reliability of the event’s attribution to governmental actors;[5] and (3) how the details of that attribution affect the impracticability of the contract, if at all.

This nonexclusive list, which was derived from data security and privacy litigation such as that surrounding NotPetya, provides a framework for critically analyzing risk in the upcoming COVID-19 disputes.

The first question flows from the pragmatic concern that the lines between disaster, war and misfortune are frustratingly (and often intentionally) blurry. In the 2014 Yahoo! Inc. breach, criminal hackers were working at the behest of Russian intelligence to perform intelligence gathering while also generating criminal profits.

Many of the Chinese hacks, such as the steel and aerospace industry hacking campaigns, were undertaken by Chinese military and intelligence officers to fraudulently aid Chinese companies in Western markets. Although digital, these attacks were neither purely criminal nor purely acts of war.

Similarly, the rollout of COVID-19 shutdowns was not centrally coordinated via the federal government but rather represented the accretion of hundreds of state, local, business and personal crisis decisions. To properly navigate these facts, businesses will need to prepare to marshal the broadest-based authorities possible to paint a complex constellation of events as a straight line.

The second question involves the provenance of the attribution. Courts have struggled to differentiate between consensus attribution, based upon verifiable facts, and mere groupthink. The quality of this attribution necessarily varies in each event and depends upon factors as wide ranging as the quality of the science, political realities and business needs.

Many cases required full-fledged evidentiary hearings. Other cases solely involved judicial notice of significant relevant facts. Any evidentiary option will involve high-level litigation skills to communicate the finer technical details against a broader sociopolitical backdrop.

The third question, impracticability, underlines how the pragmatic question is never as narrow as the nature of warfare or pandemic. Uniform Commercial Code Section 2-615 excuses commercial performance where performance has been made impracticable by the occurrence of a contingency the non-occurrence of which was a basic assumption on which the contract was made.

In past cybercases, the courts have had to wrestle with core issues about the expectations of individual contracts. What are the expectations of a restaurant supply contract? Or an employment contract? Or a long-term lease? No matter the specific contractual context, answering this third question has required a highly fact-intensive inquiry that will build upon the answers to the first two questions.

The correct answer to all three questions involves digging into the details about what happened and why. The best manner in which to persuasively present these facts involves an integrative approach to law and science.

Next Steps

Viral litigation in light of cyber or COVID-19 events requires a broad base of litigation skills. The shape of these presentations, whether cyber or purely COVID-19, will be eerily similar.

To be truly persuasive, companies should prepare to present a deep, holistic set of facts surrounding: the external history of their closure; the internal audit trail of their corporate decision making; technical descriptions of how the complex event unfolded against the backdrop of their decisions; and dynamic, but unadorned, courtroom presentations.

This similarity should prove comforting to businesses; these are threats and issues that have been met and addressed by businesses in the past. Learning the lessons of those past viral threats can help a business stay ahead of the next threats looming on the horizon. If you can prepare, you can internalize the risks and prepare to fight smartly.

[1] https://www.nytimes.com/2019/03/02/world/europe/russia-hybrid-war-gerasimov.html

[2] https://www.nytimes.com/2019/04/15/technology/cyberinsurance-notpetya-attack.html

[3] https://www.scribd.com/document/397265756/Mondelez-Zurich

[4] See generally U.C.C. Section 2-615.

[5] As discussed in this article, “attribution” for COVID-19 does not mean that a government caused the virus itself but rather whether a government caused the associated shutdown. As the shutdown orders unfold in real time from the various state and local authorities as this article is written, it seems that attribution of that type will not prove simple.


This article was originally published in Law360’s Expert Analysis section on March 30, 2020. To read the article on Law360’s site, please visit: https://www.law360.com/articles/1257624.

There is no doubt that we are generating, processing, and transferring more data RIGHT NOW than we ever have before.  It is almost certain that our data generation, processing, and transmission is many, many times today what it was at this same time last week—not to mention last month—because of “work-around” efforts due to the novel coronavirus.

While companies compiling this data had a pretty good sense of who you were before, just think what they know about you now.  Every minute you work, you are online.  Every school lesson your child learns is transmitted online.  Extracurricular activities are sent online – karate lessons; speech therapy lessons; dance lessons; music lessons; you name it.  Food shopping—online.  Other shopping for needed items—online.  Prescription refill—online.  You want to get together with friends and play a game?  You set up a “Zoom” chat and play Trivial Pursuit over your iPhones.  You’re bored?  You browse online.  You read about the news.  You read about your hobbies.  You read about your profession.  You shop.  You text with your friends.

Meanwhile, your home—filled with sensors about your everyday habits—went from recording you before and after work, and on weekends, to having you around 24/7.  Your thermostat, your home security cameras, your refrigerator, your Alexa, your television, your doorbell, and the list goes on—they are all working overtime, compiling way more data today than they ever have before.  And your Apple watch/Fitbit will tell you whether—in view of all of these changes—you are standing up, moving, and sleeping more or less than you did before all of these changes started taking place.  Your phone is likely reporting that your “screentime” is up from prior weeks.

And yet, while the data companies are obtaining (and storing and processing) about you is increasing exponentially, at the same time, a question is being asked whether enforcement of the California Consumer Privacy Act (CCPA) should be delayed.

CCPA took effect on January 1, 2020.  However, California’s attorney general was prohibited from bringing enforcement actions until July 1, 2020.  As a result, a number of companies assumed some risk and delayed compliance measures, with the goal of complying by July 1 instead of January 1, 2020.

Now companies are asking California’s attorney general to delay enforcement of CCPA even longer because of the novel coronavirus, which causes the disease known as COVID-19.  In a letter sent on Tuesday, March 17, 2020, the California Chamber of Commerce, United Parcel Service (UPS), the Internet Coalition, the Association of National Advertisers and 30 others requested that the CCPA enforcement deadline be pushed back to January 2, 2021.

On one hand, it makes total sense that enforcement should be delayed.  Many companies have instituted work-from-home measures to limit community spread of the COVID-19 disease, and it would be difficult for these companies to come into compliance when there are no (or limited) on-site staff to build and test the new systems and processes that are implemented to comply with CCPA.  Further, companies have a lot of other financial, business, and personnel issues to deal with right now.

On the other hand, would the delay of enforcement of CCPA signal that personal privacy, data protection, and cybersecurity issues are less important than everything else?  Yes, companies have a lot of other financial, business and personnel issues to deal with, but they are likely also collecting personal information at the same time, and as long as they are collecting personal information, shouldn’t they also comply with the laws regarding that collection of personal information?

One potential solution is a “split the baby” approach.  Perhaps there should be a grace period on enforcement of the parts of CCPA that may be overly burdensome for companies to implement in the current situation, such as data subject access requests, but no delay in enforcement of other parts of CCPA that are arguably more critical in this period of increased data (and increased “bad actors” trying to improperly access/use that data), such as the CCPA’s “reasonable security procedures” requirement.

At the end of the day, regardless of whether California’s attorney general decides to delay enforcement of CCPA, it is in every company’s best interest to ensure they are taking “reasonable security” measures.  This includes things like ensuring personal information data is encrypted or redacted, ensuring your network is secure (such as, inter alia, two-factor (or better) authentication for end-users, anti-virus protection, and ensuring software is routinely updated and patched), ensuring your document retention policy is not overbroad, and ensuring good email security (including training employees to recognize email hoaxes).
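
By way of illustration of the first of these measures, the short Python sketch below shows one common pattern for encrypting a piece of personal information at rest using the open-source “cryptography” package.  The field and the key handling here are hypothetical and deliberately simplified; real systems require careful key management (e.g., a secrets manager) and broader controls.

    # Illustrative only: symmetric encryption of a single personal-data field
    # using the open-source "cryptography" package (Fernet).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, load this from secure storage
    cipher = Fernet(key)

    ssn_plaintext = b"123-45-6789"       # hypothetical personal information
    ssn_encrypted = cipher.encrypt(ssn_plaintext)   # store this value, not the plaintext
    assert cipher.decrypt(ssn_encrypted) == ssn_plaintext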

This will protect your customers, your employees, and your company.

And if CCPA applies to you, you are legally obligated to take such “reasonable security” measures, as CCPA provides individuals with the right to obtain statutory damages of $100 to $750 per consumer per incident (or actual damages, whichever is greater) for certain data breaches that result from the failure to maintain “reasonable security procedures.”
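
For illustration only, at those statutory figures a breach affecting 10,000 California consumers could translate to exposure of roughly $1 million (10,000 × $100) to $7.5 million (10,000 × $750), before any actual damages are even considered.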

More people than ever before are teleworking.  Schools are closed.  Events are cancelled.  Live sports are cancelled.  Restaurants in many areas are closed.  So, what are a lot of people doing?  They are reading the news.  And a lot of them are looking for the latest information on the COVID-19 outbreak.

And scammers know this.  So in this time of heightened vulnerability, please be safe…not just physically, but also technologically.

For example, there is a dangerous coronavirus map link/website being circulated that pretends to offer information on the spread of the virus but in reality installs malware on victims’ computers.  The malware, AZORult, steals user information, largely credential-based (including browsing history, cookies, IDs/passwords, and cryptocurrency information), and uses it to target other sites, contacts, financial platforms, and enterprises.  AZORult also acts as a downloader of other malware.  A variant of this malware was also able to create a new, hidden administrator account on a machine and set a registry key to establish a Remote Desktop Protocol (RDP) connection.

There is also a dangerous application that promises to provide users with a coronavirus map tracker and statistical information, as well as a real-time screen-lock alert when a known COVID-19 patient is nearby, but that instead delivers “CovidLock” ransomware, which locks users’ devices and demands a $100 payment in Bitcoin within 48 hours.  It threatens users who do not pay with total erasure of the device’s data.

There are many, many other scams, as well.  Everything from fake emails, whose attachments install a Trojan downloader on your computer when opened (exposing your computer to software that steals your credentials and data, and spies on you), to malware-laced texts that install malware on your iPhone when you click on a link, stealing your information and credentials.  Also, because so many interactions will now be virtual, expect a bloom in Business Email Compromise and “man-in-the-middle” scams.

The best way to avoid COVID-scammers is to exercise rigorous information hygiene.  Only download applications from the Apple App Store or Google Play Store; visit only known, official websites; do not click on links in suspicious texts and emails; and do not download files from suspicious emails.  Wherever possible, follow existing payment protocols and require multiple confirmations from known persons by telephone.  Also, if something looks suspicious, search the Internet based on a description of what you are seeing (oftentimes you will find information about a scam that way), or, if it purports to be from a legitimate source, call the source and ask about it before you click.

Stay safe.

Coronavirus disease, which is also known as COVID-19, poses a number of significant challenges to business organizations.  Many businesses plan to address these challenges by encouraging or even requiring remote work by implementing telecommuting or work-from-home (WFH) programs. Organizations as disparate as hotel chains, major universities, and even the Internet Corporation for Assigned Names and Numbers (ICANN), which administers many of the domains that make up the web, are all de-centralizing their activities and requiring people to work remotely. For many organizations, this will be the first time that they have deployed widespread telecommuting or WFH programs, and it will present a significant technical challenge. However, in the rush to make such remote work programs operate at scale, organizations should also consider the significant privacy, data protection, and cybersecurity risks that they will face.

Baked-in online vulnerabilities could expose data and communications

The primary remote work privacy, data protection, and cybersecurity risks flow from the fact that the organization’s entire data and communications infrastructure is now being accessed remotely by numerous people.  On a normal workday, even if most of one’s proprietary data is being hosted in the cloud, this data is usually accessed and utilized within the secure confines of the business itself.  In the crisis telecommuting circumstances in which we find ourselves, this same information is being pulled in hundreds or thousands of directions.

At each worker’s home, the organization’s data security will only be as strong as the worker’s own setup.  The more individualized, personal networks that are rushed into service by one’s employees, the more likely it is that some of those networks will be unsecure.  For example, the security settings on an employee’s home wifi network may be non-existent, or the employee may piggyback onto a public wifi system.  Any of these unsecure options could enable a third party who has compromised that network to gain access to the organization’s supposedly bullet-proof network and expose data to opportunistic cyber eavesdroppers who exfiltrate data, steal passwords, and engage in much more mischief.  These strained, varied security circumstances increase the conceptual surface area of an organization and concomitantly increase the risks of accidental breach.  Moreover, these risks extend beyond data at rest and may extend to businesses’ internal communications.

In a conventional office, workers communicate in onsite meetings and brainstorm in the break room.  If an organization utilizes a work-focused electronic collaboration platform (such as Slack or Microsoft Teams), that messaging will generally be viewed only within the physical and conceptual space of the company itself.  In a remote work world, all of those communications will be read and distributed outside of the business.  Suddenly, the types of candid, unpolished conversations that make work operate faster may be at risk of exposure.

Unfamiliar new online tools could also confound privacy, data protection, and cybersecurity efforts

Remote workers use a variety of software and hardware that can differ from the standardized equipment they use when physically located in the office.  In many cases, this means there are no set policies and procedures for using the new equipment.  Even where there are well-established policies, the newly-minted remote workers have no experience applying those policies, if they have ever read them at all.  And even if remote workers have consumed and understood all of the relevant policies, they are simply not accustomed to using the equipment.  This unfamiliarity can lead to enormous, if innocent, operational security failures.

Mixing business with pleasure can multiply security concerns

Remote workers often utilize the same software and hardware to manage their personal and work lives.  This can multiply the risks posed by the workers’ personal online lives.  All of a sudden, online dating and clicking on memes can carry heightened operational data risks.  Moreover, while the company could implement patches and updates remotely, one will likely have to rely upon the remote worker to enact necessary security tweaks and upgrades.  Remote workers may not focus on enacting these security tweaks.  Sometimes this failure may result from simple inattentiveness, but other times the remote worker may feel their personal privacy is threatened by the process.

Different expectations of privacy between work and home

Remote work can blur the lines between home and work privacy.  Employees generally have more limited privacy rights “at the office” – in workspaces that belong to the employer – than in their private homes.  These workplace limitations on privacy may also extend to employer owned computers and devices. Companies routinely monitor or search work computers and phones, consistent with their policies.  Remote workers may not appreciate that they are bringing their employer and its policies home with them.  Similarly, employers’ policies and practices may implicate the privacy interests of its employees—as well as their families and cohabitants—when newly applied in remote personal spaces.

Privacy, data protection, and cybersecurity litigation risks

The worst case scenario will involve novel litigation about WFH privacy, data protection, and cybersecurity risks.  While that potential litigation is worth contemplating, the next pragmatic step for organizations involves actually minimizing those risks.

Best practices and hacks

Remote workers must take the following steps to minimize privacy and cybersecurity risks:

  • First, always keep work and leisure separate. Where possible, use separate machines. It would make a lot of sense, for example, to do all of one’s social media and web browsing on a mobile device and all of one’s work on a laptop. Often people use the same device for both, which can work but requires intellectual discipline, training, and clear employer policies. This work/play separation should also be true for passwords and logins. Always keep personal passwords separate and distinct from work passwords.
  • Second, do not use unsecured wifi or Bluetooth. A public wifi network or Bluetooth connection is inherently less secure, because you don’t know who set it up or who else is connecting to it. In a perfect world, one should never use public wifi. However, for the times that’s not practical, you can still limit the potential damage by: (1) sticking to well-known public networks that are more likely to adhere to certain standards, (2) looking for password-protected networks, (3) limiting your data use across those networks, (4) sticking to “HTTPS” connections rather than unsecure HTTP connections, and (5) utilizing a VPN, which will encrypt your data traffic.
  • Third, take steps to minimize the impact of a stolen or misplaced laptop. Are there remote features that can remotely brick the device or render the data unreadable? Is the cloud being used strategically to reduce the impact of a lost device?
  • Fourth, engage in regular information hygiene. This can vary by industry but, for example, after completing a project, ensure a client’s data has been encrypted, backed up to a secure location, and completely erased from its local savepoint. Consider a policy that client data cannot be sent to or from a mobile device (including an offsite laptop) unless it is encrypted.
  • Fifth, religiously install all security updates. To ensure this, companies should consider requiring that devices be brought into the office periodically for security hygiene “check-ups.”

From an organizational standpoint, there are plenty of best practices for security and remote working:

  • Develop and scrutinize your remote working policies and procedures, including how to utilize specific types of devices. One should also consider developing separate policies for remote work versus employer-owned workspaces.
  • Only allow approved devices to connect to company networks.
  • Consider encryption policies for all data transfers.
  • Evaluate Mobile Device Management (MDM) and Mobile Application Management (MAM) platforms that can help to secure remote workers’ data and enforce the company’s security policies.
  • Scrutinize your access protocols. Best practices dictate use of two-factor authentication technology for accessing the organization’s networks, electronic mail, and data.
  • Consider drafting physical security protocols for remote workers. The fact is that many workers have relied upon the business’s physical security safeguards but now may place company assets at risk.
  • In light of the potential for extended office closures, the partially abandoned offices may now be at a new kind of risk. Organizations should ensure that proper security measures and access controls are in place to secure physical and information technology assets for the largely empty offices.

The Federal Trade Commission (“FTC”) released its 2019 Privacy and Data Security Update, highlighting its enforcement actions in 2019 directed to the protection of consumer privacy and data security.

In the roundup of 2019 Privacy Cases, the Update highlights the FTC’s and the Department of Justice’s record $5 billion penalty imposed on Facebook, the largest ever imposed on any company for violating consumers’ privacy, concerning allegations that Facebook violated the FTC’s 2012 order against the company.  Other notable privacy cases include the enforcement action against data analytics company Cambridge Analytica, its former CEO, Alexander Nix, and app developer Aleksandr Kogan, alleging that the defendants used false and deceptive tactics to harvest personal information from millions of Facebook users for voter profiling and targeting, as well as the FTC’s first action against a developer of a “stalking app,” Retina-X, alleging that the company’s practices enabled use of its apps for stalking and other illegitimate purposes.  The FTC’s enforcement activities spanned several other alleged abuses concerning spam, deception as to the use of personal emails, purchase and collection of counterfeit and phantom debts, bogus credit repair services and student loan debt relief schemes, and deceptive lead generators.

With respect to the 2019 Data Security and Identity Theft Cases, the Update highlights the settlement with Equifax, totaling between $575 million and $700 million, as well as several enforcement actions alleging failure to store sensitive personal information, including Social Security numbers, in an encrypted format and failure to use reasonable, low-cost, and readily available security protections to safeguard clients’ personal information.

The Update also highlights two actions involving violations of the Gramm-Leach-Bliley Safeguards rule; thirteen actions involving false claims of participation in Privacy Shield; four actions involving violations of the Children’s Online Privacy Protection Act, including a $170 million judgment against Google and its subsidiary YouTube, the largest civil penalty amount under COPPA; and eight actions enforcing the Do Not Call provisions against telemarketers.

As reflected in the Update, 2019 was a banner year for the FTC, with its record-setting $5 billion penalty against Facebook and the $170 million COPPA fine against Google and YouTube.  Will 2020 top it?

The European Commission recently presented strategies for data and Artificial Intelligence (AI) focused on promoting excellence in AI and building trust.  The Commission’s White Paper, “On Artificial Intelligence – A European approach to excellence and trust,” addresses the balance between promoting AI and regulating its risks.  “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.”  In addition to the benefits AI affords to individuals, the White Paper notes the significant roles AI will have at a societal level, including in achieving Europe’s Sustainable Development Goals and supporting the democratic process and social rights.  “Over the past three years, EU funding for research and innovation for AI has risen to €1.5 billion, i.e. a 70% increase compared to the previous period” (n.b. as compared to €12.1 billion in North America and €6.5 billion in Asia in 2016).  The White Paper also addresses recent advances in quantum computing and how Europe can be at the forefront of this technology.

“An Ecosystem of Excellence.”  The White Paper outlines several areas for building an ecosystem of excellence, including (a) working with member states, including revising the 2018 Coordinated Plan to foster the development and use of AI in Europe, to be adopted by the end of 2020; (b) focusing the efforts of the research and innovation community through the creation of excellence and testing centers; (c) building skills, including establishing and supporting, through the advanced skills pillar of the Digital Europe Programme, networks of leading universities and higher education institutes to attract the best professors and scientists and offer world-leading masters programs in AI; (d) focusing on SMEs, including ensuring that at least one digital innovation hub per member state has a high degree of specialization in AI and a pilot scheme of €100 million to provide equity financing for innovative developments in AI; (e) partnering with the private sector; (f) promoting the adoption of AI by the public sector; (g) securing access to data and computing infrastructures; and (h) cooperating with international players.

“An Ecosystem of Trust: Regulatory Framework for AI.”  “The main risks related to the use of AI concern the application of rules designed to protect fundamental rights (including personal data and privacy protection and non-discrimination), as well as safety and liability-related issues.”  The White Paper notes that developers and deployers of AI are already subject to European legislation on fundamental rights (data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules, but that certain features of AI may make the application and enforcement of such legislation more difficult.  With respect to a new regulatory framework, the White Paper proposes a “risk-based approach” and sets forth two cumulative criteria for determining whether an AI application is “high-risk”: (a) the AI application is employed in a sector where significant risks can be expected to occur (e.g., healthcare, transport, energy, and parts of the public sector); and (b) the AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise (e.g., uses that produce legal or similarly significant effects for the rights of an individual or a company, that pose risk of injury, death, or significant damage, or that produce effects that cannot reasonably be avoided by individuals or legal entities).

Does Europe’s risk-based approach adequately balance regulation v. AI innovation?  It is a difficult question that various jurisdictions have been grappling with globally.  The concept of a risk-based approach, however, should sound familiar to the private sector, particularly to large enterprises that have already embedded AI and other emerging technologies into their internal risk management framework.  For smaller scale experiments, perhaps there is room for more regulatory flexibility in encouraging innovation in AI.  For example, Arizona was early to welcome autonomous vehicle testing on its roads in 2015, and is experimenting with a “regulatory sandbox” for FinTech.

Last week, 23andMe, the direct-to-consumer genetic testing service, announced its strategic license agreement with Almirall, a leading global pharmaceutical company, for the rights to a bispecific monoclonal antibody designed to block IL-36 cytokines, which are implicated in autoimmune and inflammatory diseases. The antibody was discovered by 23andMe’s Therapeutics team, using the genetic information from more than 8 million personal testing kits from customers who consented to use of their data for genetic research. While the finances surrounding this particular license agreement are currently unknown, 23andMe’s $300M deal with GlaxoSmithKline in 2018 for a four-year drug research and development partnership gives us some idea of the potential profit from this joint venture.

Putting aside the obvious health and research benefits from such a massive data pool, the use of millions of consumer saliva samples naturally raises the ethics question of who should be profiting from what is arguably your most intimate and valuable data – your DNA. Despite the lucrative partnerships with big pharma, consumers who provide their genetic blueprint to testing services aren’t getting a piece of the pie. A review of the Terms of Service for 23andMe, for instance, along with those from other industry giants like Ancestry.com and FamilyTreeDNA, reveals that consenting users waive all potential claims to a share of the profits arising from the research or commercial products that may be developed.[1]

A larger issue, however, relates to the privacy and data security concerns surrounding these large databases of genetic data. In addition to finding out that you have a fourth cousin living just down the road or that your Italian grandmother is anything but, your DNA may also be used to identify you specifically, more so than your name or even your SSN. And depending on who has access to these databases – be it big pharma, insurance companies, hackers, law enforcement, or otherwise – even if the data is aggregated or de-identified, because your genetic makeup is a direct personal identifier, there is always the potential for your identity and health data to be compromised or used against you. For instance, California’s notorious Golden State Killer was caught and arrested in 2018 after investigators matched a DNA sample from the crime scene to the results of a genetic testing kit uploaded to a public genealogy website by a relative of his. Notably, the Contra Costa County District Attorney’s office was able to obtain that genetic matching without any subpoena, warrant, or similar criminal legal process, which raises additional privacy and security concerns. And last December, the Pentagon issued a memo advising members of the military not to use consumer DNA kits, citing security risks.

Consumers seem to be getting wind of these privacy concerns, as evidenced by declining sales of genetic testing kits and by genealogy service providers, including 23andMe, struggling to stay afloat.

[1] This arrangement tends to evoke an ethical inquiry similar to that surrounding Henrietta Lacks, who received no compensation for the cells taken from her without her knowledge or consent, which have been used for decades in the development of countless vaccines, medical research, and drug formulations.

The number of actions to enforce the European Union’s General Data Protection Regulation (GDPR) against a wide range of companies continues to rise.  Germany, a country where privacy enjoys strong legal protection, is establishing itself as a favorite jurisdiction for enforcement of the GDPR.  And, not surprisingly, Facebook is one of the companies in the crosshairs.

Last February, Germany’s Federal Cartel Office held that Facebook’s practice of combining data it collects across its suite of products, which include WhatsApp and Instagram, is anticompetitive, and ordered Facebook to stop.  That ruling was later overturned on appeal.  Last month (Jan. 2020), a state court in Berlin, assessing Facebook’s terms of service, determined that Facebook had violated the GDPR’s requirement that “informed consent” by a data subject be given before his or her personal information is collected.

Interestingly, this latest action against Facebook was brought by a consumer group, the Federation of German Consumer Organizations.  While this regional interpretation of the GDPR’s provisions regarding informed consent should be considered, the real impact may be reliance on this decision by consumer groups and other organizations to establish standing to seek legal enforcement of the GDPR without the involvement of an injured or affected data subject.

We will see how this state court ruling fares on review in Germany, and how, ultimately, other jurisdictions in the EU come out on this important issue of standing.  The standing provision of the CCPA has already been a challenge and the subject of debate.  As more states in the U.S. craft and pass privacy legislation, we can expect much debate and, most likely, litigation around this important issue.