Alabama, North Dakota, and South Carolina are the first states to announce that they will use the Apple-Google exposure notification technology for their state COVID-19 tracking apps.  Several countries in Europe have already agreed to use the technology.

The Apple-Google technology uses Bluetooth to aid in COVID-19 exposure notification.  A user’s device with the technology enabled will send out a Bluetooth beacon that includes a random Bluetooth identifier associated with the user of the device.  The identifier changes every 10-20 minutes.  When one user’s device is within range of another user’s device, each device will receive the other device’s beacon and store it locally.  If a person tests positive for COVID-19, that person may upload his or her diagnosis using a state-run app, and with his or her consent, the relevant device beacon(s) will be added to a positive diagnosis list.  At least once per day, each device will check the list of beacons it has recorded against the positive diagnosis list.  If there is a match, the user may be notified that he or she has come into contact with an individual who tested positive for COVID-19, and the system will share the day the contact occurred, how long it lasted, and the Bluetooth signal strength (e.g., proximity) of that contact.  More details on the technology can be found here.
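
To make the matching mechanics described above concrete, the following is a minimal, hypothetical Python sketch of the on-device flow: rotating random identifiers, local storage of observed beacons, and a periodic check against the positive diagnosis list.  The class and method names are our own illustration, not the actual Apple-Google API.

import secrets
from datetime import datetime

class Device:
    """Toy model of a phone participating in Bluetooth exposure notification."""

    def __init__(self):
        self.own_identifiers = []    # (identifier, timestamp) pairs this device has broadcast
        self.observed_beacons = []   # (identifier, timestamp) pairs heard from nearby devices

    def broadcast(self, now):
        # The real system rotates the random identifier roughly every 10-20 minutes.
        identifier = secrets.token_bytes(16)
        self.own_identifiers.append((identifier, now))
        return identifier

    def receive(self, identifier, now):
        # Store the beacon locally; nothing leaves the device at this point.
        self.observed_beacons.append((identifier, now))

    def check_exposure(self, positive_diagnosis_list):
        # At least once per day, compare stored beacons against the published list.
        return [(ident, ts) for ident, ts in self.observed_beacons
                if ident in positive_diagnosis_list]

# Two devices come within Bluetooth range and exchange beacons.
alice, bob = Device(), Device()
now = datetime.utcnow()
bob.receive(alice.broadcast(now), now)

# Alice tests positive and, with her consent, her identifiers go on the positive diagnosis list.
positive_list = {ident for ident, _ in alice.own_identifiers}

# Bob's device finds a match and could notify him of the day and duration of the contact.
print(bob.check_exposure(positive_list))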

On May 20, 2020, Apple and Google announced the launch of their exposure notification technology to public health agencies around the world, including a new software update that enables the Bluetooth technology.  Not surprisingly, some public health authorities are saying that the technology is too restrictive because the decentralized processing of data on devices prevents aggregate analysis of infection hot spots and rates, and the technology excludes location data.  On the flip side, privacy advocates have raised privacy and civil liberty concerns about contact tracing and government surveillance generally in the wake of the pandemic.  In the Frequently Asked Questions about the exposure notification technology, Apple and Google pledged that there will be no monetization of this project by Apple or Google, and that the system will only be used for contact tracing by public health authorities’ apps.

Have Apple and Google struck an appropriate balance between efficacy and civil liberties?  Stay tuned.

ADT LLC, a security company that offers customers, inter alia, video monitoring of their homes, has been sued in Florida federal court after an employee accessed and viewed footage from the in-home security cameras of 220 customers over the course of several years.  The rogue employee was a technician for the defendant in charge of installing security systems in customers’ homes in the Dallas-Fort Worth metro area.  He apparently added his own personal email address to customers’ accounts, which allowed him to access the accounts through the ADT application and internet portal.  The lawsuits against ADT claim breach of contract, negligence, intrusion upon seclusion, negligent hiring, and intentional infliction of emotional distress.

While this case concerns actions that took place over several years and is not directly related to the COVID-19 pandemic, it should have executives scrutinizing their own companies’ policies and procedures with regard to customer information, including image and video data.  Especially with so many employees working from home and accessing sensitive information within the confines of their own homes, it is more important now than ever to ensure that adequate safeguards are in place to protect customer information and to protect the company from potential liability.  For example, make sure that only employees who need access to sensitive information have access; review programs and procedures for loopholes; and review corporate policies governing employee behavior.
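
For example, a routine automated review of account permissions might surface this kind of problem early.  Below is a minimal, hypothetical Python sketch of such a review, assuming a made-up corporate email domain and account structure; it simply flags customer accounts that list an employee address as an authorized viewer.

EMPLOYEE_DOMAIN = "@example-security-co.com"  # hypothetical corporate email domain

customer_accounts = [
    {"customer": "A. Smith", "authorized_viewers": ["a.smith@gmail.com"]},
    {"customer": "B. Jones", "authorized_viewers": ["b.jones@yahoo.com",
                                                    "tech1@example-security-co.com"]},
]

def flag_suspicious_access(accounts, employee_domain):
    """Return accounts where an employee address appears as an authorized viewer."""
    flagged = []
    for account in accounts:
        employee_viewers = [email for email in account["authorized_viewers"]
                            if email.endswith(employee_domain)]
        if employee_viewers:
            flagged.append((account["customer"], employee_viewers))
    return flagged

for customer, emails in flag_suspicious_access(customer_accounts, EMPLOYEE_DOMAIN):
    print(f"Review needed: {customer}'s account grants access to {emails}")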

A plaintiff recently lost her battle with Shutterfly in the Northern District of Illinois when the Court ruled that Shutterfly’s arbitration clause was binding, notwithstanding Shutterfly’s unilateral amendments to its Terms of Use, including adding an arbitration provision after plaintiff clicked “Accept.”  The case is now stayed pending the outcome of arbitration.

The plaintiff was a Shutterfly user who had clicked “Accept” upon registering in 2014, thereby agreeing to Shutterfly’s then-existing Terms of Use (which did not include an arbitration provision).

Since 2014, Shutterfly has updated its Terms of Use numerous times.  In 2015, Shutterfly added an arbitration provision to its Terms of Use.  Since then, all of Shutterfly’s Terms of Use have had an arbitration provision, stating, inter alia, “you and Shutterfly agree that any dispute, claim, or controversy arising out of or relating in any way to the Shutterfly service, these Terms of Use and this Arbitration Agreement, shall be determined by binding arbitration.”

The plaintiff sued in 2019, alleging that Shutterfly violated the Biometric Information Privacy Act (BIPA) by using facial recognition technology to “tag” individuals and by “selling, leasing, trading, or otherwise profiting from Plaintiffs’ and Class Members’ biometric identifiers and/or biometric information.”

In September 2019, after plaintiff’s suit was filed, Shutterfly sent an email notice to users informing them that Shutterfly’s Terms of Use had again been updated.  The email included numerous policies unrelated to arbitration, and then stated: “We also updated our Terms of Use to clarify your legal rights in the event of a dispute and how disputes will be resolved in arbitration.”  It further stated: “If you do not contact us to close your account by October 1, 2019, or otherwise continue to use our websites and/or mobile applications, you accept these updated terms.”

In the lawsuit, Shutterfly moved to compel arbitration pursuant to the agreement it had entered into with the plaintiff and later unilaterally amended, and the Court agreed with Shutterfly on the following grounds:

  • Illinois Courts allow parties to agree to authorize one party to modify a contract unilaterally, and have repeatedly recognized the enforceability of arbitration provisions added via a unilateral change-in-terms clause (notwithstanding the lack of a notice provision).
  • The Terms of Use plaintiff agreed to in 2014 included a change-in-terms provision (i.e., “YOUR CONTINUED USE OF ANY OF THE SITES AND APPS AFTER WE POST ANY CHANGES WILL CONSTITUTE YOUR ACCEPTANCE OF SUCH CHANGES …”)
  • After Shutterfly added an arbitration provision in 2015, plaintiff placed four orders for products (between 2015 and 2018).

The Court was also untroubled by Shutterfly’s post-Complaint email, on grounds that plaintiff had agreed to arbitrate her claims in 2015 – well before her lawsuit was filed.

Notably, this same case may have had a different outcome if it concerned a California privacy statute (or non-privacy statute), instead of BIPA.  One of plaintiff’s defenses – the McGill Rule – provides that plaintiffs cannot waive their right to public injunctive relief in any forum, including in arbitration.  McGill v. Citibank, 2 Cal. 5th 945, 215 Cal. Rptr. 627, 393 P.3d 85 (2017) (note: whether the McGill Rule is preempted by the Federal Arbitration Act is the subject of a currently pending petition for certiorari before the Supreme Court, which is fully briefed and scheduled for consideration at the Court’s May 28, 2020 conference, see Blair v. Rent-A-Center).  In response to this argument, Shutterfly argued that the McGill Rule only applies to claims arising under California’s consumer protection laws, and that the plaintiffs in the case were seeking a private injunction, not a public one.  The Court did not address the private vs. public injunction argument, but agreed with Shutterfly that because the plaintiff’s claim arose under BIPA, and not a California consumer protection law, the McGill Rule was inapplicable.

Last week NASA reported that it awarded contracts to three companies to build spacecraft capable of landing humans on the moon—Blue Origin (owned by Jeff Bezos); Dynetics (a Leidos subsidiary); and SpaceX (owned by Elon Musk).  The current plan is purportedly to fly astronauts aboard the Orion spacecraft, built by Lockheed Martin, to lunar orbit in 2024, where Orion would meet up and dock with the lander, which would take them to the moon’s surface.  But Congress has yet to sign off.

Meanwhile, SpaceX is also purportedly on target to take astronauts to the International Space Station next month, on May 27, 2020, on the Demo-2 test flight.  If it goes, this will be the first orbital human spaceflight to launch from American soil since NASA’s space shuttle fleet retired in 2011.  SpaceX did, however, fly an uncrewed mission, Demo-1, to the ISS last March.

Back here on Earth, we at Rothwell Figg are wondering – what about the data that is generated and processed in outer space?  My partner, Marty Zoltick, and I wrote a chapter on this in the International Comparative Legal Guide (ICLG) Data Protection 2019 publication, which you can read here.  Our conclusion was that the existing legal frameworks (including privacy laws and outer space treaties) do not sufficiently address which laws apply to personal data in airspace and outer space, and as such, a new international treaty or set of rules and/or regulatory standards is needed to fill this gap.

We are continuing to monitor this exciting issue as it develops.

 

There was a sense among many that websites whose data was being scraped may have lost a claim against data scrapers last year—specifically, violation of the Computer Fraud and Abuse Act (CFAA)—when the Northern District of California, and then the United States Court of Appeals for the Ninth Circuit, sided with data scraper hiQ in a case brought by LinkedIn in 2017.  However, that is now not so clear, as the Supreme Court has indicated an interest in possibly hearing the case.  [Notably, there are numerous other causes of action available to websites whose data is being scraped other than the CFAA, such as breach of contract, copyright infringement, common law misappropriation, unfair competition, trespass and conversion, DMCA anti-circumvention provisions, violation of Section 5 of the FTC Act, and violation of state UDAP laws.  Indeed, the availability of other claims – beyond the CFAA – was expressly acknowledged by the Ninth Circuit Court of Appeals in its decision, hiQ Labs, Inc. v. LinkedIn Corp., 938 F.3d 985 (9th Cir. 2019).]

In the case at issue, hiQ sought a preliminary injunction against LinkedIn, preventing LinkedIn from blocking hiQ’s bots, which gather data from LinkedIn’s publicly available information (and then analyze the information, determine which employees are at risk of being poached, and sell the findings to employers).  The district court granted hiQ’s motion and the Ninth Circuit affirmed on grounds that, inter alia, hiQ’s business could suffer irreparable harm if precluded from accessing LinkedIn’s information, and LinkedIn was unlikely to prevail on its CFAA claim because LinkedIn’s website is publicly accessible, i.e., no password is required, and thus, there was no “authorization” that was required or could be revoked.  The CFAA expressly requires access without authorization.  See 18 U.S.C. § 1030(a) (prohibiting access without authorization, or exceeding authorized access).

In March, LinkedIn filed a petition for certiorari in the Supreme Court, arguing, inter alia: “The decision below has extraordinary and adverse consequences for the privacy interests of the hundreds of millions of users of websites that make at least some user data publicly accessible.”  “The decision below casts aside the interests of LinkedIn members in controlling who has access to their data, the privacy of that data, and the desire to protect personal information from abuse by third parties, and it has done so in the service of hiQ’s narrow business interests.”  “The decision below wrongly requires websites to choose between allowing free riders to abuse their users’ data and slamming the door on the benefits to their users of the Internet’s public forum.”

hiQ did not respond. 

However, now the Supreme Court has expressly requested hiQ to respond, and has given it until May 26 to do so.  This has been seen by many as a signal of the Supreme Court’s potential interest in hearing the case. 

A Supreme Court decision in this area could be extremely helpful because, despite many seeing the Ninth Circuit’s decision as a possible “death” of the CFAA, other circuits, such as the First Circuit, have held that publicly available websites can rely on the CFAA to go after data scrapers, particularly where the website expressly bans data scraping.  See EF Cultural Travel BV v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003) (“If EF wants to ban scrapers, let it say so on the webpage or a link clearly marked as containing restrictions.”).  Thus, public website providers and the people who use them – as well as those who wish to scrape those sites – would benefit from the Supreme Court weighing in.

 

Schools, and technology companies whose programs and services are being used for educational purposes during the coronavirus pandemic, should be aware of a number of state student privacy laws.

For example, a number of states have student online personal information protection acts (SOPIPAs) which prohibit website, online/cloud service, and application vendors from sharing student data and using that data for targeted advertising on students for a non-educational purpose.  See, e.g., Arizona (SB 1314), Arkansas (HB 1961), California (SB 1177), Colorado (HB 1294), Connecticut (HB 5469), Delaware (see DE SB79 and SB 208), District of Columbia (B21-0578), Georgia (SB 89), Hawaii (SB 2607), Illinois (SB 1796), Iowa (HF 2354), Kansas (HB 2008), Maine (LD454/SP 183), Maryland (HB 298), Michigan (SB 510), Nebraska (LB 512), Nevada (SB 463), New Hampshire (HB 520), North Carolina (HB 632), Oregon (SB 187), Tennessee (HB 1931/SB 1900), Texas (HB 2087), Utah (HB 358), Virginia (HB 1612), and Washington (SB 5419/HB 1495).  Companies whose websites, online/cloud services, or applications are now – in view of the pandemic and remote learning situation – being used by K-12 students should make themselves aware of and compliant with these laws.

A number of states also have statutes regulating contracts between education institutions and third parties, including lists of required provisions.  See, e.g., California (AB 1584), Connecticut (Conn HB 5469 (Connecticut’s SOPIPA statute)), Colorado (HB 1423), Louisiana (HB 1076) and Utah (SB 207).  It is important that parties that rushed into remote learning situations, and relationships with third parties to make remote learning possible, review their contracts to ensure compliance with these statutes.

We discuss both types of statutes below.

SOPIPA Statutes

SOPIPA statutes, such as California SB 1177, apply to website, online/cloud service, and application vendors with actual knowledge that their site/service/application is used primarily for K-12 school purposes and was designed and marketed for K-12 school purposes.  The statutes (1) prohibit the website, online/cloud service, and application operators (hereinafter “Operators”) from sharing covered information; (2) require the Operators to protect covered information (i.e., secure storage and transmission); and (3) require the Operators to delete covered information upon the school district’s request.  “Covered information” is defined broadly in SOPIPA statutes, such as California SB 1177, to include any information or materials (1) provided by the student or the student’s guardian in the course of the student’s or guardian’s use of the site or application; (2) created or provided by an employee or agent of the educational institution; or (3) gathered by the site or application that is descriptive of a student or otherwise identifies a student.  Therefore, the scope of “covered information” under the SOPIPA statutes is much broader than the scope of protected information under FERPA.

It is unclear whether an Operator that existed before the coronavirus pandemic but was not used for K-12 school purposes, such as WhatsApp, yet is being used for K-12 school purposes now (in view of the pandemic), would need to comply with SOPIPA statutes.  An argument could be made that such Operators are not used “primarily” for K-12 school purposes, and that the website/service/application was not “designed and marketed” for K-12 school purposes.  But the meaning of terms like “primarily,” “designed,” and “marketed” is vague.  Further, to the extent such applications are being used “primarily” for education purposes now, and are being technologically tweaked and marketed for educational purposes now, the argument that SOPIPA does not apply gets weaker.  Thus, it is in companies’ best interest – if they know their website/service/application is being used by K-12 students in view of remote learning situations and the pandemic – to comply with SOPIPA statutes.

Statutes Regarding Contracts with Education Agencies/School Districts/Schools

Another set of state statutes that Operators whose products are suddenly being used for educational/remote learning purposes should be aware of are statutes governing contracts with education agencies, school districts, schools, etc.  California AB 1584 is an example of such a statute, which governs contracts that “local education agencies” or LEAs (defined as including “school districts, county offices of education, and charter schools”) enter into with third parties, including digital storage services and digital education software.

Under California AB 1584, an LEA that enters into a contract with a third party for purposes of providing digital storage/management of records (e.g., cloud-based services) or digital education software must ensure the contract contains, inter alia:

  1. a statement that pupil records continue to be the property of and under the control of the LEA;
  2. a prohibition against the third party using personally identifiable information in individual pupil records for commercial or advertising purposes;
  3. a prohibition against the third party using any information in the pupil record for any purpose other than for the requirements of the contract;
  4. a description of the procedures by which a parent/guardian/the pupil may review the pupil’s records and correct erroneous information;
  5. a description of the third party’s actions to ensure the security of pupil records;
  6. a description of the procedures for notification in the event of unauthorized disclosure;
  7. a certification that the pupil’s records shall not be retained or available to the third party upon completion of the terms of the contract;
  8. a description of how the LEA and third party will jointly ensure compliance with FERPA and COPPA; and
  9. a provision providing that a contract that fails to comply with the aforementioned requirements shall be voidable and all pupil records in the third party’s possession shall be returned to the LEA.

Under California AB 1584, “personally identifiable information” is defined as “information that may be used on its own or with other information to identify an individual pupil” and “pupil records” is defined as both (i) any information directly related to a pupil that is maintained by the LEA, and (ii) any information acquired directly from the pupil through the use of instructional software or applications assigned to the pupil by a teacher or other LEA employee.

Other states, including at least Connecticut (Conn HB 5469), Colorado (HB 1423), Louisiana (HB 1076) and Utah (SB 207) have similar laws regulating contracts with third party vendors and operators of websites and applications who utilize student information, records, and student-generated content.

In view of these statutes, schools/school districts, and companies that provide (i) storage services, (ii) records management services, and/or (iii) software that is now being used for educational purposes, should review their contracts to ensure that they contain the required provisions.  Such companies should also review their practices to ensure compliance.

 

Just last year the public was scrutinizing Big Tech for its collection and use of extraordinary amounts of data about people’s activities, from real-world location tracking to virtual lingering and clicks.  This scrutiny led to the landmark California Consumer Privacy Act, among other general privacy and data protection laws around the world. Will Big Tech now put that data to good use in the fight against COVID-19?

Google recently announced the launch of its publicly available COVID-19 Community Mobility Reports, which are based on Google Maps’ “aggregated, anonymized data showing how busy certain types of places are.”  Google explains that the “reports used aggregated, anonymized data to chart movement trends over time by geography, across different high-level categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential.”  These reports are very high level, showing percentage point increases or decreases in visits to areas of interest such as “grocery & pharmacy,” “parks,” and “transit stations,” among others.  In order to protect people’s privacy, Google states that it does “not share the absolute number of visits,” and “no personally identifiable information, like an individual’s location, contacts or movement, is made available at any point.”
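
As a rough illustration of that kind of reporting, the sketch below (with made-up numbers and category names) publishes only the percentage change in visits for each category against a baseline, so the report never exposes absolute visit counts or any individual’s data.

# Illustrative only: baseline and current visit counts are fabricated for this example.
baseline_visits = {"grocery & pharmacy": 12000, "parks": 8000, "transit stations": 15000}
current_visits  = {"grocery & pharmacy": 10200, "parks": 9600, "transit stations": 4500}

def mobility_report(baseline, current):
    """Return percent change per category, withholding the underlying counts."""
    return {category: round(100 * (current[category] - baseline[category]) / baseline[category])
            for category in baseline}

print(mobility_report(baseline_visits, current_visits))
# e.g., {'grocery & pharmacy': -15, 'parks': 20, 'transit stations': -70}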

Facebook has also been sharing location data in aggregated and anonymized form with academic and nonprofit researchers around the world, and Microsoft worked with the University of Washington to create data visualizations aiming to predict the virus’ peak in each state.

Thus far, Big Tech’s release of aggregated and anonymized data strikes a sensible, if not conservative, balance that favors individual privacy protections as well as Big Tech’s ownership interests in its datasets.  But can, and should, Big Tech go further in releasing more granular and personalized information as infections continue to climb globally?  Who gets to decide the balance between the need for data in combating the COVID-19 crisis and the private interests in the data – the companies themselves or government?  With ongoing concerns about putting location data in the hands of government (or having government collect that data itself), let’s hope for the time being that Big Tech will continue to take initiative in putting its data to good use.

A week ago, headlines in the major press reported that a number of countries, like China, Israel, Singapore, and South Korea, were using surveillance to track COVID-19 within their borders.  The surveillance efforts varied by country.  Techniques included everything from drones, cameras, smartphone location data, and apps (e.g., the “TraceTogether” application being used in Singapore) to tracking devices (e.g., wristbands linked to a smartphone app being used in Hong Kong) to ensure that people were not violating quarantine orders.

Meanwhile, there was a general feeling among many in the United States that such surveillance techniques would be “un-American” and would not fly in this country.

Now, a week later, the Government has announced that it is using location data from millions of cell phones in the United States to better understand the movements of Americans during the coronavirus pandemic.  The federal government (through the CDC) and state and local governments have started to receive data from the mobile advertising industry, with the goal of creating a portal comprising data from as many as 500 U.S. cities, for government officials at all levels to use to help plan the epidemic response.

Is this legal?

It depends.  It depends on what the data shows, if the data may legally be shared, and what it is being used for.  If the data is truly anonymized, may legally be shared, and it is being used solely to show trends and locations where people are gathering (without connecting individuals to those locations), then it could very well be legal under current U.S. privacy laws and the privacy laws of most states.  But there are several hiccups.

First, is it possible to truly anonymize the data?

A report published on March 25, 2013 in Scientific Reports, titled “Unique in the Crowd: The privacy bounds of human mobility” and authored by Yves-Alexandre de Montjoye, Cesar A. Hidalgo, Michel Verleysen, and Vincent D. Blondel (https://www.nature.com/articles/srep01376), while dated, is on point.  In this study, the researchers looked at fifteen months of human mobility data for one and a half million individuals and found that human mobility traces are so highly unique that, using outside information, one can link anonymized location data back to individuals (i.e., re-identification).

Another issue with anonymization is that, as technologies continue to improve (consider, for example, the development of quantum computers), what it takes to truly anonymize data gets more and more difficult.  Thus, data that is sufficiently anonymized today may be re-identifiable in ten years.

The limitations in the degree to which location data can be anonymized can be mitigated in other ways.  For example, privacy concerns can be greatly reduced (or eliminated?) if the location data is aggregated in such a manner that an individual’s data cannot reasonably be separated from the aggregated data.
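
One common way to do that, sketched below in Python with illustrative data and a made-up threshold, is a k-anonymity-style rule: a coarse location/time bucket is released only if at least k distinct individuals contributed to it, so that small groups (and individuals) are suppressed from the output.

from collections import defaultdict

K = 5  # minimum number of distinct people before a bucket may be released (illustrative)

# (person_id, coarse_location, hour) observations; real data would be far coarser and larger.
observations = [
    ("u1", "downtown", 9), ("u2", "downtown", 9), ("u3", "downtown", 9),
    ("u4", "downtown", 9), ("u5", "downtown", 9),
    ("u6", "suburb-park", 9), ("u7", "suburb-park", 9),
]

def aggregate_with_suppression(rows, k):
    """Count distinct people per (location, hour) bucket and drop buckets smaller than k."""
    buckets = defaultdict(set)
    for person, location, hour in rows:
        buckets[(location, hour)].add(person)
    return {bucket: len(people) for bucket, people in buckets.items() if len(people) >= k}

print(aggregate_with_suppression(observations, K))
# {('downtown', 9): 5} -- the 'suburb-park' bucket is withheld because only 2 people were seen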

Second, are there other legal requirements or restrictions in place regarding that data?  These requirements or restrictions could come from several sources, such as federal or state legislation, a company’s privacy policy, or contractual terms.  For example, a statute may require user consent (opt-in) to share location data.  A privacy policy or contract may guarantee that location data will never be shared unless certain safeguards are in place.  A user may have requested deletion of their personal information, and thus, the entity sharing the information should not even have the data (let alone be sharing it).

Third, there is the question of what the data is being used for.  In a number of countries, surveillance and location data is being used to “police” specific individuals to determine if they are violating quarantine orders.  So far the United States appears to be using the data for a more general purpose—i.e., to assess trends and whether there are gatherings of people at specific locations.  The implication, at least so far, is that nobody is going to go after the individuals who are gathering.  Instead, the data is being aggregated and used merely to help inform orders and for health-planning purposes.

But the question on many people’s minds is not what the data is being used for now, but rather, what the data will be used for down the road.  For example, currently the government does not have access to location data maintained by third parties, like cell providers, ad tech companies, and social media operators.  And in order for the government to obtain that data, it needs a warrant.  See Carpenter v. U.S., 138 S. Ct. 2206 (2018) (holding that the Fourth Amendment of the U.S. Constitution protects privacy interests and expectations of privacy in one’s time-stamped cell-site location information (CSLI), notwithstanding that the information has been shared with a third party (i.e., one’s cellular provider), and thus, for the government to acquire such information, it must obtain a search warrant supported by probable cause).  Is this going to change once the coronavirus pandemic is over, at least with respect to the location data to which the government has already been provided access?  What requires the government to delete the information later?  Or to not use the data for other purposes?  Presumably there are contracts in place between the government and the third party companies that are sharing their location data – where are these contracts and who has a right to see them?  Are we all third party beneficiaries of those contracts, in that we all stand to benefit from the coronavirus response efforts that result?  And if so, to the extent those contracts limit the government’s ability to use the shared data for other purposes, should individuals have a right to later enforce those limitations (as third party beneficiaries)?

 

 

By now, most of us have participated in at least one videoconference from the comfort of our homes, be it for a work meeting, a fitness class, or a virtual happy hour with friends across the country. Easing the transition from business-as-usual to social distancing and sheltering-in-place, these video communications platforms and apps have no doubt helped us stay connected and productive as we settle into the new normal of staying indoors indefinitely. But just as more and more people are turning to videoconferencing, more hackers and cybercriminals are exploiting the surge in teleworking, and the privacy practices of videoconferencing platforms are quickly coming under scrutiny, with pressure for increased transparency and data security.

Zoom, one of the most popular and prosperous video platforms, has seen an exponential increase of global active users since the start of the year. The number of users continued to soar after Zoom CEO Eric Yuan announced in early March that he was removing the time limit from video chats in regions affected by the outbreak and was offering free services to K-12 schools around the world. Yet at the height of its popularity, Zoom has become one of the most targeted apps for cyberattacks and cybercrime (including dozens of new fake Zoom-themed domain registrations and phishing websites, intended to lure users into providing credit card details and other sensitive data and/or infiltrate malware), ultimately illuminating holes in the platform’s data protection and privacy policies and inviting a firestorm of criticism and challenges.

A rising phenomenon referred to as “zoom-bombing,” where hackers hijack Zoom meetings and use the screen-sharing feature to disseminate disruptive and often obscene or inappropriate material to the meeting attendees, has been particularly concerning to the community. There have been a number of reported Zoom hacks of virtual conferences held by schools, churches, and political groups. Particularly disheartening are the virtual classrooms that have been interrupted with pornographic images and racial slurs, and the religious services attacked with uploads of anti-Semitic propaganda. Many school districts have prohibited educators from using Zoom for distance learning, citing concerns about child data privacy (for a full discussion of how FERPA applies to videoconferencing, stay tuned for our next post). The FBI is making efforts to curtail zoom-bombing, and advises zoom-bombing victims to report such incidents via the FBI’s Internet Crime Complaint Center.

Zoom has also been forced to make changes to its privacy policy and sign-in configuration after it was discovered that the app was sending some analytics data to Facebook (e.g., when the user opened the app, their time zone, city, and device details). Privacy activists noticed that there was nothing in the Zoom privacy policy that addressed this transfer of data. With regard to data security, consumers are also questioning whether Zoom actually implements end-to-end (“E2E”) encryption, as it claims, potentially teeing up a claim for unfair or deceptive trade practices before the FTC. With E2E encryption, the video and audio content can be decrypted only by the meeting participants, such that the service provider does not have the technical ability to listen in on your private meetings and use the content for ad targeting.
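
The concept can be illustrated with a short Python sketch using the widely available cryptography package. This is a conceptual illustration of end-to-end encryption with a shared symmetric key, not Zoom’s actual protocol, and the variable names are ours: the participants hold the key, so a relay server that only sees ciphertext cannot read the content.

from cryptography.fernet import Fernet  # pip install cryptography

# The meeting key is established between the participants; the service provider never holds it.
meeting_key = Fernet.generate_key()
participant_a = Fernet(meeting_key)
participant_b = Fernet(meeting_key)

ciphertext = participant_a.encrypt(b"audio/video frame bytes")

# The relay server can store and forward the ciphertext, but without meeting_key it cannot
# decrypt it, listen in, or mine the content for ad targeting.
relayed_by_server = ciphertext

print(participant_b.decrypt(relayed_by_server))  # b'audio/video frame bytes'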

Due to the lack of clarity about exactly what data Zoom collects from its users and what it does with that data, on March 18th, human rights group Access Now published an open letter calling on Zoom to release a transparency report to help users better understand the company’s data protection efforts. On March 30th, New York Attorney General Letitia James sent a similar letter to Zoom, requesting information about its security measures in light of user concerns about data privacy and zoom-bombing. While Zoom stated that it would readily comply with the AG’s request, this will not be the last fire to put out. Just yesterday, a class action (case no. 5:20-cv-02155) was filed against Zoom in the Northern District of California, citing violations of California’s Unfair Competition Law, Consumers Legal Remedies Act, and the CCPA by using inadequate security measures, permitting the unauthorized disclosure of personal data to third parties like Facebook, and failing to provide adequate notice before collecting and using personal data.

While the repercussions of Zoom’s privacy and data security transgressions remain to be seen, users of the videoconferencing platform can take actions to minimize the risks of zoom-bombing and data breaches by disabling certain features of the conference and abiding by the company’s best practices for securing a virtual classroom.

Right now, the world wrestles with a colossal viral outbreak. In response to the crisis, hundreds of millions of people are staying home to reduce their personal risks and to flatten the curve for society overall.

From this mass sheltering, businesses face inverted demand curves so steep and transformative that they are confronting a similar scenario: close their doors and stay home. However, businesses cannot isolate without consequences, and the consequences can be devastating.

When a business chooses to close its doors, its obligations remain. A crisis-closed restaurant retains its immediate and upcoming obligations to suppliers and employees. Businesses throughout all sectors will face challenges similar to those posed to this hypothetical restaurant: (1) pending and accruing bills from suppliers and contractors; (2) pending and accruing employee costs; (3) pending and future real estate costs; as well as (4) ongoing credit costs.

Because no further income is being generated for the near future, someone from that cost matrix will likely not get paid. For many businesses, these types of catastrophic events have an intuitive fix: This is why we invest in insurance.

However, a different type of virality — large-scale cyberinfections — reveals why this type of business hedge is rife with litigation risk. Sophisticated cyberactors and nation states exploit cybervulnerabilities to steal money, corrupt information and otherwise covertly disrupt business services. In many instances, the risks include the shutdown of entire industries. All of this disruption is, in a word, expensive.

Cyberinsurance could provide one avenue toward the reduction of these costs, just as existing insurance coverage would hopefully cover the current COVID-19 crisis. However, recent history suggests that rather than result in insurance payouts, gigantic cyberinfections lead to equally enormous litigation.

NotPetya and Digital Attacks by Nation States

Businesses hoping to understand their COVID-19 litigation risks can learn from recent complicated privacy and data litigation. Often, this litigation, as with COVID-19, involves massive disruptions to industries. Indeed, certain malware attacks have halted entire industries and crippled supply chains, which causes problems that should be familiar to all COVID-19-affected businesses.

Insurance policies typically exclude coverage for extraordinary events, including but not limited to invasion, revolution and acts of terrorism. Theoretically, a state-sponsored hack could be considered either an attack, consistent with the much-maligned Gerasimov doctrine,[1] or a criminal act. Given the amount of money involved, it appears inevitable that insurance carriers will invoke the extraordinary event exclusions. Indeed, they already have.

NotPetya was a ransomware attack that, beginning in 2017, caused more than $10 billion in global damages. In February 2018, the U.S. and other Western nations issued coordinated statements publicly attributing the NotPetya malware to the Russian government. Nontraditional warfare targets from many industries throughout the world suffered enormous NotPetya-related losses.

The NotPetya cybervirus victims were diverse and often were not traditional warfare targets. For example, Mondelez International Inc., a global snack company, claimed that it suffered more than $100 million in damages to its computers and disrupted supply chains.[2] Mondelez had purchased cyberinsurance. This foresight appears to have provided little comfort, however.

In Mondelez International Inc. v. Zurich American Insurance Co., the plaintiff asked an Illinois state court to determine whether the hostile or warlike action exception in its Zurich cyberinsurance policy affected its claim for NotPetya-related losses.[3] Apparently, Zurich was reluctant to pay because experts attributed NotPetya to the Russian government. Thus, the very sophistication of the cybercriminal — a nation state hacker — actually counseled against invocation of the insurance coverage.

How Outbreaks of War and Viruses Similarly Reference Principles of Impracticability and Fairness

Regardless of whether it ultimately succeeds, Zurich’s theory, which combines the ancient principles of war clauses with bleeding-edge technology and turbulent international politics, has much to teach us. The COVID-19 crisis provides a similarly potent blend of complex disciplines.

Most contracts contain a force majeure provision or somehow internalize the concept of impracticability.[4] These principles incorporate centuries of business practices and hundreds of cases but all orbit around the concept that some occurrences are so big and so unlikely that it would be unfair to enforce a contract.

Thus, while the war interpretation is unlikely to appear in post-COVID-19 litigation, the core struggle of emergent impracticability remains the same. In either case, in the short term, those disputes will be resolved by pitched litigation.

The litigation will be intense specifically because the stakes will be so high. Outbreaks of war and viruses both involve complete shutdowns of industries. Thus, the costs of these crises are astronomical. They are also notable because they involve responses to very quickly developing crises; indeed, the growth curve of both threats can rightly be described as viral.

Also, their core mechanisms are eerily similar: Both involve the unwanted injection of code (either the genetic payload of an infectious agent or the malignant delivery of computer code) into a healthy system (either a living cell or an otherwise functional computer system). It is therefore unsurprising that the two threats bear so many litigation similarities.

Courts Could First Gravitate to the Simplest Interpretation

The problems caused by these events are too expensive and complex to submit to easy fixes. The courts will likely face these issues before contractual, regulatory or legislative fixes can be addressed. The cyberthreat landscape is much broader and deeper than NotPetya, which cost billions of dollars by itself.

When it comes to COVID-19, the losses appear to be in the trillions of dollars. Accordingly, courts will be faced with high-stakes disputes and little in the way of legislative or regulatory guidance. Still, lessons can be learned from the high-stakes cybercases.

Typically, these data security and privacy disputes present courts with misleadingly straightforward questions:

  1. Was a cyberattack state-sponsored, i.e., directed by a nation state?
  2. Was COVID-19 an unavoidable force majeure event?

These questions superficially appear to present binary choices, which is to say they are simple yes or no propositions. History teaches that this is a trap.

Based upon a surface-level analysis, some case law suggests that cyberwar must be military in character. NotPetya escaped into the cyberwilderness and wreaked massive damages of dubious military value. On the opposite side, but with the opposite outcome, a court may look at NotPetya and determine that the action is definitely an act of war because it constituted an act of aggression by a sovereign state.

Courts interpreting the impracticability of contracts due to COVID-19 shutdowns will also be faced with simple binary choices. All superficial approaches, however, could result in bad outcomes.

The Integrative Path Forward in Interpreting Viral Impracticability in Contracts

In the absence of any specifically negotiated definitions for impracticability, these viral cyberdisputes involve three inquiries: (1) the factual details of the mechanics of the event; (2) the evidentiary reliability of the event’s attribution to governmental actors;[5] and (3) how the details of that attribution affect the impracticability of the contract, if at all.

This nonexclusive list, which was derived from data security and privacy litigation such as that surrounding NotPetya, provides a framework for critically analyzing risk in the upcoming COVID-19 disputes.

The first question flows from the pragmatic concern that the lines between disaster, war and misfortune are frustratingly (and often intentionally) blurry. In the 2014 Yahoo! Inc. breach, criminal hackers were working at the behest of Russian intelligence to perform intelligence gathering while also generating criminal profits.

Many of the Chinese hacks, such as the steel and aerospace industry hacking campaigns, were undertaken by Chinese military and intelligence officers to fraudulently aid Chinese companies in the western markets. Although digital, these attacks were not purely criminal or warfare.

Similarly, the rollout of COVID-19 shutdowns was not centrally coordinated via the federal government but rather represented the accretion of hundreds of state, local, business and personal crisis decisions. To properly navigate these facts, businesses will need to prepare to marshal the broadest-based authorities possible to paint a complex constellation of events as a straight line.

The second question involves the provenance of the attribution. Courts have struggled to differentiate between consensus attribution, based upon verifiable facts, and mere groupthink. The quality of this attribution necessarily varies in each event and depends upon factors as wide ranging as the quality of the science, political realities and business needs.

Many cases required full-fledged evidentiary hearings. Other cases solely involved judicial notice of significant relevant facts. Any evidentiary option will involve high-level litigation skills to communicate the finer technical details against a broader sociopolitical backdrop.

The third question, impracticability, underlines how the pragmatic question is never as narrow as the nature of warfare or pandemic. Uniform Commercial Code Section 2-615 excuses commercial performance where performance as agreed has been made impracticable by the occurrence of a contingency the nonoccurrence of which was a basic assumption on which the contract was made.

In past cybercases, the courts have had to wrestle with core issues about the expectations of individual contracts. What are the expectations of a restaurant supply contract? Or an employment contract? Or a long-term lease? No matter the specific contractual context, answering this third question has required a highly fact-intensive inquiry that builds upon the answers to the first two questions.

The correct answer to all three questions involves digging into the details about what happened and why. The best manner in which to persuasively present these facts involves an integrative approach to law and science.

Next Steps

Viral litigation in light of cyber or COVID-19 events requires a broad base of litigation skills. The shape of these presentations, whether cyber or purely COVID-19, will be eerily similar.

To be truly persuasive, companies should prepare to present a deep, holistic set of facts surrounding: the external history of their closure; the internal audit trail of their corporate decision making; technical descriptions of how the complex event unfolded against the backdrop of their decisions; and dynamic, but unadorned, courtroom presentations.

This similarity should prove comforting to businesses; these are threats and issues that have been met and addressed by businesses in the past. Learning the lessons of those past viral threats can help a business stay ahead of the next threats looming on the horizon. If you can prepare, you can internalize the risks and prepare to fight smartly.

[1] https://www.nytimes.com/2019/03/02/world/europe/russia-hybrid-war-gerasimov.html

[2] https://www.nytimes.com/2019/04/15/technology/cyberinsurance-notpetya-attack.html

[3] https://www.scribd.com/document/397265756/Mondelez-Zurich

[4] See generally U.C.C. Section 2-615.

[5] As discussed in this article, “attribution” for COVID-19 does not mean that a government caused the virus itself but rather whether a government caused the associated shutdown. As the shutdown orders unfold in real time from the various state and local authorities as this article is written, it seems that attribution of that type will not prove simple.

 

This article was originally published in Law360’s Expert Analysis section on March 30, 2020. To read the article on Law360’s site, please visit: https://www.law360.com/articles/1257624.