Are targeted ads the result of wiretapping?  Companies track your browsing history all the time through the use of, inter alia, cookies, and then mine the data they receive for purposes like targeted advertising.  Because the cookies make users’ computers send electronic communications without the users’ knowledge, is this wiretapping?

Put differently, can a defendant “wiretap” a communication that it receives directly from a plaintiff?  This is the question that Facebook is asking the United States Court of Appeals for the Ninth Circuit to consider in its Petition for Panel Rehearing and for Rehearing En Banc in In re Facebook Internet Tracking Litigation, No. 17-17486.

The wiretap laws have an interesting history that begins with listening in on private calls facilitated by the telephone companies but has long also embraced data transmissions across the internet. In 1986, Congress enacted the Electronic Communications Privacy Act (ECPA) to extend restrictions on government wiretaps of telephone calls to transmissions of electronic data by computer (18 U.S.C. § 2510 et seq.) and to add new provisions prohibiting access to stored electronic communications, i.e., the Stored Communications Act (SCA, 18 U.S.C. § 2701 et seq.). The ECPA has been amended by the Communications Assistance for Law Enforcement Act (CALEA) of 1994, the USA PATRIOT Act (2001), the USA PATRIOT reauthorization acts (2006), and the FISA Amendments Act (2008). Through all of these amendments, Title I of the ECPA has continued to protect wire, oral, and electronic communications while in transit, requiring search warrants that are more stringent than those in other settings. Commentators have long wondered whether browsing and similar communications, which are routinely “listened to” by ad technology, constitute a protected communication.

While both the ECPA (commonly referred to as the federal Wiretap Act) and the California Invasion of Privacy Act (CIPA) impose civil and criminal penalties on a person who “intercepts” an “electronic communication,” both statutes contain an exemption from liability for a person who is a “party” to the communication.  Thus, the question raised by Facebook’s petition is whether a company that installs code on users’ computers, such that the users’ computers automatically send information back to the company regarding the users’ browsing history (to be used for, inter alia, targeted advertising), constitutes a “party” that falls within the wiretap laws’ exception.

Some of the relevant facts that plaintiffs in the In re Facebook Internet Tracking Litigation allege are as follows:

  • During a 16-month period, when plaintiffs visited third-party websites that contained Facebook “plug-ins” (such as its “Like” button), the plug-in code would direct plaintiffs’ browsers to send a copy of the URL of the visited page (known as a “referrer header”) to Facebook (illustrated in the sketch following this list).
  • Facebook used “cookies” to compile these referrer headers into personal profiles, and then used that data to improve targeting for advertisements.
  • Facebook never promised not to collect this data – but its disclosures suggested that it would not receive referrer headers from logged-out users.
  • Facebook tracked logged-out users’ browsing activities and sold that information to advertisers without the users’ knowledge.
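
To make the referrer-header mechanics concrete, below is a minimal sketch in Python.  The URLs, cookie value, and use of the standard-library urllib module are purely hypothetical illustrations, not anything drawn from the litigation record.  The point is simply that a browser’s request for an embedded third-party plug-in can carry both a Referer header identifying the visited page and any cookie previously set for the plug-in provider’s domain.

```python
# Hypothetical illustration: what a browser effectively sends when a page
# embeds a third-party plug-in. URLs and the cookie value are made up.
from urllib.request import Request

visited_page = "https://example-news-site.com/some-article"        # page the user is reading
plugin_resource = "https://social-plugin.example/plugins/like.js"  # embedded third-party script

# Build (but do not send) the request a browser would make for the plug-in.
req = Request(plugin_resource, headers={
    "Referer": visited_page,       # discloses the page being visited
    "Cookie": "user_id=abc123",    # hypothetical identifying cookie set earlier
})

# The plug-in provider receives both values and can log the pairing,
# linking the identified user to the page they visited.
print(req.full_url)
print(req.get_header("Referer"))
print(req.get_header("Cookie"))
```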

The Northern District of California dismissed plaintiffs’ wiretapping claims pursuant to the “party” exception of the federal Wiretap Act and CIPA because Facebook received the data at issue directly from plaintiffs (more precisely, from plaintiffs’ computers, via the Facebook “plug-ins”).

The Ninth Circuit, relying on decisions from the First and Seventh Circuits (that “implicitly assumed” the “party” exception is inapplicable when the sender is unaware of the transmission), vacated the district court’s dismissal of the wiretapping claims, holding that “entities that surreptitiously duplicate transmissions between two parties are not parties to communications” under the wiretapping statutes.

Facebook now argues that the Ninth Circuit should grant rehearing because the panel’s April decision conflicts with precedent and purportedly “fundamentally changes the definition of ‘wiretapping’ under the Federal Wiretap Act and the California Invasion of Privacy Act (CIPA),” both of which have not just civil but also criminal penalties.  Notably, the Ninth Circuit’s decision conflicts not just with precedent of other circuits (i.e., the Second, Third, Fifth, and Sixth Circuits), but also with a prior ruling of the Ninth Circuit.

The prior Ninth Circuit ruling on this issue was Konop v. Hawaiian Airlines, 302 F.3d 868 (9th Cir. 2002).  In that case, the Court focused on the “interception” element instead of the “party” exception.  The Court held that a defendant “intercept[s]” a communication under the Wiretap Act only if it “stop[s], seize[s], or interrupt[s]” a communication “in progress or course before arrival” at its destination, and that obtaining a communication directly from a sender – even if the sender did not have knowledge of the communication – is not an interception.

Among the other circuit cases with which the Ninth Circuit’s Facebook decision conflicts is In re Google Cookie Placement, 806 F.3d 125 (3d Cir. 2015), a Third Circuit case based on similar facts.  In In re Google Cookie Placement, the plaintiffs alleged that Google violated the Wiretap Act and CIPA by acquiring referrer headers “that the plaintiffs sent directly to the defendants.”  There, the court held that a direct “recipient of a communication is necessarily one of its parties,” and that when it comes to the wiretapping statutes, whether the communication was obtained by deceiving the sender is irrelevant.  The Third Circuit relied on opinions of the Second, Fifth, and Sixth Circuits, and concluded, based on the text and history of the federal Wiretap Act, that the applicability of the “party” exception does not turn on the sender’s knowledge or intent.

Any company that tracks browsing histories should be paying close attention to this issue.  This case should also serve as a reminder for companies to revisit their terms and policies to ensure they accurately describe web tracking and data collection activities.

Partners Martin Zoltick, Jenny Colgate, and Christopher Ott will present a webinar with the Northern Virginia Technology Council (NVTC) on “Employee and Customer Health Data: Back to Work in COVID-19” on Friday, June 5, 2020, at 10 a.m. EDT. The virtual event is open to all interested in attending.

As organizations open their doors again, it’s important that they do so with certain safety measures in place to minimize COVID-19 risks for both employees and customers. With our unique perspective on privacy risks and complex, high-technology litigation, we will explore some safety measures being considered by employers with an eye toward privacy, data protection, and cybersecurity. For example, your workplace may be considering temperature checks to ensure those entering the workplace are fever-free; video monitoring and surveillance to ensure social distancing rules are followed; or questionnaires to evaluate people’s health conditions and symptoms, contacts, and travel histories.

In the first of NVTC’s “Getting Back to Work” virtual roundtable series, attorneys from Rothwell Figg will discuss best practices for obtaining, storing, sharing, and disposing of this data, as well as how organizations can manage workplace privacy and security through the adoption of reasonable and effective practices while at the same time taking measures to protect employees and customers from the transmission of COVID-19.

The event is open to all. To register, please visit the page below and create an account or log in to your existing NVTC account.

https://www.nvtc.org/NVTC/Events/Event_Display.aspx?EventKey=GBTW01

Alabama, North Dakota, and South Carolina are the first states to announce that they will use Apple-Google’s exposure notification technology for their state COVID-19 tracking apps.  Several countries in Europe have already agreed to use the technology.

The Apple-Google technology uses Bluetooth to aid in COVID-19 exposure notification.  A user’s device with the technology enabled will send out a Bluetooth beacon that includes a random Bluetooth identifier associated with the user of the device.  The identifier is changed every 10-20 minutes.  When one user’s device is within range of another user’s device, each device will receive the other device’s beacon and store that received beacon on the device.  If a person tests positive for COVID-19, that person may upload his or her diagnosis using a state-run app, and with his or her consent, the relevant device beacon(s) will be added to a positive diagnosis list.  At least once per day, each device will check the list of beacons it has recorded against the list of positive diagnosis beacons on the positive diagnosis list.  If there is a match, the user may be notified that they have come into contact with an individual who tested positive for COVID-19, and the system will share the day the contact occurred, how long it lasted, and the Bluetooth signal strength (a proxy for proximity) of that contact.  More details on the technology can be found here.
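
For readers who want a rough picture of the matching step described above, the following Python sketch illustrates it under simplifying assumptions.  It is not Apple’s or Google’s implementation: the real system derives rolling identifiers cryptographically from daily keys and performs matching inside the operating system, while this sketch simply uses random byte strings and an in-memory positive diagnosis list.

```python
# Simplified sketch of the daily matching step. Real identifiers are derived
# cryptographically from daily keys by the OS; random bytes stand in here.
import os
from datetime import date

def new_rolling_identifier() -> bytes:
    """A fresh random Bluetooth identifier (rotated every 10-20 minutes)."""
    return os.urandom(16)

# Beacons observed from nearby devices, stored locally with metadata.
observed_beacons = [
    {"id": new_rolling_identifier(), "day": date(2020, 5, 20), "duration_min": 12, "rssi": -60},
    {"id": new_rolling_identifier(), "day": date(2020, 5, 21), "duration_min": 3, "rssi": -85},
]

# Identifiers uploaded (with consent) by users who tested positive, as published
# via a public health authority app. Here we pretend the first observed beacon
# belongs to someone who later tested positive.
positive_diagnosis_ids = {observed_beacons[0]["id"]}

def check_exposures(observed, positive_ids):
    """Run at least once per day: compare stored beacons to the positive list."""
    for beacon in observed:
        if beacon["id"] in positive_ids:
            # Only day, duration, and signal strength (a proxy for proximity)
            # are surfaced to the user; the identifier itself is not.
            yield beacon["day"], beacon["duration_min"], beacon["rssi"]

for day, minutes, rssi in check_exposures(observed_beacons, positive_diagnosis_ids):
    print(f"Possible exposure on {day}: {minutes} min contact, signal {rssi} dBm")
```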

On May 20, 2020, Apple and Google announced the launch of their exposure notification technology to public health agencies around the world, including a new software update that enables the Bluetooth technology.  Not surprisingly, some public health authorities are saying that the technology is too restrictive because the decentralized processing of data on devices prevents aggregate analysis of infection hot spots and rates, and the technology excludes location data.  On the flip side, privacy advocates have raised privacy and civil liberty concerns about contact tracing and government surveillance generally in the wake of the pandemic.  In the Frequently Asked Questions about the exposure notification technology, Apple and Google pledged that there will be no monetization of this project by Apple or Google, and that the system will only be used for contact tracing by public health authorities’ apps.

Have Apple and Google struck an appropriate balance between efficacy and civil liberties?  Stay tuned.

ADT LLC, a security company that offers customers, inter alia, video monitoring of their homes, has been sued in Florida federal court after an employee accessed and viewed footage, over the course of several years, from the in-home security cameras of 220 customers.  The rogue employee was a technician for the defendant and was in charge of installing security systems in customers’ homes in the Dallas-Fort Worth metro area.  He apparently added his own personal email address to customers’ accounts, which allowed him to access the accounts through the ADT application and internet portal.  The lawsuits against ADT claim breach of contract, negligence, intrusion upon seclusion, negligent hiring, and intentional infliction of emotional distress.

While this case concerns actions that took place over a period of several years and is not directly related to the COVID-19 pandemic, it should have executives at companies scrutinizing their own policies and procedures with regard to customer information, including image and video data.  Especially with so many employees working from home and accessing sensitive information within the confines of their own homes, it is more important now than ever before to ensure that adequate safeguards are in place to protect customer information and to protect a company from potential liability.  For example, make sure that only employees who need access to sensitive information have access; review programs and procedures for loopholes; and review corporate policies governing employee behavior.

A plaintiff recently lost her battle with Shutterfly in the Northern District of Illinois when the Court ruled that Shutterfly’s arbitration clause was binding, notwithstanding Shutterfly’s unilateral amendments to its Terms of Use, including adding an arbitration provision after plaintiff clicked “Accept.”  The case is now stayed pending the outcome of arbitration.

The plaintiff was a Shutterfly user who had clicked “Accept” upon registering in 2014, thereby agreeing to Shutterfly’s then-existent Terms of Use (which did not include an arbitration provision).

Since 2014, Shutterfly has updated its Terms of Use numerous times.  In 2015, Shutterfly added an arbitration provision to its Terms of Use.  Since then, all of Shutterfly’s Terms of Use have had an arbitration provision, stating, inter alia, “you and Shutterfly agree that any dispute, claim, or controversy arising out of or relating in any way to the Shutterfly service, these Terms of Use and this Arbitration Agreement, shall be determined by binding arbitration.”

The plaintiff brought suit in 2019, alleging that Shutterfly violated the Biometric Information Privacy Act (BIPA) by using facial recognition technology to “tag” individuals and by “selling, leasing, trading, or otherwise profiting from Plaintiffs’ and Class Members’ biometric identifiers and/or biometric information.”

In September 2019, after plaintiff’s suit was filed, Shutterfly sent an email notice to users informing them that Shutterfly’s Terms of Use had again been updated.  The email included numerous policies unrelated to arbitration, and then stated: “We also updated our Terms of Use to clarify your legal rights in the event of a dispute and how disputes will be resolved in arbitration.”  It further stated: “If you do not contact us to close your account by October 1, 2019, or otherwise continue to use our websites and/or mobile applications, you accept these updated terms.”

In the lawsuit, Shutterfly moved to compel arbitration pursuant to the agreement it had entered into with the plaintiff and later unilaterally amended, and the Court agreed with Shutterfly on the following grounds:

  • Illinois Courts allow parties to agree to authorize one party to modify a contract unilaterally, and have repeatedly recognized the enforceability of arbitration provisions added via a unilateral change-in-terms clause (notwithstanding the lack of a notice provision).
  • The Terms of Use plaintiff agreed to in 2014 included a change-in-terms provision (i.e., “YOUR CONTINUED USE OF ANY OF THE SITES AND APPS AFTER WE POST ANY CHANGES WILL CONSTITUTE YOUR ACCEPTANCE OF SUCH CHANGES …”)
  • After Shutterfly added an arbitration provision in 2015, plaintiff placed four orders for products (between 2015 and 2018).

The Court was further unbothered by Shutterfly’s post-Complaint email, on the grounds that plaintiff had agreed to arbitrate her claims in 2015 – well before her lawsuit was filed.

Notably, this same case may have had a different outcome if it had concerned a California privacy statute (or non-privacy statute) instead of BIPA.  One of plaintiff’s defenses – the McGill Rule – provides that plaintiffs cannot waive their right to public injunctive relief in any forum, including in arbitration.  McGill v. Citibank, 2 Cal. 5th 945, 216 Cal. Rptr. 3d 627, 393 P.3d 85 (2017) (note: whether the McGill Rule is preempted by the Federal Arbitration Act is the subject of a currently pending petition for certiorari before the Supreme Court, which is fully briefed and scheduled for consideration at the Court’s May 28, 2020 conference; see Blair v. Rent-A-Center).  In response to this argument, Shutterfly argued that the McGill Rule only applies to claims arising under California’s consumer protection laws, and that the plaintiffs in the case were seeking a private injunction, not a public one.  The Court did not address the private vs. public injunction argument, but agreed with Shutterfly that because the plaintiff’s claim arose under BIPA, and not a California consumer protection law, the McGill Rule was inapplicable.

Last week NASA reported that it awarded contracts to three companies to build spacecraft capable of landing humans on the moon—Blue Origin (owned by Jeff Bezos); Dynetics (a Leidos subsidiary); and SpaceX (owned by Elon Musk).  The current plan is purportedly to fly astronauts, in 2024, aboard the Orion spacecraft, built by Lockheed Martin, to lunar orbit, where it would meet up and dock with the lander, which would take them to the moon’s surface.  But Congress has yet to sign off.

Meanwhile, SpaceX is also purportedly on target to take astronauts to the International Space Station next month, on May 27, 2020, on the Demo-2 test flight.  If it goes, this will be the first orbital human spaceflight to launch from American soil since NASA’s space shuttle fleet retired in 2011.  SpaceX did, however, fly an uncrewed mission, Demo-1, to the ISS last March.

Back here on Earth, we at Rothwell Figg are wondering – what about the data that is generated and processed in outer space?  My partner, Marty Zoltick, and I wrote a chapter on this in the International Comparative Legal Guide (ICLG) Data Protection 2019 publication, which you can read here.  Our conclusion was that the existing legal frameworks (including privacy laws and outer space treaties) do not sufficiently address which laws apply to personal data in airspace and outer space, and as such, a new international treaty or set of rules and/or regulatory standards is needed to fill this gap.

We are continuing to assess this exciting issue which continues to develop.

 

There was a sense among many that websites whose data was being scraped may have lost a claim against data scrapers last year—specifically, violation of the Computer Fraud and Abuse Act (CFAA)—when the Northern District of California, and then the United States Court of Appeals for the Ninth Circuit, sided with the data scraper, hiQ, in litigation with LinkedIn that began in 2017.  However, that is now not so clear, as the Supreme Court has indicated an interest in possibly hearing the case.  [Notably, there are numerous causes of action available to websites whose data is being scraped other than the CFAA, such as breach of contract, copyright infringement, common law misappropriation, unfair competition, trespass and conversion, the DMCA anti-circumvention provisions, violation of Section 5 of the FTC Act, and violation of state UDAP laws.  Indeed, the availability of other claims – beyond the CFAA – was expressly acknowledged by the Ninth Circuit in its decision, hiQ Labs, Inc. v. LinkedIn Corp., 938 F.3d 985 (9th Cir. 2019).]

In the case at issue, hiQ sought a preliminary injunction against LinkedIn, preventing LinkedIn from blocking hiQ’s bots, which gather data from LinkedIn’s publicly available information (and then analyze the information, determine which employees are at risk of being poached, and sell the findings to employers).  The district court granted hiQ’s motion, and the Ninth Circuit affirmed on grounds that, inter alia, hiQ’s business could suffer irreparable harm if precluded from accessing LinkedIn’s information, and LinkedIn was unlikely to prevail on its CFAA claim because LinkedIn’s website is publicly accessible, i.e., no password is required, and thus there was no “authorization” that was required or could be revoked.  The CFAA expressly requires access without authorization.  See 18 U.S.C. § 1030(a) (prohibiting access without authorization, or the exceeding of authorized access).

In March, LinkedIn filed a petition for certiorari in the Supreme Court, arguing, inter alia: “The decision below has extraordinary and adverse consequences for the privacy interests of the hundreds of millions of users of websites that make at least some user data publicly accessible.”  “The decision below casts aside the interests of LinkedIn members in controlling who has access to their data, the privacy of that data, and the desire to protect personal information from abuse by third parties, and it has done so in the service of hiQ’s narrow business interests.”  “The decision below wrongly requires websites to choose between allowing free riders to abuse their users’ data and slamming the door on the benefits to their users of the Internet’s public forum.”

hiQ did not respond. 

However, now the Supreme Court has expressly requested hiQ to respond, and has given it until May 26 to do so.  This has been seen by many as a signal of the Supreme Court’s potential interest in hearing the case. 

A Supreme Court decision in this area could be extremely helpful because, despite many seeing the Ninth Circuit’s decision as a possible “death” of the CFAA, other circuits, such as the First Circuit, have held that publicly available websites can rely on the CFAA to go after data scrapers, particularly where the website expressly bans data scraping.  See EF Cultural Travel BV v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003) (“If EF wants to ban scrapers, let it say so on the webpage or a link clearly marked as containing restrictions.”).  Thus, public website providers and the people who use them – as well as those who wish to scrape those sites – would benefit from the Supreme Court weighing in.

 

There are a number of state student privacy laws of which schools – and technology companies whose programs and services are being used for educational purposes during the coronavirus pandemic – should be aware.

For example, a number of states have student online personal information protection acts (SOPIPAs), which prohibit website, online/cloud service, and application vendors from sharing student data and from using that data to target advertising to students for non-educational purposes.  See, e.g., Arizona (SB 1314), Arkansas (HB 1961), California (SB 1177), Colorado (HB 1294), Connecticut (HB 5469), Delaware (see DE SB79 and SB 208), District of Columbia (B21-0578), Georgia (SB 89), Hawaii (SB 2607), Illinois (SB 1796), Iowa (HF 2354), Kansas (HB 2008), Maine (LD454/SP 183), Maryland (HB 298), Michigan (SB 510), Nebraska (LB 512), Nevada (SB 463), New Hampshire (HB 520), North Carolina (HB 632), Oregon (SB 187), Tennessee (HB 1931/SB 1900), Texas (HB 2087), Utah (HB 358), Virginia (HB 1612), and Washington (SB 5419/HB 1495).  Companies whose websites, online/cloud services, or applications are now – in view of the pandemic and remote learning situation – being used by K-12 students should make themselves aware of and compliant with these laws.

A number of states also have statutes regulating contracts between education institutions and third parties, including lists of required provisions.  See, e.g., California (AB 1584), Connecticut (Conn HB 5469 (Connecticut’s SOPIPA statute)), Colorado (HB 1423), Louisiana (HB 1076) and Utah (SB 207).  It is important that parties that rushed into remote learning situations, and relationships with third parties to make remote learning possible, review their contracts to ensure compliance with these statutes.

We discuss both of these types of statutes below.

SOPIPA Statutes

SOPIPA statutes, such as California SB 1177, apply to website, online/cloud service, and application vendors with actual knowledge that their site/service/application is used primarily for K-12 school purposes and was designed and marketed for K-12 school purposes.  The statutes (1) prohibit the website, online/cloud service, and application operators (hereinafter “Operators”) from sharing covered information; (2) require the Operators to protect covered information (i.e., secure storage and transmission); and (3) require the Operators to delete covered information upon the school district’s request.  “Covered information” is defined broadly in SOPIPA statutes, such as California SB 1177, to include any information or materials (1) provided by the student or the student’s guardian, in the course of the student’s or guardian’s use of the site or application; (2) created or provided by an employee or agent of the educational institution; or (3) gathered by the site or application that is descriptive of a student or that otherwise identifies a student.  Therefore, the scope of “covered information” under the SOPIPA statutes is much broader than the scope of protected information under FERPA.

It is unclear whether an Operator that existed before the coronavirus pandemic but was not used for K-12 school purposes, such as WhatsApp, yet is being used for K-12 school purposes now (in view of the pandemic), would need to comply with SOPIPA statutes.  An argument could be made that such Operators are not used “primarily” for K-12 school purposes, and that the website/service/application was not “designed and marketed” for K-12 school purposes.  But the meanings of terms like “primarily,” “designed,” and “marketed” are vague.  And further, to the extent such applications are being used “primarily” for education purposes now, and are being technologically tweaked and marketed for educational purposes now, the argument that SOPIPA does not apply gets weaker.  Thus, it is in companies’ best interest – if they know their website/service/application is being used by K-12 students in view of remote learning situations and the pandemic – to comply with SOPIPA statutes.

Statutes Regarding Contracts with Education Agencies/School Districts/Schools

Another set of state statutes that Operators whose products are suddenly being used for educational/remote learning purposes should be aware of are statutes governing contracts with education agencies, school districts, schools, etc.  California AB 1584 is an example of such a statute, which governs contracts that “local education agencies” or LEAs (defined as including “school districts, county offices of education, and charter schools”) enter into with third parties, including digital storage services and digital education software.

Under California AB 1584, an LEA that enters into a contract with a third party for purposes of providing digital storage/management of records (e.g., cloud-based services) or digital education software must ensure the contract contains, inter alia:

  1. a statement that pupil records continue to be the property of and under the control of the LEA;
  2. a prohibition against the third party using personally identifiable information in individual pupil records for commercial or advertising purposes;
  3. a prohibition against the third party using any information in the pupil record for any purpose other than for the requirements of the contract;
  4. a description of the procedures by which a parent/guardian/the pupil may review the pupil’s records and correct erroneous information;
  5. a description of the third party’s actions to ensure the security of pupil records;
  6. a description of the procedures for notification in the event of unauthorized disclosure;
  7. a certification that the pupil’s records shall not be retained or available to the third party upon completion of the terms of the contract;
  8. a description of how the LEA and third party will jointly ensure compliance with FERPA and COPPA; and
  9. a provision providing that a contract that fails to comply with the aforementioned requirements shall be voidable and all pupil records in the third party’s possession shall be returned to the LEA.

Under California AB 1584, “personally identifiable information” is defined as “information that may be used on its own or with other information to identify an individual pupil” and “pupil records” is defined as both (i) any information directly related to a pupil that is maintained by the LEA, and (ii) any information acquired directly from the pupil through the use of instructional software or applications assigned to the pupil by a teacher or other LEA employee.

Other states, including at least Connecticut (Conn HB 5469), Colorado (HB 1423), Louisiana (HB 1076) and Utah (SB 207) have similar laws regulating contracts with third party vendors and operators of websites and applications who utilize student information, records, and student-generated content.

In view of these statutes, schools and school districts, as well as companies that provide (i) storage services, (ii) records management services, and/or (iii) software now being used for educational purposes, should review their contracts to ensure that they contain the required provisions.  Such companies should also review their practices to ensure compliance.

 

Just last year the public was scrutinizing Big Tech for its collection and use of extraordinary amounts of data about people’s activities, from real-world location tracking to virtual lingering and clicks.  This scrutiny led to the landmark California Consumer Privacy Act, among other general privacy and data protection laws around the world. Will Big Tech now put that data to good use in the fight against COVID-19?

Google recently announced the launch of its publicly available COVID-19 Community Mobility Reports, which are based on Google Maps’ “aggregated, anonymized data showing how busy certain types of places are.”  Google explains that the “reports used aggregated, anonymized data to chart movement trends over time by geography, across different high-level categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential.”  These reports are very high level, showing percentage point increases or decreases in visits to areas of interest such as “grocery & pharmacy,” “parks,” and “transit stations,” among others.  In order to protect people’s privacy, Google states that it does “not share the absolute number of visits,” and “no personally identifiable information, like an individual’s location, contacts or movement, is made available at any point.”
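
As a rough illustration (and not Google’s actual pipeline), the percent-change figures such reports describe can be thought of as comparisons against a weekday baseline, along the following lines.  The numbers and the median-based baseline in this Python sketch are assumptions for illustration only.

```python
# Hypothetical numbers; the baseline here is the median visit count for the
# same weekday over an earlier window, one simple way to build such a metric.
from statistics import median

def percent_change_from_baseline(baseline_counts, current_count):
    """Return the rounded percent change of current_count vs. the baseline median."""
    baseline = median(baseline_counts)
    return round(100 * (current_count - baseline) / baseline)

# Aggregated (not individual-level) visit counts for "transit stations" on Wednesdays.
baseline_wednesdays = [1200, 1150, 1300, 1250, 1220]
print(percent_change_from_baseline(baseline_wednesdays, 480))  # prints -61
```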

Facebook has also been sharing location data in aggregated and anonymized form with academic and nonprofit researchers around the world, and Microsoft worked with the University of Washington to create data visualizations aiming to predict the virus’ peak in each state.

Thus far, Big Tech’s release of aggregated and anonymized data reflects a sensible, if conservative, policy that favors individual privacy protections as well as Big Tech’s ownership interests in its datasets.  But can, and should, Big Tech go further in releasing more granular and personalized information as infections continue to climb globally?  Who gets to decide the balance between the need for data in combating the COVID-19 crisis and the private interests in the data – the companies themselves or government?  With ongoing concerns about putting location data in the hands of government (or having government collect that data itself), let’s hope for the time being that Big Tech will continue to take the initiative in putting its data to good use.

A week ago the headlines of major press reported that a number of countries, like China, Israel, Singapore, and South Korea, were using surveillance to track COVID-19 within their borders.  The surveillance efforts varied by country.  Techniques included everything from drones, cameras, smartphone location data, and apps (e.g., the “TraceTogether” application being used in Singapore) to tracking devices (e.g., wristbands linked to a smartphone app being used in Hong Kong) to ensure that people were not violating quarantine orders.

Meanwhile, there was a general feeling among many in the United States that such surveillance techniques would be “un-American” and would not fly in this country.

Now, a week later, the Government has announced that it is using location data from millions of cell phones in the United States to better understand the movements of Americans during the coronavirus pandemic.  The federal government (through the CDC) and state and local governments have started to receive data from the mobile advertising industry, with the goal of creating a portal comprising data from as many as 500 U.S. cities, for government officials at all levels to use to help plan the epidemic response.

Is this legal?

It depends.  It depends on what the data shows, if the data may legally be shared, and what it is being used for.  If the data is truly anonymized, may legally be shared, and it is being used solely to show trends and locations where people are gathering (without connecting individuals to those locations), then it could very well be legal under current U.S. privacy laws and the privacy laws of most states.  But there are several hiccups.

First, is it possible to truly anonymize the data?

A report published on March 25, 2013, called “Unique in the Crowd: The privacy bounds of human mobility,” in Scientific Reports, and authored by Yves-Alexandre de Montjoye, Cesar A. Hidalgo, Michel Verleysen, and Vincent D. Blondel (https://www.nature.com/articles/srep01376), while dated, is on point.  In this study, the researchers looked at fifteen months of human mobility data for one and a half million individuals and found that human mobility traces are so highly unique that, using outside information, one can link anonymized location data back to individuals (i.e., re-identification).

Another issue with anonymization is that, as technologies continue to improve (consider, for example, the development of quantum computers), what it takes to truly anonymize data gets more and more difficult.  Thus, data that is sufficiently anonymized today may be re-identifiable in ten years.

The limitations in the degree to which location data can be anonymized can be mitigated in other ways.  For example, privacy concerns can be greatly reduced (or eliminated?) if the location data is aggregated in such a manner that an individual’s data cannot reasonably be separated from the aggregated data.
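
As a generic sketch of that kind of aggregation (not any particular company’s method, and with a made-up threshold and grid size), location pings can be coarsened into large grid cells, and any cell visited by fewer than some minimum number of distinct individuals can be suppressed:

```python
# Generic sketch: coarsen location pings into large grid cells, count visits,
# and suppress cells seen by fewer than K distinct people. K and the grid
# size are made-up values for illustration.
from collections import defaultdict

K_THRESHOLD = 10  # hypothetical minimum number of distinct individuals per cell

def to_cell(lat: float, lon: float, precision: int = 1):
    """Round coordinates to a coarse grid cell (roughly 10 km at one decimal place)."""
    return (round(lat, precision), round(lon, precision))

def aggregate(pings):
    """pings: iterable of (user_id, lat, lon) tuples.
    Returns {cell: visit_count}, dropping cells with fewer than K_THRESHOLD distinct users."""
    users_per_cell = defaultdict(set)
    visits_per_cell = defaultdict(int)
    for user_id, lat, lon in pings:
        cell = to_cell(lat, lon)
        users_per_cell[cell].add(user_id)
        visits_per_cell[cell] += 1
    return {
        cell: count
        for cell, count in visits_per_cell.items()
        if len(users_per_cell[cell]) >= K_THRESHOLD
    }

# 25 different users near the same area produce one reportable cell;
# a single user's ping elsewhere is suppressed entirely.
sample = [(f"user{i}", 38.90 + i * 0.001, -77.03) for i in range(25)]
sample.append(("user_x", 40.71, -74.01))
print(aggregate(sample))
```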

Second, are there other legal requirements or restrictions in place regarding that data?  These requirements or restrictions could come from several sources, such as federal or state legislation, a company’s privacy policy, or contractual terms.  For example, a statute may require user consent (opt-in) to share location data.  A privacy policy or contract may guarantee that location data will never be shared unless certain safeguards are in place.  A user may have requested deletion of their personal information, and thus the entity sharing the information should not even have the data (let alone be sharing it).

Third, there is the question of what the data is being used for.  In a number of countries, surveillance and location data is being used to “police” specific individuals to determine if they are violating quarantine orders.  So far the United States appears to be using the data for a more general purpose—i.e., to assess trends and whether there are gatherings of people at specific locations.  The implication, at least so far, is that nobody is going to go after the individuals who are gathering.  Instead, the data is being aggregated and used merely to help inform orders and for health-planning purposes.

But the question on many people’s minds is not what the data is being used for now, but rather what the data will be used for down the road.  For example, currently the government does not have access to location data maintained by third parties, like cell providers, ad tech companies, and social media operators.  And in order for the government to obtain that data, it needs a warrant.  See Carpenter v. U.S., 138 S. Ct. 2206 (2018) (holding that the Fourth Amendment of the U.S. Constitution protects privacy interests and expectations of privacy in one’s time-stamped cell-site location information (CSLI), notwithstanding that the information has been shared with a third party (i.e., one’s cellular provider), and thus, for the government to acquire such information, it must obtain a search warrant supported by probable cause).  Is this going to change once the coronavirus pandemic is over, at least with respect to the location data to which the government has already been provided access?  What requires the government to delete the information later?  Or to not use the data for other purposes?  Presumably there are contracts in place between the government and the third-party companies that are sharing their location data – where are these contracts, and who has a right to see them?  Are we all third-party beneficiaries of those contracts, in that we all stand to benefit from the coronavirus response efforts that result?  And if so, to the extent those contracts limit the government’s ability to use the shared data for other purposes, should individuals have a right to later enforce those limitations (as third-party beneficiaries)?