Google has agreed to pay a historic $391.5 million to settle with attorneys general from 40 U.S. states over allegations that it misled users about its location tracking and data collection practices. The settlement is the largest consumer privacy settlement ever led by state attorneys general.

The attorneys general opened the Google investigation following a 2018 Associated Press article that revealed Google “records your movements even when you explicitly tell it not to.” According to the article, Android users were misled into thinking that location tracking was turned off when the “Location History” setting was “paused” or disabled. Google, however, could continue to track a user’s location through other Google apps and settings. For example, another account setting (turned on by default and ambiguously named “Web & App Activity”) enabled the company to collect, store, and use customers’ personally identifiable location data.

In addition to the fine, Google agreed to improve transparency regarding its location data tracking and collection practices. Specifically, Google must:

  1. Show additional information to users whenever they turn a location-related account setting “on” or “off”;
  2. Make key information about location tracking unavoidable for users (i.e., not hidden); and
  3. Give users detailed information about the types of location data Google collects and how it is used on an enhanced “Location Technologies” webpage.

The Google settlement highlights the importance for companies of (1) being transparent in their data collection practices and (2) accurately conveying those practices in a user-accessible manner.

Yesterday, October 12, 2022, was the first time that a case under the Illinois Biometric Information Privacy Act (BIPA) went to trial – and the result was a big win for the Plaintiffs, more than 44,000 truck drivers whose fingerprints were scanned for identity verification purposes without any informed permission or notice. BIPA is an Illinois state law that requires informed, written consent before personal biometric information is captured, used, and stored. The law also provides a private right of action, allowing individuals whose biometric information is captured, used, or stored without informed, written consent to bring suit. In 2019, in Rosenbach v. Six Flags, the Illinois Supreme Court held that failure to comply with the statute alone constitutes harm sufficient to confer standing. This allows for suits whenever the statute is violated, regardless of whether the captured biometric information is misused in any way or the person whose information is captured experiences any real-world harm.  Since the Rosenbach ruling, many BIPA cases have been brought – but until now they had all ended in settlement.

The case, Richard Rogers v. BNSF Railway Company (Case No. 19-C-3083, N.D. Ill.), is noteworthy not only because it was the first trial in a BIPA case, but also because of (1) how damages were proved, and (2) the fact that the defendant was not itself the party capturing the data; rather, it had contracted identity verification out to a third party.

First, regarding damages, the jurors were asked only to indicate how many times the defendant recklessly or intentionally violated the law. They answered consistent with the defense expert’s estimate of the number of drivers who had their fingerprints registered: 45,600 times. The Court then entered a judgment of $228 million, based on the jury’s finding of 45,600 violations and the language of the statute, which provides for up to $5,000 for every willful or reckless violation and $1,000 for every negligent violation.
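
The arithmetic behind the award is straightforward: 45,600 willful or reckless violations × $5,000 per violation = $228,000,000.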

Second, regarding the defendant, BNSF Railway was not the party that actually collected anyone’s fingerprints. Rather, BNSF hired a third-party company – Remprex LLC – to process drivers at the gates of the railroad’s Illinois rail yards.  BNSF argued that it did not control the “method and manner” of Remprex’s work. Counsel for the Plaintiff argued that ignorance is not a defense to the law, and that if BNSF did not know about BIPA then it was acting recklessly (the railroad company has been around for 150 years in a highly regulated industry, and its subcontractor, Remprex, was just a two-person start-up when first hired). Counsel for Plaintiff also argued that a party cannot “contract out” its obligation to follow the law. Compellingly, Plaintiff’s counsel also pointed to the fact that BNSF continued its biometric data processing activities even after suit was first filed in 2019.

Today, October 7, 2022, President Joe Biden signed an executive order implementing a new privacy framework for data being shared between Europe and the United States. The new framework is called the “Trans-Atlantic Data Privacy Framework,” and it will (hopefully) serve to replace the prior framework, known as “Privacy Shield”, which was struck down by the European Court of Justice in July 2020 (in a case called Schrems II) on grounds that it did not adequately protect EU citizens from U.S. government surveillance. We wrote about the Schrems II decision here, including how it not only struck down the “Privacy Shield” framework, but also potentially called into question all EU-U.S. data transfers.

The new framework was the result of over a year of detailed negotiations between the U.S. and EU, and it is believed to address the concerns raised by the Court of Justice of the European Union (CJEU) in the Schrems II decision. If the European Commission agrees and issues an adequacy decision, the framework will serve to re-enable the flow of data between the EU and U.S., a $7.1 trillion economic relationship. So, how did the U.S. address the CJEU’s concerns?  The key principles are:

  1. A new set of rules and binding safeguards limiting U.S. intelligence authorities’ access to data to what is necessary and proportionate to protect national security, with U.S. intelligence agencies adopting procedures to ensure effective oversight of the new privacy and civil liberties standards;
  2. A two-tier redress system to investigate and resolve complaints from EU citizens who are concerned that their personal information has been improperly collected by the U.S. intelligence community, including the establishment of a new data privacy court (a data protection review court) inside the Justice Department to investigate valid complaints; and
  3. Strong obligations for companies processing data transferred from the EU, including a continued requirement to self-certify their adherence to the Principles through the U.S. Department of Commerce.

The next step is for the European Commission to assess the framework and (hopefully) issue an adequacy decision. This process could take many months.  Unless and until an adequacy decision is issued, businesses will have to continue to rely on other means for transferring EU personal data to the U.S., such as binding corporate rules or standard contractual clauses.

California Attorney General Rob Bonta announced yesterday a settlement reached with beauty product retailer Sephora, Inc. (Sephora), resolving allegations that Sephora violated various provisions of the California Consumer Privacy Act (CCPA).  Specifically, it was alleged that Sephora failed to:

  • Disclose to consumers that it was selling their personal information
  • Process user requests to opt out of sale of personal information in accordance with the CCPA
  • Cure these violations within the 30-day period currently allowed by the CCPA.

Attorney General Bonta issued a press release saying: “I hope today’s settlement sends a strong message to businesses that are still failing to comply with California’s consumer privacy law.  My office is watching, and we will hold you accountable.”

In Sephora’s case, Sephora was allowing third-party companies to install tracking software on its website and in its app so that those third parties could monitor customers as they shopped.  The third parties were tracking, inter alia, what kind of computer the customer was using, what products/brands the user put in her shopping cart, and the user’s location.  Sephora was using the information obtained from these third-party trackers to more effectively target potential customers.  Sephora’s arrangement with these third-party companies constituted a “sale” under the CCPA, which required Sephora to allow customers to opt out of such information-sharing.

Under the settlement agreement, Sephora agreed to:

  • Pay $1.2 million
  • Expressly disclose that it sells data
  • Provide opt-outs for the sale of personal information, including via the Global Privacy Control (see the sketch after this list)
  • Conform its service provider agreements to the CCPA’s requirements; and
  • Report to the AG on its sales of personal information, the status of its service provider relationships, and its efforts to honor the Global Privacy Control
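
For companies implementing these requirements, honoring the Global Privacy Control comes down to checking a browser signal. Below is a minimal sketch: the “navigator.globalPrivacyControl” property comes from the GPC proposal, while the cookie name and the “recordOptOutOfSale” helper are hypothetical illustrations, not Sephora’s actual implementation.

```typescript
// Minimal sketch: honoring the Global Privacy Control (GPC) signal client-side.
// `navigator.globalPrivacyControl` is the browser property proposed by the GPC
// draft spec; everything else here is a hypothetical illustration.

declare global {
  interface Navigator {
    globalPrivacyControl?: boolean;
  }
}

function recordOptOutOfSale(): void {
  // Hypothetical helper: persist the opt-out so third-party trackers that
  // would constitute a "sale" under the CCPA are not loaded for this visitor.
  document.cookie = "opt_out_of_sale=true; path=/; max-age=31536000";
}

if (navigator.globalPrivacyControl === true) {
  // The visitor's browser is broadcasting a GPC signal; the California AG
  // treats this as a valid request to opt out of the sale of personal information.
  recordOptOutOfSale();
}

export {};
```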

For more information on the Sephora settlement agreement, and on the Attorney General’s ongoing enforcement actions with respect to failures to process opt-out requests, please see the AG’s Press Release.

In July 2020, the Schrems II decision issued and the European Commission’s adequacy decision for the EU-US Privacy Shield Framework was invalidated.  Further, and broader than the invalidation of the Privacy Shield adequacy decision, the Schrems II judgment found that US surveillance measures interfered with what are considered “fundamental rights” under EU law, i.e., the rights to respect for private and family life, including communications, and the protection of personal data.

Following Schrems II, companies reevaluated their policies and practices surrounding the transfer of personal information out of the EU, and the safeguards (under the GDPR) that they rely on for those cross-border data transfers.  While there has been some guidance, there has been no replacement for the EU-US Privacy Shield, and US surveillance practices remain a problem under the GDPR.  Since then, more decisions have issued, making it even harder for companies that thought they had a solution.  For example, we reported a few weeks ago that an Austrian data regulator found that Google’s practice of transferring Google Analytics data to US servers for processing – purportedly under the belief that the data was not personal information, and thus did not fall under the GDPR – violated the GDPR.  According to the Austrian data regulator, because Google uses IP addresses and cookie data identifiers to track information about website visitors, that data is personal information.

This put a lot of the big US tech companies in a tough situation.  While some thought a solution was to move the processing of personal information about European subjects to the EU, Meta has recently taken a different stance, stating in its annual report last Thursday, February 3, that it is considering shutting down Facebook and Instagram in Europe if it can’t keep transferring data back to the U.S.  The annual report states, on page 9:

In August 2020, we received a preliminary draft decision from the Irish Data Protection Commission (IDPC) that preliminarily concluded that Meta Platforms Ireland’s reliance on SCCs in respect of European user data does not achieve compliance with the General Data Protection Regulation (GDPR) and preliminarily proposed that such transfers of user data from the European Union to the United States should therefore be suspended.  We believe a final decision in this inquiry may issue as early as the first half of 2022.  If a new transatlantic data transfer framework is not adopted and we are unable to continue to rely on SCCs or rely upon other alternative means of data transfers from Europe to the United States, we will likely be unable to offer a number of our most significant products and services, including Facebook and Instagram, in Europe, which would materially and adversely affect our business, financial condition, and results of operations.

Some news outlets have reported this as a “threat.”  In fact, a European lawmaker, Axel Voss, went so far as to call it “blackmail”: “#META cannot just blackmail the EU into giving up its data protection standards, leaving the EU would be their loss.”  That said, the statement does not read like a threat in the annual report.  It comes across as a matter-of-fact statement, i.e., if Meta cannot figure out any way to comply with the GDPR, it is going to have to stop transferring the restricted data from Europe to the United States.  It would seem that a lot of US companies could include similar statements in their annual reports – namely, that they may have to stop transferring personal information from the EU to the US, and that there are only two ways to do this: (1) keep processing the data, but process it outside the US (in the EU, or in a country without the data surveillance issues present in the US), or (2) stop processing the data/serving the EU market.  Obviously the former takes a lot more time, effort, money, and planning, even if it is a long-term solution for some entities.  It will be interesting to see how this plays out.

On Friday, January 28, 2022, the California Office of Attorney General issued a press release announcing that California DOJ sent notices alleging non-compliance with the California Consumer Privacy Act (CCPA) to a number of businesses operating loyalty programs in California.  The press release stated, inter alia:

“Under the CCPA, businesses that offer financial incentives, such as discounts, free items, and other rewards, in exchange for personal information must provide consumers with a notice of financial incentive.  This notice must clearly describe the material terms of the financial incentive program to the consumer before they opt in to the program.  Letters were sent today to major corporations in the retail, home improvement, travel, and food service industries, who have 30 days to cure and come into compliance with the law.”

The press release also quoted the California AG, Rob Bonta:

“In the digital age, it’s easy to forget that our data isn’t only collected when we go online.  It’s collected when we enter our phone number for a discount at the supermarket; when we use rewards for a free cup of coffee at our local coffee shop; and when we earn points to purchase items at our favorite clothing store… We may not always realize it, but these brick and mortar stores are collecting our data – and they’re finding out new ways to profit from it.”

Under the CCPA regulations, a “financial incentive” is defined broadly to mean “a program, benefit, or other offering, including payment to consumers, related to the collection, deletion, or sale of personal information.”  Cal. Code Regs. tit. 11, Section 999.301(j).

Prior to these notices and a July 2021 press release regarding a similar notice, arguments had been made that loyalty programs were not offering financial incentives for the collection of personal information, and thus were not covered by Section 1798.125 of the CCPA.  This argument seemingly hinged largely on the name itself – “loyalty program” – which implies that financial incentives are given in recognition of repeat purchasing behavior.  Others have argued that just because loyalty programs are designed to reward loyal customers does not mean that they do not also provide important personal information to businesses (e.g., purchasing habits – who likes to shop where, when, and buy what).

The CCPA requires that companies offering financial incentives in exchange for personal information meet certain criteria, including: (1) notifying the customer of the financial incentive (CCPA 1798.125(b)(2) and 1798.135); (2) obtaining the customer’s “opt-in consent” to the “material terms” of the financial incentive program before the customer opts in (CCPA 1798.125(b)(3)); and (3) permitting the customer to revoke that consent at any time (id.).

The CCPA regulations provide more guidance.  A Notice of Financial Incentive must include the following:

  1. A succinct summary of the financial incentive or price or service difference offered;
  2. A description of the material terms of the financial incentive or price difference, including the categories of personal information that are implicated by the financial incentive or price or service difference and the value of the consumer’s data;
  3. How the consumer can opt in to the financial incentive or price or service difference;
  4. A statement of the consumer’s right to withdraw from the financial incentive at any time and how the consumer may exercise that right; and
  5. An explanation of how the financial incentive or price or service difference is reasonably related to the value of the consumer’s data, including (a) a good-faith estimate of the value of the consumer’s data that forms the basis for offering the financial incentive or price or service difference; and (b) a description of the method the business used to calculate the value of the consumer’s data.

Cal. Code Regs. tit. 11, Section 999.307 (emphasis added).

According to the quote from AG Bonta in Friday’s press release, it appears that at least some of the non-compliance notices may have targeted companies’ brick-and-mortar activities – e.g., entering phone numbers at check-out.  It is also noteworthy that a grocery store loyalty program was expressly mentioned in the July 2021 press release referenced above: “A grocery chain required consumers to provide personal information in exchange for participation in its company loyalty programs.  The company did not provide a Notice of Financial Incentive to participating customers.  After being notified of alleged noncompliance, the company amended its privacy policy to include a Notice of Financial Incentive.”

Under CCPA, businesses that receive notices of non-compliance have 30 days to cure or fix the alleged violation before an enforcement action can be initiated.

On August 13, 2018, the Associated Press published a story: “Google tracks your movements, like it or not.” According to the article, computer-science researchers at Princeton confirmed findings that “many Google services on Android devices and iPhones store your location data even if you’re using a privacy setting that says it will prevent Google from doing so.”  The article featured a map showing the locations that Google had tracked a researcher as having traveled to over several days, even though the researcher had his “Location History” turned off the whole time.  Google apparently explained this away on grounds that turning “Location History” off only prevented Google from adding movements to the “timeline” (its visualization of a user’s daily travels), but did not stop Google from collecting location data.  To stop the collection of location data, another setting – called “Web & App Activity” – had to be turned off.  Notwithstanding this, the AP reported that Google’s support page stated at the time: “You can turn off Location History at any time.  With Location History off, the places you go are no longer stored.”

Fast forward three years, and the Attorney General of Arizona, and most recently (this past Monday) four Attorneys General from D.C., Indiana, Texas, and Washington, have sued Google for deceiving customers to gain access to their location data.  Google is alleged to have used “dark patterns” – “tricks” embedded into website and application user interfaces that are used to influence users’ decisions or make users do or allow things that they didn’t mean to do or allow.  Here, Google is alleged to have used “dark patterns” to gain access to location-tracking data even after users thought they had disallowed Google from accessing that information.  Washington, D.C. Attorney General Karl Racine said in a statement: “Google falsely led consumers to believe that changing their account and device settings would allow customers to protect their privacy and control what personal data the company could access.  … The truth is that contrary to Google’s representations it continues to systematically surveil customers and profit from customer data.  Google’s bold misrepresentations are a clear violation of consumers’ privacy.”

Tuesday, Google issued a blog post responding to the recent complaints.

It has been nearly a year and a half since the Schrems II decision issued in July 2020, invalidating the European Commission’s adequacy decision for the EU-US Privacy Shield Framework.  As a result, companies were forced to reexamine their transfers of personal information out of the EU, and the safeguards that they rely on for those cross-border data transfers.  Some companies, instead of addressing the safeguards they had in place, took a hard look at the data they were transferring.  Did they need to transfer it out of the EU?  Was it even personal information?  This latter issue was recently addressed by Austria’s data regulator, one of 27 GDPR enforcers.  While Google argued that the data at issue was not personal information, the regulator disagreed.  It remains to be seen whether other data regulators will issue similar decisions, and if so, what the fate of US technology companies in Europe will be.

In a recent decision by Austria’s data regulator, it was held that a website’s use of Google Analytics violates the GDPR because the service uses IP addresses and cookie data identifiers to track information about website visitors, such as the pages read, how long the visitor stays on the site, and information about the visitor’s device.  The Austrian decision held that IP addresses and cookie data identifiers are personal information.  Thus, when information tied to these identifiers is passed through Google’s servers in the United States, the GDPR is implicated.  Specifically, the GDPR provides that in the case of non-EU transfers of personal data, there must be appropriate safeguards in effect to protect the data.  The problem is that, after Schrems II, (1) there is no longer an adequacy decision by the EU for US data transfers, and (2) it is unclear whether other safeguard measures, such as standard contractual clauses (SCCs) or binding corporate rules (BCRs), are sufficient in view of US surveillance practices under Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order 12333.  In other words, there may be no appropriate safeguards that US technology companies can implement to allow for GDPR-compliant cross-border data transfers.
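
To make the “cookie data identifiers” at issue concrete: Google Analytics stores a persistent client ID in a first-party cookie, commonly named “_ga”, and that ID accompanies every hit sent to Google’s servers.  A minimal sketch of extracting it follows; the cookie format shown matches typical deployments, but the parsing details should be treated as illustrative assumptions.

```typescript
// Illustrative sketch: reading the Google Analytics client ID from the `_ga`
// cookie. The value commonly looks like "GA1.2.123456789.1600000000", where the
// last two segments form the persistent client ID. Formats vary by deployment,
// so treat this parsing as an assumption rather than a guaranteed contract.

function getGaClientId(cookieString: string): string | null {
  const match = cookieString.match(/(?:^|;\s*)_ga=([^;]+)/);
  if (!match) return null;
  const parts = match[1].split(".");
  // It is this stable, per-browser identifier (combined with the IP address)
  // that the Austrian regulator found makes visitors "distinguishable and
  // addressable", and therefore personal data under the GDPR.
  return parts.length >= 4 ? parts.slice(-2).join(".") : null;
}

// Example: getGaClientId("_ga=GA1.2.123456789.1600000000")
// returns "123456789.1600000000".
```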

The recent Austrian decision provides that, “US intelligence services use certain online identifiers (such as IP address or unique identification numbers) as a starting point for the surveillance of individuals.”  Google had argued that it implemented measures to protect the data in the US, but these were found insufficient to meet the GDPR.  Indeed, the very “IDs” that Google pointed to as purportedly constituting pseudonymization safeguards were found to make users identifiable and addressable:

“…the use of IP addresses, cookie IDs, advertising IDs, unique user IDs or other identifiers to (re)identify users do not constitute appropriate safeguards to comply with data protection principles or to safeguard the rights of data subjects.  This is because, unlike in cases where data is pseudonymized in order to disguise or delete the identifying data so that the data subjects can no longer be addressed, IDs or identifiers are used to make the individuals distinguishable and addressable.  Consequently, there is no protective effect.  They are therefore not pseudonymizations within the meaning of Recital 28, which reduces the risks for the data subjects and assists data controllers and processors in complying with their data protection obligations.”

It remains to be seen whether other EU regulators will follow suit and hold that the GDPR is violated where European websites use Google Analytics or similar US technology services.  It will also be interesting to see whether European companies begin shifting their adtech and analytics services to domestic European providers.

France’s data regulator, the CNIL, recently fined Alphabet Inc’s Google $169 million and Meta Platforms’ Facebook $67 million on grounds that the companies violated the EU e-Privacy directive (aka the EU “Cookie Law”) by requiring too many “clicks” for users to reject cookies.  The result was that many users simply accepted the cookies, thus allowing the identifiers to track their data.  The CNIL gave the companies three months to come up with a solution that makes it as easy to reject cookies as to accept them.  This is an important message for all companies as they review their cookie compliance in 2022 – make it as easy to refuse a cookie as it is to accept one.
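
What does “as easy to reject as to accept” look like in practice?  A minimal sketch of a symmetric consent banner follows; the element IDs and helper functions are hypothetical illustrations, not the remediation Google or Meta actually adopted.

```typescript
// Minimal sketch of a symmetric cookie banner: exactly one click to accept and
// one click to reject. All IDs and helpers here are hypothetical illustrations.

function loadNonEssentialCookies(): void {
  // Hypothetical helper: only now inject analytics and advertising scripts.
}

function recordChoice(consented: boolean): void {
  // Persist the choice (six months here) so the banner is not re-shown.
  document.cookie = `cookie_consent=${consented ? "accepted" : "rejected"}; path=/; max-age=15552000`;
  if (consented) loadNonEssentialCookies();
  document.getElementById("cookie-banner")?.remove();
}

// "Accept all" and "Reject all" are equally prominent, single-click buttons.
// The CNIL's objection was to interfaces where accepting took one click but
// rejecting required navigating several screens.
document.getElementById("accept-all")?.addEventListener("click", () => recordChoice(true));
document.getElementById("reject-all")?.addEventListener("click", () => recordChoice(false));
```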

It is interesting to note that these recent fines were not issued under the GDPR, but rather under the older e-Privacy directive, which has been in effect since 2002.  Unlike the GDPR, under which enforcement against a company is generally led by the regulator in the country where the company has its European headquarters, the e-Privacy directive allows regulators to issue fines against any company that does business in their jurisdiction.

The EU Cookie Law (which is not actually a law, but a directive) came into effect in 2002 and was amended in 2009 (with the amendment effective since 2011).  The directive regulates the processing of personal data in the electronic communications sector, and specifically it regulates the use of cookies on websites by conditioning their use on the prior consent of users.  Unless cookies are deemed strictly necessary for the most basic functions of a website (e.g., cookies that manage shopping cart contents), users must be given clear and comprehensive information about the purposes for which their data is processed, stored, retained, and accessed, and they must be given the ability to consent as well as a way to refuse consent.

The U.K. released a National AI Strategy with a ten-year plan to make Britain a global AI superpower in our new age of artificial intelligence.  The Strategy intends to “signal to the world [the U.K.’s] intention to build the most pro-innovation regulatory environment in the world; to drive prosperity across the UK and ensure everyone can benefit from AI; and to apply AI to help solve global challenges like climate change.”

As part of its early key actions, the U.K. intends to launch before the end of the year a consultation through the Intellectual Property Office (IPO) on copyright for computer-generated works, on text and data mining, and on patents for AI-devised inventions.  (n.b., the UK Court of Appeal recently ruled 2-1 that an AI entity cannot legally be named as an inventor on a patent.)  Additionally, the U.K. is engaged in an ongoing consultation on the U.K.’s data protection regime.  The data consultation highlights that, as a result of Brexit and the U.K.’s departure from the European Union, “the UK can reshape its approach to regulation and seize opportunities with its new regulatory freedoms.” For example, Article 22 of the EU’s data protection regime, which encompasses protections against automated decision-making, may be on the chopping block for the U.K.’s data protection regime.

The U.K.’s approach to data in particular underscores the regulatory balance needed to ensure that data is readily accessible for a thriving AI ecosystem but is not used in a manner that causes harm to individuals and society.  For a deeper discussion on all things data with a focus on AI and ML-enabled technology, the American Intellectual Property Law Association is hosting a virtual Data Roadshow on September 30, 2021.  Leading attorneys from industry, private practice, and the public sector will provide insight and practice tips for navigating this evolving and exciting area of law.