In July 2020, the Schrems II decision issued, and with it the European Commission's adequacy decision for the EU-US Privacy Shield Framework was invalidated.  Further, and more broadly than the invalidation of the Privacy Shield adequacy decision, the Schrems II judgment found that US surveillance measures interfere with what are considered "fundamental rights" under EU law, i.e., the rights to respect for private and family life, including communications, and to the protection of personal data.

Following Schrems II, companies reevaluated their policies and practices surrounding the transfer of personal information out of the EU, and the safeguards (under the GDPR) that they rely on for those cross-border data transfers.  While there has been some guidance, there has been no replacement for the EU-US Privacy Shield, and US surveillance practices remain a problem under the GDPR.  Since then, more decisions have issued, making things even harder for companies that thought they had a solution.  For example, we reported a few weeks ago that Google has been transferring Google Analytics data to US servers for processing – purportedly under the belief that the data was not personal information and thus did not fall under the GDPR – and that an Austrian data regulator recently found the practice to violate the GDPR.  According to the Austrian data regulator, because Google uses IP addresses and cookie data identifiers to track information about website visitors, that data is personal information.

This put a lot of the big US tech companies in a tough situation.  While some saw a solution in moving the processing of personal information about European data subjects to the EU, Meta has recently taken a different stance, stating in its annual report last Thursday, February 3, that it is considering shutting down Facebook and Instagram in Europe if it cannot keep transferring data back to the U.S.  The annual report states, on page 9:

In August 2020, we received a preliminary draft decision from the Irish Data Protection Commission (IDPC) that preliminarily concluded that Meta Platforms Ireland’s reliance on SCCs in respect of European user data does not achieve compliance with the General Data Protection Regulation (GDPR) and preliminarily proposed that such transfers of user data from the European Union to the United States should therefore be suspended.  We believe a final decision in this inquiry may issue as early as the first half of 2022.  If a new transatlantic data transfer framework is not adopted and we are unable to continue to rely on SCC’s or rely upon other alternative means of data transfers from Europe to the United States, we will likely be unable to offer a number of our most significant products and services, including Facebook and Instagram, in Europe, which would materially and adversely affect our business, financial condition, and results of operations.

Some news outlets have reported this as a "threat."  In fact, a European lawmaker, Axel Voss, went so far as to call it "blackmail": "#META cannot just blackmail the EU into giving up its data protection standards, leaving the EU would be their loss."  That said, the statement does not read like a threat in the annual report.  It comes across as a matter-of-fact statement: if Meta cannot figure out any way to comply with the GDPR, it is going to have to stop transferring the restricted data from Europe to the United States.  It would seem that a lot of US companies could make similar statements in their annual reports, namely, that they may have to stop transferring personal information from the EU to the US, and that there are only two ways to do that: (1) keep processing the data, but process it outside the US (in the EU, or in a country without the surveillance issues that exist in the US), or (2) stop processing the data and serving the EU market.  Obviously the former takes far more time, effort, money, and planning, even if it is a long-term solution for some entities.  It will be interesting to see how this plays out.

On Friday, January 28, 2022, the California Office of the Attorney General issued a press release announcing that the California DOJ had sent notices alleging non-compliance with the California Consumer Privacy Act (CCPA) to a number of businesses operating loyalty programs in California.  The press release stated, inter alia:

“Under the CCPA, businesses that offer financial incentives, such as discounts, free items, and other rewards, in exchange for personal information must provide consumers with a notice of financial incentive.  This notice must clearly describe the material terms of the financial incentive program to the consumer before they opt in to the program.  Letters were sent today to major corporations in the retail, home improvement, travel, and food service industries, who have 30 days to cure and come into compliance with the law.”

The press release also quoted the California AG, Rob Bonta:

“In the digital age, it’s easy to forget that our data isn’t only collected when we go online.  It’s collected when we enter our phone number for a discount at the supermarket; when we use rewards for a free cup of coffee at our local coffee shop; and when we earn points to purchase items at our favorite clothing store… We may not always realize it, but these brick and mortar stores are collecting our data – and they’re finding out new ways to profit from it.”

Under the CCPA regulations, a "financial incentive" is defined broadly to mean "a program, benefit, or other offering, including payment to consumers, related to the collection, deletion, or sale of personal information."  Cal. Code Regs. tit. 11, Section 999.301(j).

Prior to these notices and a July 2021 press release regarding a similar notice, arguments had been made that loyalty programs were not offering financial incentives for the collection of personal information, and thus were not covered by Section 1798.125 of the CCPA.  This argument seemingly hinged largely on the name itself – "loyalty program" – which implies that the incentives are offered in recognition of repeat purchasing behavior.  Others have argued that just because loyalty programs are designed to reward loyal customers does not mean that they do not also provide businesses with valuable personal information (e.g., purchasing habits – who likes to shop where, when, and for what).

The CCPA requires that companies offering financial incentives in exchange for personal information meet certain criteria, including: (1) notifying the customer of the financial incentive (CCPA 1798.125(b)(2) and 1798.135); (2) obtaining the customer's "opt-in consent" to the "material terms" of the financial incentive program before the customer opts in (CCPA 1798.125(b)(3)); and (3) permitting the customer to revoke that consent at any time (id.).

The CCPA regulations provide more guidance.  A Notice of Financial Incentive must include the following:

  1. A succinct summary of the financial incentive or price or service difference offered;
  2. A description of the material terms of the financial incentive or price difference, including the categories of personal information that are implicated by the financial incentive or price or service difference and the value of the consumer’s data;
  3. How the consumer can opt-in to the financial incentive or price or service difference;
  4. A statement of the consumer’s right to withdraw from the financial incentive at any time and how the consumer may exercise that right; and
  5. An explanation of how the financial incentive or price or service difference is reasonably related to the value of the consumer’s data, including (a) a good-faith estimate of the value of the consumer’s data that forms the basis for offering the financial incentive or price or service difference; and (b) a description of the method the business used to calculate the value of the consumer’s data.

Cal. Code Regs. tit. 11, Section 999.307 (emphasis added).
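
The regulations require both a good-faith estimate of the value of the consumer's data (item 5 above) and a description of the calculation method, but they do not prescribe any particular valuation formula.  As a purely illustrative sketch (all figures and field names below are hypothetical and are not drawn from the regulation or from any business's notice), one way a business might document such an estimate is to compare loyalty members' incremental spend against the cost of the rewards provided:

```typescript
// Hypothetical, illustrative only: one possible "good-faith estimate" method for a
// Notice of Financial Incentive. The figures and field names are invented for this
// sketch; the CCPA regulations do not mandate any particular valuation formula.

interface LoyaltyProgramFigures {
  annualRevenuePerMember: number;     // average annual spend of a loyalty member
  annualRevenuePerNonMember: number;  // average annual spend of a comparable non-member
  annualRewardCostPerMember: number;  // average annual cost of discounts/rewards per member
}

// Estimate the annual value of a member's data as the incremental revenue the
// program generates, net of the rewards given back to the member.
function estimateAnnualDataValue(figures: LoyaltyProgramFigures): number {
  const incrementalRevenue =
    figures.annualRevenuePerMember - figures.annualRevenuePerNonMember;
  return incrementalRevenue - figures.annualRewardCostPerMember;
}

// Example with made-up numbers: ($600 - $450) - $40 = $110 per member per year.
const estimate = estimateAnnualDataValue({
  annualRevenuePerMember: 600,
  annualRevenuePerNonMember: 450,
  annualRewardCostPerMember: 40,
});
console.log(`Estimated annual value of a member's data: $${estimate}`);
```

Whatever method a business settles on, both the resulting estimate and the method itself must be described in the Notice of Financial Incentive.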

According to the quote from AG Bonta in Friday's press release, it appears that at least some of the non-compliance notices may have targeted companies' brick-and-mortar activities – e.g., entering phone numbers at check-out.  It is also noteworthy that a grocery store loyalty program was expressly mentioned in the July 2021 press release referenced above: "A grocery chain required consumers to provide personal information in exchange for participation in its company loyalty programs.  The company did not provide a Notice of Financial Incentive to participating customers.  After being notified of alleged noncompliance, the company amended its privacy policy to include a Notice of Financial Incentive."

Under the CCPA, businesses that receive notices of non-compliance have 30 days to cure the alleged violation before an enforcement action can be initiated.

On August 13, 2018, the Associated Press published a story: "Google tracks your movements, like it or not."  According to the article, computer-science researchers at Princeton confirmed findings that "many Google services on Android devices and iPhones store your location data even if you're using a privacy setting that says it will prevent Google from doing so."  The article featured a map of the locations a researcher had traveled to over several days, as tracked by Google, even though the researcher had his "Location History" turned off the whole time.  Google apparently explained this away on the grounds that turning "Location History" off only prevented Google from adding movements to the "timeline" (its visualization of a user's daily travels); it did not stop Google from collecting location data.  To stop the collection of location data, another setting – called "Web and App Activity" – had to be turned off.  Notwithstanding this, the AP reported that Google's support page stated at the time: "You can turn off Location History at any time.  With Location History off, the places you go are no longer stored."

Fast forward three years: the Attorney General of Arizona, and most recently (this past Monday) four Attorneys General from D.C., Indiana, Texas, and Washington, have sued Google for deceiving customers to gain access to their location data.  Google is alleged to have used "dark patterns" – "tricks" embedded into website and application user interfaces that influence users' decisions or get users to do or allow things they did not mean to do or allow – to gain access to location-tracking data even after users thought they had disallowed Google from accessing that information.  Washington, D.C. Attorney General Karl Racine said in a statement: "Google falsely led consumers to believe that changing their account and device settings would allow customers to protect their privacy and control what personal data the company could access.  … The truth is that contrary to Google's representations it continues to systematically surveil customers and profit from customer data.  Google's bold misrepresentations are a clear violation of consumers' privacy."

On Tuesday, Google issued a blog post responding to the recent complaints.

It has been nearly a year and a half since the Schrems II decision issued in July 2020, invalidating the European Commission's adequacy decision for the EU-US Privacy Shield Framework.  As a result, companies were forced to reexamine their transfers of personal information out of the EU and the safeguards they rely on for those cross-border data transfers.  Some companies, instead of addressing the safeguards they had in place, took a hard look at the data they were transferring.  Did they need to transfer it out of the EU?  Was it even personal information?  This latter issue was recently addressed by an Austrian data regulator, one of 27 GDPR enforcers.  While Google argued that the data was not personal information, the data regulator disagreed.  It remains to be seen whether other data regulators will issue similar decisions and, if so, what the fate of US technology companies in Europe will be.

In a recent decision, Austria's data regulator held that a website's use of Google Analytics violates the GDPR because the service uses IP addresses and cookie data identifiers to track information about website visitors, such as the pages read, how long a visitor stays on the site, and information about the visitor's device.  The Austrian decision held that IP addresses and cookie data identifiers are personal information.  Thus, when information tied to these identifiers is passed through Google's servers in the United States, the GDPR is implicated.  Specifically, the GDPR provides that, in the case of transfers of personal data outside the EU, there must be appropriate safeguards in effect to protect the data.  The problem is that, after Schrems II, (1) there is no longer an EU adequacy decision covering US data transfers, and (2) it is unclear whether other safeguard measures, such as standard contractual clauses (SCCs) or binding corporate rules (BCRs), are sufficient in view of US surveillance practices under Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order 12333.  In other words, there may be no appropriate safeguards that US technology companies can implement to allow for GDPR-compliant cross-border data transfers.
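
To make the "cookie data identifiers" at issue concrete: Google Analytics typically stores a client identifier in a first-party cookie (commonly named "_ga"), and that identifier, together with the visitor's IP address, accompanies the hits sent to Google's servers.  The following is a minimal sketch, assuming the common "_ga" cookie format (the sample value shown is invented), of how such a persistent identifier can be read in the browser – which is precisely why regulators treat it as data relating to an identifiable individual rather than anonymous data:

```typescript
// Minimal sketch: extract the persistent client ID that Google Analytics assigns to a
// browser, assuming the common "_ga" cookie format, e.g. "GA1.2.1234567890.1609459200"
// (sample value invented). The last two segments form a per-browser identifier that
// persists across visits, which is why it is treated as relating to an individual.

function getGoogleAnalyticsClientId(cookieString: string): string | null {
  // Look for a cookie literally named "_ga" (ignores GA4-style "_ga_<container>" cookies).
  const match = cookieString.match(/(?:^|;\s*)_ga=([^;]+)/);
  if (!match) {
    return null; // no Google Analytics cookie present
  }
  const parts = match[1].split(".");
  // Typical format: GA1.<domain depth>.<random id>.<first-visit timestamp>
  return parts.length >= 4 ? parts.slice(-2).join(".") : match[1];
}

// Usage in a browser context (value shown is hypothetical):
// getGoogleAnalyticsClientId(document.cookie);  // e.g. "1234567890.1609459200"
```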

The recent Austrian decision states that "US intelligence services use certain online identifiers (such as IP address or unique identification numbers) as a starting point for the surveillance of individuals."  Google had argued that it implemented measures to protect the data in the US, but these were found insufficient to meet the GDPR.  Indeed, the very "IDs" that Google pointed to as purportedly constituting pseudonymization safeguards were found to make users identifiable and addressable:

“…the use of IP addresses, cookie IDs, advertising IDs, unique user IDs or other identifiers to (re)identify users do not constitute appropriate safeguards to comply with data protection principles or to safeguard the rights of data subjects.  This is because, unlike in cases where data is pseudonymized in order to disguise or delete the identifying data so that the data subjects can no longer be addressed, IDs or identifiers are used to make the individuals distinguishable and addressable.  Consequently, there is no protective effect.  They are therefore not pseudonymizations within the meaning of Recital 28, which reduces the risks for the data subjects and assist data controllers and processors in complying with their data protection obligations.”

It remains to be seen whether other EU regulators will follow suit and hold that the GDPR has been violated where European websites use Google Analytics or similar US technology services.  It will also be interesting to see whether European companies start moving their adtech and analytics services to national providers.

France recently fined Alphabet Inc.'s Google $169 million and Meta Platforms' Facebook $67 million on the grounds that the companies violated the EU e-Privacy directive (aka the EU "Cookie Law") by requiring too many "clicks" for users to reject cookies.  The result was that many users simply accepted the cookies, thus allowing the identifiers to track their data.  The French regulator gave the companies three months to come up with a solution that makes it as easy to reject cookies as it is to accept them.  This is an important message for all companies as they review their cookie compliance in 2022: make it as easy to refuse a cookie as it is to accept one.

It is interesting to note that these recent fines were not issued under the GDPR, but rather under the older e-Privacy directive, which has been in effect since 2002.  Unlike the GDPR, under which a regulator generally may only pursue companies that have their European headquarters in its own country, the e-Privacy directive allows a regulator to fine any company that does business in its jurisdiction.

The EU Cookie Law (which is not actually a law, but a directive) came into effect in 2002 and was amended in 2009 (with the amendment effective since 2011).  The directive regulates the processing of personal data in the electronic communications sector and, specifically, regulates the use of cookies on websites by conditioning their use on users' prior consent.  Unless cookies are deemed strictly necessary for the most basic functions of a website (e.g., cookies that manage shopping cart contents), users must be given clear and comprehensive information about the purposes of data processing, storage, retention, and access, and they must be able to give their consent and be provided with a way to refuse it.
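
To illustrate what the French regulator is asking for, here is a rough, hypothetical sketch (the element IDs, storage key, and loadAnalytics placeholder are invented for this example, not taken from any regulator's guidance): a banner that presents "Accept" and "Reject" with equal prominence, sets nothing optional by default, and loads non-essential cookies only after an affirmative acceptance.

```typescript
// Hypothetical sketch of a consent banner that makes rejecting cookies as easy as
// accepting them: one click either way, equal prominence, and no non-essential
// cookies are set until (and unless) the visitor affirmatively accepts.
// All IDs, keys, and the loadAnalytics() placeholder are invented for illustration.

type ConsentChoice = "accepted" | "rejected";

const CONSENT_KEY = "cookie-consent"; // hypothetical localStorage key

function renderConsentBanner(onChoice: (choice: ConsentChoice) => void): void {
  const banner = document.createElement("div");
  banner.setAttribute("role", "dialog");
  banner.innerHTML = `
    <p>We use optional analytics cookies. Strictly necessary cookies are always on.</p>
    <button id="consent-accept">Accept</button>
    <button id="consent-reject">Reject</button>
  `;
  document.body.appendChild(banner);

  const choose = (choice: ConsentChoice) => {
    localStorage.setItem(CONSENT_KEY, choice);
    banner.remove();
    onChoice(choice);
  };
  banner.querySelector("#consent-accept")!.addEventListener("click", () => choose("accepted"));
  banner.querySelector("#consent-reject")!.addEventListener("click", () => choose("rejected"));
}

function loadAnalytics(): void {
  // Placeholder: only here would analytics or advertising scripts be injected.
}

// On page load: honor a stored choice, otherwise ask. Nothing optional runs
// unless the visitor has accepted.
const stored = localStorage.getItem(CONSENT_KEY) as ConsentChoice | null;
if (stored === "accepted") {
  loadAnalytics();
} else if (stored === null) {
  renderConsentBanner((choice) => {
    if (choice === "accepted") loadAnalytics();
  });
}
```

The design point the recent fines turn on is the symmetry: rejecting must take no more clicks than accepting.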

The U.K. released a National AI Strategy, a ten-year plan to make Britain a global AI superpower.  The Strategy intends to "signal to the world [the U.K.'s] intention to build the most pro-innovation regulatory environment in the world; to drive prosperity across the UK and ensure everyone can benefit from AI; and to apply AI to help solve global challenges like climate change."

As part of its early key actions, the U.K. intends to launch, before the end of the year, a consultation through the Intellectual Property Office (IPO) on copyright for computer-generated works and for text and data mining, and on patents for AI-devised inventions.  (N.b., the UK Court of Appeal recently ruled 2-1 that an AI entity cannot be legally named as an inventor on a patent.)  Additionally, the U.K. is engaged in an ongoing consultation on the U.K.'s data protection regime.  The data consultation highlights that, as a result of Brexit and the U.K.'s departure from the European Union, "the UK can reshape its approach to regulation and seize opportunities with its new regulatory freedoms."  For example, Article 22 of the EU's data protection regime, which encompasses protections against automated decision-making, may be on the chopping block for the U.K.'s data protection regime.

The U.K.’s approach to data in particular underscores the regulatory balance needed to ensure that data is readily accessible for a thriving AI ecosystem but is not used in a manner that causes harm to individuals and society.  For a deeper discussion on all things data with a focus on AI and ML-enabled technology, the American Intellectual Property Law Association is hosting a virtual Data Roadshow on September 30, 2021.  Leading attorneys from industry, private practice, and the public sector will provide insight and practice tips for navigating this evolving and exciting area of law.

On August 20, 2021, China passed its first general data protection law, the Personal Information Protection Law ("PIPL").  The law is set to take effect on November 1, 2021 (two months away), and it applies both to (1) in-country processing of the personal information of natural persons; and (2) out-of-country processing of the personal information of natural persons who are in China, if such processing is (a) for the purpose of providing products or services to those people; (b) to analyze or evaluate the behavior of those people; or (c) under other circumstances prescribed by laws and administrative regulations.  Thus, the PIPL will become one more thing that companies have to consider in weighing questions of where to store which user data.

While much of the PIPL is similar to the GDPR – such as the definitions of "personal information" and "processing"; the requirement of a legal basis for processing personal information; and the provision of various individual rights with respect to personal information (e.g., portability, correction and deletion, and restriction or prohibition of processing) – there are differences, and companies to which the law applies should review their policies and practices carefully to ensure compliance.

Two ways in which the PIPL stands out from some other general data protection laws are its data localization requirement and its cross-border transfer requirements.

First, the law provides that critical information infrastructure ("CII") operators (such as those in government systems, utilities, the financial system, and public health) and entities processing a large volume of personal information must store personal information within the territory of mainland China.  Of note, every company operating in China would be well advised to conduct a self-assessment to determine whether it may be deemed a CII operator.  In order for such information to be transferred outside of China, the transfer must pass a government-administered security assessment.

Second, cross-border transfer of personal information is allowed (for processors other than CII operators and large-volume processors) if the processor meets one of the following: (i) it passes a security assessment organized by the Cyberspace Administration of China (CAC); (ii) it is certified for personal information protection by a specialized agency in accordance with CAC rules; or (iii) it enters into a contract with the overseas recipient under the standard contract formulated by the CAC.  (Of note, it appears that despite the law going into effect in two months, no "standard contract" has been published yet.)

Penalties for violations of PIPL include, inter alia, an administrative fine of up to RMB 50 million or 5% of the processor’s turnover in the last year (it is unclear if this refers to local turnover or global turnover).

At this point you have probably heard about one of the many incidents in which an AI-enabled system discriminated against certain populations in settings such as healthcare, law enforcement, and hiring, among others.  In response to this problem, the National Institute of Standards and Technology (NIST) recently proposed a strategy for identifying and managing bias in AI, with emphasis on biases that can lead to harmful societal outcomes.  The NIST authors summarize:

“[T]here are many reasons for potential public distrust of AI related to bias in systems. These include:

  • The use of datasets and/or practices that are inherently biased and historically contribute to negative impacts
  • Automation based on these biases placed in settings that can affect people’s lives, with little to no testing or gatekeeping
  • Deployment of technology that is either not fully tested, potentially oversold, or based on questionable or non-existent science causing harmful and biased outcomes.”

As a starting point, the NIST authors outline an approach for evaluating the presentation of bias in three stages modeled on the AI lifecycle: pre-design, design and development, and deployment.  In addition, NIST will host a variety of activities in 2021 and 2022 in each area of the core building blocks of trustworthy AI (accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and bias).  NIST is currently accepting public comments on the proposal until September 10, 2021.

Notably, the proposal points out that "most Americans are unaware when they are interacting with AI enabled tech but feel there needs to be a 'higher ethical standard' than with other forms of technologies," which "mainly stems from the perceptions of fear of loss of control and privacy."  From a regulatory perspective, there is currently no federal data protection law in the US that broadly mirrors Europe's GDPR Art. 22 with respect to automated decision-making – "the right to not be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."  But several U.S. jurisdictions have passed laws that more narrowly regulate AI applications with the potential to cause acute societal harms, such as the use of facial recognition technology in law enforcement or in interviewing processes, and more regulation is likely as (biased) AI-enabled technology continues to proliferate into more settings.

In Van Buren v. United States, the Supreme Court resolved a circuit split as to whether a provision of the Computer Fraud and Abuse Act (CFAA) applies only to those who obtain information to which their computer access does not extend, or more broadly also encompasses those who misuse access that they otherwise have.  By way of background, the CFAA subjects to criminal liability anyone who "intentionally accesses a computer without authorization or exceeds authorized access," and thereby obtains computer information.  18 U.S.C. 1030(a)(2).  The term "exceeds authorized access" is defined to mean "to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter."  18 U.S.C. 1030(e)(6).

The case involved a police sergeant who used his patrol-car computer and his valid credentials to access a law enforcement database in order to obtain license plate records in exchange for money.  The sergeant's use of the database violated the department's policy against using the database for non-law-enforcement purposes, including personal use.  At trial, the Government told the jury that the sergeant's access of the database for non-law-enforcement purposes violated the CFAA's prohibition, as the Government framed it, against using a computer network in a way that one's job or employer's policy prohibits.  The jury convicted the sergeant, and the District Court sentenced him to 18 months in prison.  The Eleventh Circuit affirmed, consistent with its precedent adopting the broader view of the CFAA.

The parties agreed that the sergeant accessed a computer with authorization when he used his valid credentials to log in to the law enforcement database, and that he obtained information when he acquired the license-plate records; the dispute was whether the sergeant was "entitled so to obtain" those records.  After analyzing the language of the statute and the policy behind the CFAA, the Court held that an individual "exceeds authorized access" under the CFAA when he accesses a computer with authorization but then obtains information located in particular areas of the computer – such as files, folders, or databases – that are off limits to him.  Because the sergeant was permitted to use his credentials to obtain the license plate information, he did not exceed authorized access to the database under the terms of the CFAA.

In reaching its holding, the Court noted that if "the 'exceeds authorized access' clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals."  Accordingly, given this narrowing of the CFAA, the decision is a good reminder to ensure that the policies and agreements, including terms of use, that govern access to sensitive electronic resources are both enforceable and drafted with terms sufficient to cover insider threats and to prohibit individuals with access to the resource from using it in a damaging manner.

On April 22, 2021, the Supreme Court issued a unanimous decision finding that the FTC lacks authority to pursue equitable monetary relief in federal court under Section 13(b) of the Federal Trade Commission Act (the "FTCA").  The result means that defendant Scott Tucker does not have to pay $1.27 billion in restitution and disgorgement, notwithstanding that his payday loan business was found to constitute an unfair and deceptive practice.  The result also means that onlookers everywhere are scratching their heads thinking "what now?"  If the FTC is stripped of its authority to pursue equitable monetary relief, will unfair and deceptive practices run rampant, with wrongdoers knowing that the "worst case" scenario is an injunction (while they keep any ill-gotten profits earned in the interim)?

Not entirely and probably not for long.

First, the FTC was never the only enforcement mechanism for policing unfair competition and deceptive practices violations.  Thus, there remain other agencies and statutes available to pursue wrongdoers, and many of these allow for the pursuit of equitable monetary relief.  For example, there are other federal agencies that have jurisdiction in certain situations (e.g., the Consumer Financial Protection Bureau).  Also, state attorneys general and state UDAP laws often have broad jurisdiction and authority to pursue monetary relief.

Second, Sections 5 and 19 of the FTCA give district courts the authority to impose monetary penalties and award monetary relief where the FTC has issued cease and desist orders.  So, the FTC is not currently stripped of all authority to pursue equitable monetary relief in federal court – it just needs to issue a cease and desist order first.

Third, the FTC has been pressuring and continues to pressure Congress to amend Section 13(b) of the FTCA to broaden the scope of relief available.

Fourth, the FTC could promulgate more rules and strengthen its existing rules under its rulemaking process (Section 18 of the FTCA).  Last month, acting Chairwoman Slaughter announced the formation of a new, centralized rulemaking group in the General Counsel’s office.

In sum, the Supreme Court’s decision will undoubtedly have some effects on the policing of unfair and deceptive trade practices, but there are numerous processes in place to ensure that the system is not derailed, and it is likely that the FTC will have authority to pursue monetary relief in the future.