The Federal Trade Commission will have its eye on privacy and data security enforcement in 2023.

In August, the agency announced that it is exploring ways to crack down on lax data security practices. In the announcement, the FTC explained that it was “concerned that many companies do not sufficiently or consistently invest in securing the data they collect from hackers and data thieves.”

These concerns are reflected in some of the FTC’s recent privacy enforcement actions. This article explores two significant FTC privacy actions of 2022 and provides three key tips to avoid similar proceedings in 2023.

Recent FTC Enforcement Actions

Earlier in 2022, Chegg Inc., an educational technology company, faced enforcement action from the FTC. Chegg is an online platform that provides customers with education-related services like homework help, tutoring and textbook rentals.

According to the FTC, Chegg failed to uphold its security promises to “take commercially reasonable security measures to protect the Personal Information submitted to [Chegg].”

In its complaint, the FTC explained that Chegg’s lax cybersecurity practices led to four data breaches resulting in the exposure of employees’ financial and medical information and the personal information of 40 million customers.[1]

The FTC’s complaint points out that three of the four data breaches suffered by Chegg involved phishing attacks targeted at Chegg’s employees. The other data breach occurred when a former Chegg contractor shared login information for one of Chegg’s cloud databases containing the personal information of customers.

Chegg uses a third-party cloud service provided by Amazon Web Services Inc., the cloud computing division of, Inc., to store customer and employee data. The information stored by Chegg on AWS includes information related to its customers' religion, ethnicity, dates of birth and income.

According to the complaint, Chegg allowed employees and third-party contractors to access these databases using credentials that provided full access to the information and administrative privileges.

Moreover, the personal data Chegg stored was kept in plain text rather than encrypted. The FTC’s complaint also explains that Chegg encrypted passwords using outdated cryptographic hash functions with known vulnerabilities.
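The complaint does not specify what Chegg should have used instead, but modern practice favors slow, salted key-derivation functions over fast, outdated digests. As an illustrative sketch only (the function names and parameters here are our own, not drawn from the FTC's order), Python's standard library supports scrypt:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash (scrypt) instead of a fast, outdated digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

A unique salt per user and a deliberately expensive derivation make bulk cracking of a leaked password table far more costly than cracking unsalted digests from a legacy hash function.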

With Chegg’s recent data breaches in mind, the FTC’s complaint highlighted these inadequacies in Chegg’s data security practices:

  • Chegg failed to consistently implement basic security measures such as encryption and multifactor authentication;
  • Chegg failed to monitor company systems for security threats;
  • Chegg stored information insecurely; and
  • Chegg did not develop adequate security policies and training.

The FTC’s order requires that Chegg document and limit its data collection practices. The FTC also requires Chegg to allow customers access to the data collected on them and to abide by its customers’ requests to delete such data.

The order further requires Chegg to implement multifactor authentication, or another suitable authentication method, to protect customer and employee accounts.

In another 2022 enforcement action, the FTC targeted Drizly LLC, an online platform allowing customers to place orders for beer, wine and liquor delivery.

Notably, the FTC also acted against Drizly’s CEO, James Cory Rellas, in his personal capacity. Similar to Chegg, Drizly hosts its software on AWS.

As a result, Drizly’s customer data, including passwords, email addresses, postal addresses, phone numbers, device identifiers and geolocation information, were all stored on AWS.

The 2022 complaint alleges that in 2018 Drizly and Rellas learned of problems with the company’s data security procedures after a security incident in which a Drizly employee posted the company’s AWS login information on GitHub Inc.’s platform.[2]

In its complaint, the FTC states that the 2018 incident put Drizly “on notice of the potential dangers of exposing AWS credentials” and that the company “should have taken appropriate steps to improve GitHub security.”

But Drizly failed to address the issues with its security procedures. As a result, in 2020, a hacker gained access to Drizly’s GitHub login credentials, hacked into the company’s database and acquired customer information.

The FTC’s complaint also alleged that Rellas contributed to these failures by not hiring a senior executive responsible for the security of consumers’ personal information collected and maintained by Drizly.

The FTC’s complaint attributed the following data security failures to Drizly:

  • Drizly failed to develop and implement adequate written security standards and train employees in data security policies;
  • Drizly failed to store AWS and database login credentials securely and further failed to require employees to use complex passwords;
  • Drizly did not periodically test its existing security features; and
  • Drizly failed to monitor its network for attempts to transfer consumer data outside the network.

The FTC’s October order requires Drizly to destroy all unnecessary data, limit the future collection and retention of data, and implement a data security program. Drizly must also replace its authentication methods that currently use security questions with multifactor authentication.

Additionally, Drizly is now required to encrypt Social Security numbers on the company’s network. The order will follow Rellas to any future companies, demanding that he personally abide by these data security requirements in future endeavors.

Enforcement actions brought by the FTC this year offer guidance to companies hoping to avoid similar proceedings.

In fact, FTC Chair Lina M. Khan’s statement on the Drizly decision stated “[t]oday’s action will not only correct Drizly’s lax data security practices but should also put other market participants on notice.”

Thus, the following steps are suggested to safeguard a company from FTC enforcement action.

Educate Employees on Cybersecurity Measures

Companies should emphasize data security education for their employees and contractors. It is suggested that companies introduce new employees to their data security practices during the onboarding process and follow up with regularly scheduled training for existing employees.

One crucial area to educate employees on is how to safeguard company credentials.

Companies should implement policies and procedures to prevent the storage of unsecured access keys on any cloud-based services. Companies should also have a policy and guidelines requiring the use of strong passwords and multifactor authentication to secure corporate accounts and information.

Companies should implement basic security measures for employees’ and contractors’ access to sensitive user information. For example, companies should regularly monitor who accesses company repositories containing sensitive consumer information.

Companies might also consider only allowing authenticated and encrypted inbound connections from approved Internet Protocol addresses to access sensitive consumer data.
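In practice, an allowlist of this kind is enforced at the firewall or cloud-network level, but the underlying check is simple. The sketch below is purely illustrative, using the documentation-reserved example ranges from RFC 5737 rather than any real corporate network:

```python
import ipaddress

# Hypothetical approved ranges; a real deployment would enforce this at the
# firewall or VPC level, not in application code alone.
APPROVED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example office range (RFC 5737)
    ipaddress.ip_network("198.51.100.0/24"),  # example VPN egress range
]

def is_approved(source_ip: str) -> bool:
    """Allow a connection only if its source address is in an approved range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in APPROVED_NETWORKS)

assert is_approved("203.0.113.42")
assert not is_approved("192.0.2.1")
```

Combining such source restrictions with authentication and encryption means a leaked credential alone is not enough to reach the data from an arbitrary network.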

Performing regular audits can help companies ensure each employee only has access to what is needed to perform that employee’s job functions.

In addition, companies should use audits to identify and terminate unneeded or abandoned employee accounts, such as accounts that are left open after an employee leaves a company or when an employee transfers to a different division or role.
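Such an audit can often be partly automated. The following sketch is hypothetical (the account records, field names and 90-day threshold are our own assumptions); a real audit would pull this data from the identity provider or HR system:

```python
from datetime import datetime, timedelta

# Hypothetical account records for illustration only.
accounts = [
    {"user": "alice", "last_login": datetime(2022, 12, 1), "employed": True},
    {"user": "bob",   "last_login": datetime(2022, 3, 15), "employed": False},
    {"user": "carol", "last_login": datetime(2022, 1, 2),  "employed": True},
]

def flag_for_review(accounts, now, max_idle_days=90):
    """Flag accounts of departed employees or accounts idle past the threshold."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts
            if not a["employed"] or a["last_login"] < cutoff]

flagged = flag_for_review(accounts, now=datetime(2022, 12, 15))
# bob (departed) and carol (idle for more than 90 days) are flagged
```

Running a report like this on a schedule turns the audit recommendation above into a routine task rather than an occasional manual review.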

Follow Through on Privacy and Data Security Promises

The FTC tends to pursue companies that fall short of the data security promises they make to consumers.

When a company promises consumers that it will adhere to reasonable data security practices, it is the company’s responsibility to implement basic security measures and checks to fulfill this promise. Those security measures might include encryption, multifactor authentication and complex passwords.

It is also imperative that companies regularly review and update their data security practices. The FTC’s recent orders show that adhering to outdated data security measures amounts to having lax data security practices.

Individuals in charge of the company’s data security practices should stay abreast of developments in the field.

Respond to Data Security Incidents Quickly and Transparently

The FTC displays little leniency for companies and executives already on notice of data security issues within their company.

It is imperative that companies act promptly when data security events are discovered, and that companies be transparent with customers when a data security event occurs — regarding the occurrence of the event, measures the company took to prevent the event and measures the company is taking to rectify the event.

Companies should be vigilant in their efforts to discover data security events. Procedures and policies should be implemented to stay on top of data security events within the company’s networks and systems.

For example, adopting file integrity monitoring tools and tools for monitoring anomalous activity can assist with detecting these events.
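Dedicated monitoring products do this at scale, but the core idea of file integrity monitoring is just comparing current file hashes against a recorded baseline. A minimal sketch (file names here are hypothetical):

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 baseline for each monitored file."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_changes(baseline):
    """Compare current hashes against the baseline; return files that changed."""
    return [p for p, digest in baseline.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]

# Example: watch a config file for unexpected modification.
Path("app.conf").write_text("debug = false\n")
baseline = snapshot(["app.conf"])
Path("app.conf").write_text("debug = true\n")   # simulate tampering
assert detect_changes(baseline) == ["app.conf"]
```

An unexplained change to a monitored file is exactly the kind of anomalous event that should trigger the incident-response procedures described above.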

Once implemented, these safeguards must be tested at least once a year for vulnerabilities, as suggested in the FTC’s orders against Drizly and Chegg.


The FTC’s prior enforcement actions serve as a cautionary tale for companies seeking to avoid similar enforcement actions from the agency.

Engaging in efforts to educate employees on data security practices, following through on data security promises, and responding to data security incidents properly can help companies reduce the likelihood of being subject to these proceedings.



This article originally appeared on Law360.

The average cost of a data breach is on the rise.

According to the 2022 ForgeRock Consumer Identity Breach Report, the average cost of recovering from a data breach in the U.S. in 2021 was $9.5 million — an increase of 16% from the previous year.

Lawsuits and regulatory fines are a significant factor contributing to the growing cost. This year, several notable class action settlements have been announced, including T-Mobile for over $350 million, the U.S. Office of Personnel Management for $63 million and Ambry Genetics Corp. for $12.25 million.

This article looks at the alleged security failures in recent data breach litigations and proposes steps companies may consider to help reduce the legal risk of a data breach.

Recent Examples

In 2021, T-Mobile suffered a data breach that compromised personally identifiable information, or PII, for more than 54 million current, former or prospective customers.

According to the complaint, John Erin Binns accessed the data through a misconfigured gateway GPRS support node. Binns was then able to gain access to the production servers, which included a cache of stored credentials that allowed him to access more than 100 servers.

Binns was able to use the stolen credentials to break into T-Mobile’s internal network. According to the complaint, T-Mobile failed to fully comply with industry-standard cybersecurity practices, including proper firewall configuration, network segmentation, secure credential storage, rate limiting, user-activity monitoring, data-loss prevention, and intrusion detection and prevention.

After learning about the breach, T-Mobile publicly announced it and sent notices via brief text messages. Allegedly, T-Mobile’s text messages explicitly told some customers that their Social Security numbers had not been compromised.

By contrast, T-Mobile’s messages failed to inform customers whose Social Security numbers had been compromised of that fact.

As part of settlement of the class action, T-Mobile agreed to pay $350 million to customers and to boost its data security spending by $150 million over the next two years. T-Mobile also reached a $2.5 million multistate settlement with 40 attorneys general.

From 2013 through 2014, a cyberattack on the Office of Personnel Management resulted in data breaches affecting more than 21 million people, reportedly among the largest thefts of personal data from the U.S. government in history.

The Office of Personnel Management allegedly failed to comply with various Federal Information Security Modernization Act requirements, to adequately patch and update software systems, to establish a centralized management structure for information security, to encrypt data at rest and in transit, and to investigate outbound network traffic that did not conform to the domain name system protocol.

The Office of Personnel Management agreed to pay a $63 million settlement with current, former and prospective government workers affected by the breach.

In January 2020, the systems of Ambry Genetics, a state-of-the-art genetic testing laboratory, were hacked, which exposed PII and protected health information of its patients.

According to the complaint, Ambry Genetics failed to take standard and reasonably available steps to prevent the data breach, including failing to encrypt information and properly train employees, failing to monitor and timely detect the data breach, and failing to provide patients with prompt and accurate notice of the data breach.

Ambry Genetics agreed to settle the class action litigation for $12.25 million plus three years of free credit monitoring and identity theft insurance services to the proposed class.

Settlement participants can also submit a claim for up to $10,000 in reimbursement for out-of-pocket costs traceable to the data security breach and submit a claim for up to 10 hours of documented time dealing with breach issues at $30 per hour.

These key data breach litigations highlight the risks of insufficient security measures and insufficient notice to affected customers in the event of a breach. To help reduce the legal risk, we suggest the following.

Limit the scope of data collection and retention to only what is necessary.

Companies should analyze business practices to determine what PII is collected, the purpose of the collected PII and how long that PII needs to be retained.

The risk and liability of a data breach can be limited by restricting collected PII to only what is necessary and discarding that data once it is no longer necessary. Document the collected data to ensure it is periodically reevaluated and discarded at the appropriate time.
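Documented retention periods can then be enforced mechanically. The sketch below is a hypothetical illustration (the record categories, field names and retention periods are assumptions, not drawn from any cited case):

```python
from datetime import datetime, timedelta

# Hypothetical retention policy: each record documents its category and
# collection date so it can be reevaluated and discarded on schedule.
RETENTION = {
    "order_history": timedelta(days=365),
    "support_ticket": timedelta(days=90),
}

records = [
    {"id": 1, "kind": "order_history",  "collected": datetime(2022, 1, 10)},
    {"id": 2, "kind": "support_ticket", "collected": datetime(2022, 11, 1)},
    {"id": 3, "kind": "support_ticket", "collected": datetime(2022, 6, 1)},
]

def purge_expired(records, now):
    """Keep only records still within their documented retention period."""
    return [r for r in records if now - r["collected"] <= RETENTION[r["kind"]]]

kept = purge_expired(records, now=datetime(2022, 12, 1))
# record 3 (a support ticket older than 90 days) is discarded
```

Data that has been discarded on schedule cannot be exposed in a later breach, which directly limits both the scope of any incident and the resulting liability.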

Implement reasonable industry-standard security measures.

Reasonable, basic security measures generally stem from industry standards and practices, regulations and guidance, and federal and state laws.

As some examples, recent data breach litigations highlight the following as reasonable, expected security measures:

  • Encrypting sensitive data;
  • Implementing multifactor authentication;
  • Patching and updating software systems;
  • Securing cached information and login credentials;
  • Monitoring the network for threats; and
  • Responding to security incidents.

Implement a comprehensive security program and team with oversight and input from company leadership.

Companies should build a security team that is responsible for setting security policies and procedures, documenting and managing the collected data, assessing the risk of a data breach, applying security controls, training employees on data security awareness and policies, monitoring for potential data breaches, and auditing the effectiveness of the security program.

The security team should have support from leadership and typically includes an interdisciplinary group of stakeholders across the business: the information technology department, which is well-versed in computer technology and data security; the legal department, to monitor and ensure compliance with data protection laws and mitigate legal risk; and a lead privacy or data protection authority — e.g., a chief data protection officer.

The team should develop and be prepared to follow a strategy for addressing a suspected data breach or security incident, including fixing the vulnerabilities that may have caused the breach, preventing additional data loss, remediating any vulnerabilities the breach itself may have introduced and notifying appropriate parties.

The team should ensure the response strategy is up-to-date with state and federal laws.

Be accurate in public disclosures and notices.

All 50 states, the District of Columbia, Puerto Rico and the Virgin Islands have enacted legislation requiring notification of security breaches involving PII.

The notice generally should include how the breach happened, what information was taken, how the thieves have used the information, if known, what actions the business has taken to remedy the situation, what actions the business is taking to protect individuals — e.g., offering free credit monitoring services — and how to reach the relevant contacts in the business.

Failing to accurately report the breach — for example, failing to accurately identify what data was compromised — to customers could result in liability for the company as well as personal liability for senior employees and executives responsible for responding to the data breach.


Taking these preventative measures to secure PII, maintain compliance with data protection guidelines and laws, and develop a plan to address and respond to a suspected breach can help businesses to reduce the likelihood of potential civil liability.

This article originally appeared on Law360.

Google agrees to pay a historic $391.5 million to settle with attorneys general from 40 U.S. states for misleading users about its location tracking and collection practices. The settlement is the largest ever attorneys general-led consumer privacy settlement.

The attorneys general opened the Google investigation following a 2018 Associated Press article that revealed Google “records your movements even when you explicitly tell it not to.” According to the article, Android users were misled into thinking that location tracking is turned off when the “Location History” setting is “paused” or disabled. Google, however, could continue to track a user’s location through other Google apps and settings. For example, another account setting—turned on by default and ambiguously named “Web & App Activity”—enabled the company to collect, store, and use the customers’ personally identifiable location data.

In addition to the fine, Google agreed to improve transparency regarding its location data tracking and collection practices. Specifically, Google must:

  1. Show additional information to users whenever they turn a location-related account setting “on” or “off”;
  2. Make key information about location tracking unavoidable for users (i.e., not hidden); and
  3. Give users detailed information about the types of location data Google collects and how it’s used at an enhanced “Location Technologies” webpage.

The Google settlement highlights the importance for companies to (1) be transparent in their data collection practices and (2) accurately convey data collection practices in a user-accessible manner.

Yesterday, October 12, 2022, was the first time that a case under the Illinois Biometric Information Privacy Act (BIPA) went to trial – and the result was a big win for the Plaintiffs, more than 44,000 truck drivers whose fingerprints were scanned for identity verification purposes without any informed permission or notice. BIPA is an Illinois state law that requires informed, written consent before personal biometric information is captured, used, and stored. The law also provides a private right of action — allowing individuals whose biometric information is captured, used, or stored without informed, written consent to bring suit. In 2019, in Rosenbach v. Six Flags, the Illinois Supreme Court held that failure to comply with the statute alone constitutes harm sufficient to confer standing. This allows for suits when the statute is violated, regardless of whether the biometric information that is captured is misused in any way or the person whose information is captured experiences any real-world harm.  Since the Rosenbach ruling, many BIPA cases have been brought – but until now they had always resulted in settlement.

The case, Richard Rogers v. BNSF Railway Company (Case No. 19-C-3083, N.D. Ill.), is noteworthy not only because it was the first trial in a BIPA case, but also because of (1) how damages were proved and (2) the fact that the defendant was not itself the party capturing the data; rather, it had contracted out identity verification to a third party.

First, regarding damages, the jurors were asked only to indicate how many times defendant recklessly or intentionally violated the law. They answered consistent with the defense expert’s estimated number of drivers who had their fingerprints registered: 45,600 times. The Court then entered a judgment of $228 million, based on the jury’s finding of 45,600 violations, and the language of the statute which provides for up to $5,000 for every willful or reckless violation and $1,000 for every negligent violation.

Second, regarding the defendant, BNSF Railway was not the party that actually collected anyone’s fingerprints. Rather, BNSF hired a third-party company – Remprex LLC – to process drivers at the gates of the railroad’s Illinois rail yards.  BNSF argued that it did not control the “method and manner” of Remprex’s work. Counsel for the Plaintiff argued that ignorance is not a defense to the law, and that if BNSF did not know about BIPA then it was acting recklessly (the railroad company has been around for 150 years in a highly regulated industry, and its subcontractor, Remprex, was just a two-person start-up when first hired). Counsel for Plaintiff also argued that a party cannot “contract out” its obligation to follow the law. Compellingly, Plaintiff’s counsel also pointed to the fact that BNSF continued its biometric data processing activities even after suit was first filed in 2019.

Today, October 7, 2022, President Joe Biden signed an executive order implementing a new privacy framework for data being shared between Europe and the United States. The new framework is called the “Trans-Atlantic Data Privacy Framework,” and it will (hopefully) serve to replace the prior framework, known as “Privacy Shield”, which was struck down by the European Court of Justice in July 2020 (in a case called Schrems II) on grounds that it did not adequately protect EU citizens from U.S. government surveillance. We wrote about the Schrems II decision here, including how it not only struck down the “Privacy Shield” framework, but also potentially called into question all EU-U.S. data transfers.

The new framework was the result of over a year of detailed negotiations between the U.S. and EU, and it is believed to address the concerns raised by the Court of Justice of the European Union (CJEU) in the Schrems II decision. If the European Commission agrees and issues an adequacy decision, the framework will serve to re-enable the flow of data between the EU and U.S., a $7.1 trillion economic relationship. So, how did the U.S. address the CJEU’s concerns?  The key principles are:

  1. a new set of rules and binding safeguards to limit access to data by U.S. intelligence authorities to what is necessary and proportionate to protect national security, and U.S. intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards;
  2. a two-tier redress system is created to investigate and resolve complaints filed by EU citizens if they are concerned their personal information has been improperly collected by the U.S. intelligence community, including the establishment of a new data privacy court (a data protection review court) inside the Justice Department to investigate valid complaints; and
  3. the creation of strong obligations for companies processing data transferred from the EU, including a continued requirement to self-certify their adherence to the Principles through the U.S. Department of Commerce.

The next step is for the European Commission to assess the framework and (hopefully) issue an adequacy decision. This process could take many months.  Unless and until an adequacy decision is issued, businesses will have to continue to rely on other means for transferring EU personal data to the U.S., such as binding corporate rules or standard contractual clauses.

California Attorney General Rob Bonta announced yesterday a settlement reached with beauty product retailer, Sephora, Inc. (Sephora), resolving allegations that Sephora violated various provisions of the California Consumer Privacy Act (CCPA).  Specifically, it was alleged that Sephora failed to:

  • Disclose to consumers that it was selling their personal information
  • Process user requests to opt out of sale of personal information in accordance with the CCPA
  • Cure these violations within the 30-day period currently allowed by the CCPA.

Attorney General Bonta issued a press release saying: “I hope today’s settlement sends a strong message to businesses that are still failing to comply with California’s consumer privacy law.  My office is watching, and we will hold you accountable.”

In Sephora’s case, Sephora was allowing third-party companies to install tracking software on its website and in its app so that those third parties could monitor customers as they shopped.  The third parties were tracking, inter alia, what kind of computer the customer was using, what products/brands the user put in her shopping cart, and the user’s location.  Sephora was using the information obtained from these third-party trackers to more effectively target potential customers.  Sephora’s arrangement with these third-party companies constituted a “sale” under the CCPA, which required Sephora to allow customers to opt out of such information-sharing.

Under the settlement agreement, Sephora agreed to:

  • Pay $1.2 million
  • Expressly disclose that it sells data
  • Provide opt-outs for the sale of personal information, including via the Global Privacy Control
  • Conform its service provider agreements to the CCPA’s requirements; and
  • Report to the AG on its sales of personal information, the status of its service provider relationships, and its efforts to honor Global Privacy Control

For more information on the Sephora settlement agreement, and on the Attorney General’s ongoing enforcement actions with respect to failures to process opt-out requests, please see the AG’s Press Release.

In July 2020, the Schrems II decision issued and the European Commission’s adequacy decision for the EU-US Privacy Shield Framework was invalidated.  Further, and broader than the invalidation of the Privacy Shield adequacy decision, the Schrems II judgment found that US surveillance measures interfered with what are considered “fundamental rights” under EU law, i.e., the rights to respect for private and family life, including communications, and the protection of personal data.

Following Schrems II, companies reevaluated their policies and practices surrounding the transfer of personal information out of the EU, and the safeguards (under the GDPR) that they rely on for those cross-border data transfers.  While there has been some guidance, there has been no replacement for the EU-US Privacy Shield, and US surveillance practices remain a problem under the GDPR.   Since then, more decisions have issued, making it even harder for companies that thought  they had a solution.  For example, we reported a few weeks ago that while Google was transferring Google Analytics data to US servers for processing – purportedly under the belief that the data was not personal information, and thus did not fall under the GDPR – an Austrian data regulator recently found Google’s practice to violate the GDPR.  According to the Austrian data regulator, because Google uses IP addresses and cookie data identifiers to track information about web site visitors, that data is personal information.

This put a lot of the big US tech companies in a tough situation.  While some thought a solution to this was to move processing of personal information about European subjects to the EU, Meta has recently taken a different stance—stating in its annual report last Thursday, February 3, that it is considering shutting down Facebook and Instagram in Europe if it can’t keep transferring data back to the U.S.  The annual report states, on page 9:

In August 2020, we received a preliminary draft decision from the Irish Data Protection Commission (IDPC) that preliminarily concluded that Meta Platforms Ireland’s reliance on SCCs in respect of European user data does not achieve compliance with the General Data Protection Regulation (GDPR) and preliminarily proposed that such transfers of user data from the European Union to the United States should therefore be suspended.  We believe a final decision in this inquiry may issue as early as the first half of 2022.  If a new transatlantic data transfer framework is not adopted and we are unable to continue to rely on SCC’s or rely upon other alternative means of data transfers from Europe to the United States, we will likely be unable to offer a number of our most significant products and services, including Facebook and Instagram, in Europe, which would materially and adversely affect our business, financial condition, and results of operations.

Some news outlets have reported this as a “threat.”  In fact, a European lawmaker, Axel Voss, went so far as to call it “blackmail”: “#META cannot just blackmail the EU into giving up its data protection standards, leaving the EU would be their loss.”  That said, the statement does not read like a threat in the annual report.  It comes across as a matter-of-fact statement, i.e., if Meta cannot figure out a way to comply with the GDPR, it is going to have to stop transferring the restricted data from Europe to the United States.  It would seem that a lot of US companies could make similar statements in their annual reports – namely, that they may have to stop transferring personal information from the EU to the US, and there are only two ways to do this: (1) keep processing the data, but process it outside the US (in the EU, or a country without data surveillance issues like those in the US), or (2) stop processing the data/serving the EU market.  Obviously the former takes a lot more time, effort, money and planning, even if it is a long-term solution for some entities.  It will be interesting to see how this plays out.

On Friday, January 28, 2022, the California Office of Attorney General issued a press release announcing that California DOJ sent notices alleging non-compliance with the California Consumer Privacy Act (CCPA) to a number of businesses operating loyalty programs in California.  The press release stated, inter alia:

“Under the CCPA, businesses that offer financial incentives, such as discounts, free items, and other rewards, in exchange for personal information must provide consumers with a notice of financial incentive.  This notice must clearly describe the material terms of the financial incentive program to the consumer before they opt in to the program.  Letters were sent today to major corporations in the retail, home improvement, travel, and food service industries, who have 30 days to cure and come into compliance with the law.”

The press release also quoted the California AG, Rob Bonta:

“In the digital age, it’s easy to forget that our data isn’t only collected when we go online.  It’s collected when we enter our phone number for a discount at the supermarket; when we use rewards for a free cup of coffee at our local coffee shop; and when we earn points to purchase items at our favorite clothing store… We may not always realize it, but these brick and mortar stores are collecting our data – and they’re finding out new ways to profit from it.”

Under the CCPA regulations, a “financial incentive” is defined broadly to mean “a program, benefit, or other offering, including payment to consumers, related to the collection, deletion, or sale of personal information.”  Cal. Code Regs. tit. 11, Section 999.301(j).

Prior to these notices and a July 2021 press release regarding a similar notice, arguments had been made that loyalty programs were not offering financial incentives for the collection of personal information, and thus were not covered by Section 1798.125 of the CCPA.  This argument seemingly hinged largely on the name itself — “loyalty program” — which implies that the financial incentives are in recognition of repeat purchasing behavior.  Others have argued that just because loyalty programs are designed to reward loyal customers does not mean that they do not also provide important personal information to businesses (e.g., purchasing habits – who likes to shop where, when, and buy what).

CCPA requires that companies that offer financial incentives in exchange for personal information meet certain criteria, including: (1) notifying the customer of the financial incentive (CCPA 1798.125(b)(2) and 1798.135); (2) obtaining the customer’s “opt in consent” to the “material terms” of the financial incentive program (prior to opting in) (CCPA 1798.125(b)(3)); and (3) permitting the customer to revoke their consent at any time (id.).

The CCPA regulations provide more guidance.  A Notice of Financial Incentive must include the following:

  1. A succinct summary of the financial incentive or price or service difference offered;
  2. A description of the material terms of the financial incentive or price difference, including the categories of personal information that are implicated by the financial incentive or price or service difference and the value of the consumer’s data;
  3. How the consumer can opt-in to the financial incentive or price or service difference;
  4. A statement of the consumer’s right to withdraw from the financial incentive at any time and how the consumer may exercise that right; and
  5. An explanation of how the financial incentive or price or service difference is reasonably related to the value of the consumer’s data, including (a) a good-faith estimate of the value of the consumer’s data that forms the basis for offering the financial incentive or price or service difference; and (b) a description of the method the business used to calculate the value of the consumer’s data.

Cal. Code Regs. tit. 11, Section 999.307 (emphasis added).

According to the quote from AG Bonta in Friday’s press release, it appears that at least some of the non-compliance notices may have targeted companies’ brick-and-mortar activities – e.g., entering phone numbers at check-out.  It is also noteworthy that a grocery store loyalty program was expressly mentioned in the earlier July 2021 press release: “A grocery chain required consumers to provide personal information in exchange for participation in its company loyalty programs.  The company did not provide a Notice of Financial Incentive to participating customers.  After being notified of alleged noncompliance, the company amended its privacy policy to include a Notice of Financial Incentive.”

Under CCPA, businesses that receive notices of non-compliance have 30 days to cure or fix the alleged violation before an enforcement action can be initiated.

On August 13, 2018, the Associated Press published a story: “Google tracks your movements, like it or not.” According to the article, computer-science researchers at Princeton confirmed findings that “many Google services on Android devices and iPhones store your location data even if you’re using a privacy setting that says it will prevent Google from doing so.”  The article featured a map showing the locations that Google had tracked a researcher as having traveled to over several days, even though the researcher had his “Location History” turned off the whole time.  Google apparently explained this away on grounds that turning “Location History” off only prevented Google from adding movements to the “timeline” (its visualization of a user’s daily travels), but it did not stop Google from collecting location data.  To stop the collection of location data, another setting – called “Web and App Activity” – had to be turned off.  Notwithstanding this, the AP reported that Google’s support page stated at the time: “You can turn off Location History at any time.  With Location History off, the places you go are no longer stored.”

Fast forward three years later, and the Attorney General from Arizona, and most recently (this past Monday) four Attorneys General from D.C., Indiana, Texas, and Washington, sued Google for deceiving customers to gain access to their location data.  Google is alleged to have used “dark patterns” – which are “tricks” embedded into website and application user interfaces, used to influence users’ decisions or make users do things or allow things that they didn’t mean to do or allow.  Here, Google is alleged to have used “dark patterns” to gain access to location-tracking data, even after users thought they had disallowed Google from accessing that information.  Washington, D.C. Attorney General, Karl Racine, said in a statement: “Google falsely led consumers to believe that changing their account and device settings would allow customers to protect their privacy and control what personal data the company could access.  … The truth is that contrary to Google’s representations it continues to systematically surveil customers and profit from customer data.  Google’s bold misrepresentations are a clear violation of consumers’ privacy.”

Tuesday, Google issued a blog post responding to the recent complaints.

It has been nearly a year and a half since the Schrems II decision issued in July 2020, which invalidated the European Commission’s adequacy decision for the EU-US Privacy Shield Framework.  As a result, companies were forced to reexamine their transfers of personal information out of the EU, and the safeguards that they rely on for those cross-border data transfers.  Some companies, instead of addressing the safeguards they had in place, took a hard look at the data they were transferring.  Did they need to transfer it out of the EU?  Was it even personal information?  This latter issue was recently addressed by an Austrian data regulator, one of 27 GDPR enforcers.  While Google argued that the data was not personal information, the data regulator disagreed.  It is yet to be seen if other data regulators will issue similar decisions, and if so, what the fate will be of US technology companies in Europe.

In a recent decision by Austria’s data regulator, it was held that a website’s use of Google Analytics violates the GDPR because it uses IP address and cookie data identifiers to track information about website visitors, such as the pages read, how long they stay on the website, and information about users’ devices.  The Austrian decision held that IP addresses and cookie data identifiers are personal information.  Thus, when information tied to these identifiers is passed through Google’s servers in the United States, the GDPR is implicated.  Specifically, the GDPR provides that in the case of non-EU transfers of personal data, there must be appropriate safeguards in effect to protect the data.  The problem is—after Schrems II, (1) there is no longer an adequacy decision by the EU for US data transfers, and (2) it is unclear if other safeguard measures, such as standard contractual clauses (SCCs) or binding corporate rules (BCRs), are sufficient in view of US surveillance practices under Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order 12333.  In other words, there may be no appropriate safeguards that US technology companies can implement to allow for GDPR-compliant cross-border data transfers.

The recent Austrian decision provides that, “US intelligence services use certain online identifiers (such as IP address or unique identification numbers) as a starting point for the surveillance of individuals.”  Google had argued that it implemented measures to protect the data in the US, but these were found insufficient to meet the GDPR.  Indeed, the very “IDs” that Google pointed to as purportedly constituting pseudonymized safeguards were found to make users identifiable and addressable:

“…the use of IP addresses, cookie IDs, advertising IDs, unique user IDs or other identifiers to (re)identify users do not constitute appropriate safeguards to comply with data protection principles or to safeguard the rights of data subjects.  This is because, unlike in cases where data is pseudonymized in order to disguise or delete the identifying data so that the data subjects can no longer be addressed, IDs or identifiers are used to make the individuals distinguishable and addressable.  Consequently, there is no protective effect.  They are therefore not pseudonymizations within the meaning of Recital 28, which reduce the risks for the data subjects and assist data controllers and processors in complying with their data protection obligations.”
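The regulator’s distinction can be made concrete with a minimal, purely illustrative sketch (the function names and the salt are hypothetical, not drawn from Google Analytics or any real implementation): a stable hashed token still keeps each visitor “distinguishable and addressable” across visits, which is exactly the property the decision says defeats pseudonymization, whereas coarsening the identifier (here, truncating the last octet of an IP address) means individual devices can no longer be singled out.

```python
import hashlib

def stable_id(ip: str, salt: str = "fixed-salt") -> str:
    """Hash an IP into a token. Because the same input always yields the
    same token, repeat visits by one visitor remain linkable, so the
    visitor stays distinguishable and addressable despite the hashing."""
    return hashlib.sha256((salt + ip).encode()).hexdigest()[:16]

def truncated_ip(ip: str) -> str:
    """Coarsen an IPv4 address by dropping its last octet. Different hosts
    in the same /24 network now map to the same value, so no single
    device can be singled out from this field alone."""
    return ".".join(ip.split(".")[:3]) + ".0"

# A "hashed" ID is still a persistent identifier:
print(stable_id("203.0.113.7") == stable_id("203.0.113.7"))    # True: linkable across visits
# Truncation, by contrast, merges distinct visitors:
print(truncated_ip("203.0.113.7") == truncated_ip("203.0.113.42"))  # True: no longer distinguishable
```

The point of the sketch is only that hashing alone does not break the link between a person and their data trail; whether truncation or any other measure satisfies Recital 28 in a given context is, of course, a legal question the regulators will continue to work out.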

It remains to be seen whether other EU regulators will follow suit and hold that the GDPR has been violated where European websites use Google Analytics or similar US technology services.  It will also be interesting to see if European companies start transferring adtech and analytics services to national companies.