Many companies may be quick to dismiss Washington’s “My Health My Data” Act (MHMD) as a health data law that does not apply to them. But there are many reasons you should think twice before disregarding this law.

First, unlike the state privacy laws that have been passed so far, MHMD applies to all companies regardless of their revenues, how much personal data they process, or what percentage of their annual revenue is generated from processing or selling personal data. There is also no nonprofit carve-out in MHMD. So even if you process only a tiny bit of health data (and see below regarding the expansive definition of what constitutes “consumer health data”), MHMD may apply.

Second, in some respects it is beyond a business’s control whether it needs to comply with MHMD. While MHMD applies to Washington residents and any “regulated entity” that “conducts business in Washington or that produces or provides products or services that are targeted to consumers in Washington,” MHMD also applies to all “natural person[s] whose consumer health data is collected in Washington.” Broadening the scope even further, “collect” is expansively defined to mean more than just the gathering of data in Washington. If you buy, rent, access, retain, receive, acquire, infer, derive, “or otherwise process” consumer health data in the state, then you must comply. Some of these verbs are extremely broad, such as “accessing,” “acquiring” and (perhaps broadest yet) “inferring” or “deriving” health data in Washington. Moreover, the catch-all “or otherwise process” should be enough to make every company scratch its head as to whether compliance is necessary. In other words, do you do anything with data that can be linked to the state of Washington in any way that could arguably be considered “consumer health data”?

Third, “consumer health data” is defined very broadly as “personal information that is linked or reasonably linkable to a consumer that identifies the consumer’s past, present, or future physical or mental health status.” MHMD does not define “mental health” or “physical health”; however, one can be sure that this broad definition of “consumer health data” includes information that companies may not ordinarily think of as health data. For example, the Centers for Disease Control and Prevention website says that mental health “includes our emotional, psychological, and social well-being” and affects “how we think, feel, and act,” as well as “how we handle stress, relate to others, and make health choices.” Does this mean that MHMD applies to all data concerning one’s emotions, psychological well-being, social well-being, how one is thinking, how one is feeling, how one is acting, how one is dealing with stress, how one is relating to others, and health choices one is making? With respect to physical health, the National Institutes of Health website talks about the following: what you put into your body, how much activity you get, your weight, how much you sleep, whether you smoke, and your stress levels. Does data regarding all of the above really constitute “consumer health data”? If so, any company that comes into contact with data concerning food or drinks, activities of any kind, anything related to one’s size/weight (such as clothing), and/or sleep may be subject to MHMD.

Indeed, just as the term “PII” was called into question in recent years because to a certain extent all data may be personally identifiable, the same may be true for Washington’s expansively defined “consumer health data.” Rather than defining what is “consumer health data,” it may be easier to determine which categories are clearly outside of the definition. And because MHMD contains a private right of action, you can be sure that plaintiffs will assert very expansive definitions of “consumer health data” in litigations and threatened litigations.

Rothwell Figg remains committed to assisting its clients with all of their privacy, data protection, and IP needs, including litigations and threatened litigations; counselling on security breaches, data governance, AI, and IP; negotiating and drafting contracts; securing IP protection; and drafting privacy and AI policies.

In this corner, the U.S. Federal Trade Commission (FTC): 

“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

In that corner, Meta (formerly, Facebook):

Meta head of communications Andy Stone called the FTC’s announcement a “political stunt,” vowed to “vigorously fight” the action, and criticized the FTC for allowing Chinese-owned social media app TikTok to “operate without constraint on American soil.”

Yesterday, May 3, 2023, the FTC ordered Meta to show cause as to why the Commission should not modify its 2020 Order and enter a new proposed order based on Facebook’s record of alleged violations and its independent assessor’s findings that Facebook has not complied with the requirements of its privacy program. Specifically, the independent assessor found that Facebook “misled parents about their ability to control with whom their children communicated through its Messenger Kids app, and misrepresented the access it provided some app developers to private user data,” in breach of the FTC’s 2012 and 2020 Privacy Orders with Meta and in violation of the Children’s Online Privacy Protection Act (“COPPA”).

The changes to the 2020 Privacy Order that the FTC has proposed, if ordered, would undoubtedly be impactful in many respects and would apply to the full complement of products and services offered by Meta, including Facebook, Instagram, WhatsApp, Messenger, and Meta Quest.  The proposed changes include:

  • Blanket prohibition against monetizing data of children and teens under 18: Meta and all its related entities would be restricted in how they use the data they collect from children and teens. The company could only collect and use such data to provide the services or for security purposes, and would be prohibited from monetizing this data or otherwise using it for commercial gain even after those users turn 18.
  • Pause on the launch of new products, services: The company would be prohibited from releasing new or modified products, services, or features without written confirmation from the assessor that its privacy program is in full compliance with the order’s requirements and presents no material gaps or weaknesses.
  • Extension of compliance to merged companies: Meta would be required to ensure compliance with the FTC order for any companies it acquires or merges with, and to honor those companies’ prior privacy commitments.
  • Limits on future uses of facial recognition technology: Meta would be required to disclose and obtain users’ affirmative consent for any future uses of facial recognition technology. The change would expand the limits on the use of facial recognition technology included in the 2020 Order.
  • Strengthening existing requirements: Some privacy program provisions in the 2020 Order would be strengthened, such as those related to privacy review, third-party monitoring, data inventory and access controls, and employee training. Meta’s reporting obligations also would be expanded to include its own violations of its commitments.

The undertaking to even begin to comply with these proposed changes would appear to be massive, and it would impact Meta and its products and services not only operationally but financially as well. That is not to say the proposed modifications would be unwarranted if the alleged violations prove to be true. The takeaway here… the FTC means business!

This current order is the third time the FTC has taken action against Meta – so let’s call this “Round 3.”  Meta has 30 days to respond to the proposed findings from the FTC’s investigation.  Stay tuned!

On March 16, 2023, the French Data Protection Agency (the “CNIL”) imposed a fine of €25,000 on the company CITYSCOOT in connection with a finding that CITYSCOOT failed to comply with the obligation to ensure data minimization, as required by Article 5(1)(c) of the GDPR. The facts that led to the judgment included a finding that during the short-term rental of a scooter, CITYSCOOT would collect (and store) the vehicle’s geolocation data every 30 seconds. CITYSCOOT maintained that the information was being processed and stored for four reasons: (1) processing of traffic offenses; (2) processing of consumer complaints; (3) user support (to call for help if a user falls); and (4) management of claims and thefts. The CNIL found that none of these purposes justified the collection of geolocation data in such detail as that carried out by the company, and that CITYSCOOT’s practices were very intrusive on the private life of users.

“What exactly does data minimization require” could become a hot topic for U.S. privacy litigation in the coming years, particularly given that the majority of U.S. states that have adopted general privacy laws thus far require data minimization by statute. (The Iowa Consumer Data Protection Act (ICDPA) does not have a statutory data minimization requirement.) For example, under the California Privacy Rights Act (CPRA), any information collected must be “reasonably necessary and proportionate to either the purposes for which it was collected or another disclosed purpose similar to the context under which it was collected.” Similarly, the Virginia Consumer Data Protection Act (VCDPA) expressly provides a data minimization requirement, which it describes as an “[o]bligation to limit data collection to what is adequate, relevant, and reasonably necessary for the disclosed purposes.” The Colorado Privacy Act (CPA) provides that “[c]ontrollers must assess and document the minimum types and amount of Personal Data needed for the stated processing purposes.” The Connecticut Data Privacy Act (CTDPA) provides that controllers must “[l]imit collection of personal data to what is adequate, relevant, and reasonably necessary for the specific purpose(s) for which the data is processed (also known as ‘data minimization’).” The Utah Consumer Privacy Act (UCPA) mentions “purpose specification and data minimization” among the responsibilities of controllers.

In view of the enforcement of “data minimization” in Europe and the nearly universal adoption of “data minimization” obligations in the United States, it would behoove any business to regularly evaluate not just what types of data it is collecting, but also how much and how frequently it is collecting it. Also, as part of annual privacy mapping and updating of privacy policies, it is a good idea to ensure that the identified “purposes” for collecting data continue to be accurate and complete. 

Rothwell Figg remains committed to assisting its clients with all of their privacy needs, including not just updating policies and contracts, but also consulting with businesses on “best practices” for data management. And because our privacy team is highly technical and composed of experienced litigators, we are ready should complex questions, security breaches, or litigation arise.

A number of federal privacy laws provide private rights of action, allowing individuals (or classes) to bring claims alleging violations of those laws. Examples of these statutes include the Video Privacy Protection Act (VPPA), the Telephone Consumer Protection Act (TCPA), and the Fair Credit Reporting Act (FCRA). What is more, claims under some state privacy laws can be “removed” to federal court in certain circumstances, in which case Article III standing must be shown in those cases, too. The most frequent examples are claims under Illinois’ Biometric Information Privacy Act (BIPA).

However, what it takes to establish Article III standing in these cases is far from crystal clear, notwithstanding the fact that the Supreme Court has attempted to clarify standing in privacy cases twice in the past decade. And while the Supreme Court was given another opportunity to clarify Article III standing in privacy cases in Wakefield v. ViSalus, the Court unfortunately declined to take it this week.

First, in Spokeo, Inc. v. Robins, in 2016, the Supreme Court considered standing in a case involving a violation of the FCRA. While the Court acknowledged that Congress sought to curb the dissemination of false information by adopting the procedures of the FCRA, Robins could not satisfy the demands of Article III by alleging a bare procedural violation. The Court characterized the FCRA violation that Robins “suffered” (the dissemination of an incorrect zip code) as a “procedural violation,” and it vacated the Ninth Circuit’s decision on the grounds that, while the Ninth Circuit correctly considered whether Robins had suffered a particularized injury, it failed to separately analyze whether that injury was “concrete.”

Second, in TransUnion LLC v. Ramirez, in 2021, the Supreme Court considered standing in another case involving a violation of the FCRA. There, the majority found that only some of the class members had shown a physical, monetary, or intangible harm (such as reputational harm) sufficient for Article III standing. The remaining class members had not shown concrete harm: even though TransUnion had false information about them, that information was never sent to creditors, so no “concrete” harm resulted from it. Because not every class member had standing, those members could not recover; the Court made clear that every class member must demonstrate Article III standing in order to recover individual damages. The Court further noted that, although Congress’s views on what constitutes a concrete injury can be instructive, Article III is not satisfied merely because Congress has created a statutory cause of action.

Since Spokeo and TransUnion, it has remained unclear, and the subject of circuit splits, which violations of privacy statutes constitute “intangible harms” (sufficient to confer Article III standing) versus mere “procedural harms” (insufficient to confer Article III standing). For example, there is a circuit split regarding whether certain TCPA violations result in a “concrete” harm sufficient to confer Article III standing. In the case of text messages, the 11th Circuit has held that the receipt of a single unsolicited text message is not sufficient to confer standing (see Salcedo v. Hanna, 936 F.3d 1162 (11th Cir. 2019)), whereas the 5th Circuit has held that it is (see Cranor v. 5 Star Nutrition, No. 19-51173 (5th Cir. 2021)). The issue presented in Wakefield v. ViSalus, which the Supreme Court declined to hear this week, would have given the Court an opportunity to clarify when a statutory violation that results in neither physical nor monetary harm causes an “intangible harm” sufficient to confer Article III standing, as opposed to a mere “procedural harm.” The facts in ViSalus were that individuals had received robocalls marketing ViSalus’s weight-loss shake mix, allegedly without their consent. However, for at least some of the individuals, the lack of consent is arguably a procedural question: they had voluntarily provided their phone numbers and some consent to receive marketing and promotional communications; they simply had not provided the prior express written consent that the regulations, as amended in 2013, require.

What is the difference between the harms in TransUnion (where false information was collected but not yet disseminated, and the FCRA law is aimed at promoting accurate and fair information) and the harms in ViSalus (where the individuals had provided contact information and consented to marketing communications but the regulations required prior express written consent, and the TCPA law is aimed at protecting consumers from invasive telemarketing practices)?

The Federal Trade Commission will have its eye on privacy and data security enforcement in 2023.

In August, the agency announced that it was exploring ways to crack down on lax data security practices. In the announcement, the FTC explained that it was “concerned that many companies do not sufficiently or consistently invest in securing the data they collect from hackers and data thieves.”

These concerns are reflected in some of the FTC’s recent privacy enforcement actions. This article explores two significant FTC privacy actions of 2022 and provides three key tips to avoid similar proceedings in 2023.

Recent FTC Enforcement Actions

Earlier in 2022, Chegg Inc., an educational technology company, faced enforcement action from the FTC. Chegg is an online platform that provides customers with education-related services like homework help, tutoring and textbook rentals.

According to the FTC, Chegg failed to uphold its security promises to “take commercially reasonable security measures to protect the Personal Information submitted to [Chegg].”

In its complaint, the FTC explained that Chegg’s lax cybersecurity practices led to four data breaches resulting in the exposure of employees’ financial and medical information and the personal information of 40 million customers.[1]

The FTC’s complaint points out that three of the four data breaches suffered by Chegg involved phishing attacks targeted at Chegg’s employees. The other data breach occurred when a former Chegg contractor shared login information for one of Chegg’s cloud databases containing the personal information of customers.

Chegg uses a third-party cloud service provided by Amazon Web Services Inc., the cloud computing division of Amazon.com Inc., to store customer and employee data. The information stored by Chegg on AWS includes information related to its customers’ religion, ethnicity, date of birth and income.

According to the complaint, Chegg allowed employees and third-party contractors to access these databases using credentials that provided full access to the information and administrative privileges.

Moreover, the personal data stored by Chegg was kept in plain text, not encrypted. The FTC’s complaint also explains that Chegg hashed passwords using outdated cryptographic hash functions with known vulnerabilities.
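
For context on the password point: modern practice favors salted, memory-hard key derivation functions over fast, outdated hash functions. The following is a minimal, generic sketch using Python’s standard library scrypt; it illustrates the technique only and is not a description of Chegg’s systems.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-user salt using scrypt,
    a memory-hard key derivation function suited to password storage."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```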

With Chegg’s recent data breaches in mind, the FTC’s complaint highlighted these inadequacies in Chegg’s data security practices:

  • Chegg failed to consistently implement basic security measures such as encryption and multifactor authentication;
  • Chegg failed to monitor company systems for security threats;
  • Chegg stored information insecurely; and
  • Chegg did not develop adequate security policies and training.

The FTC’s order requires that Chegg document and limit its data collection practices. The FTC also requires Chegg to allow customers access to the data collected on them and to abide by its customers’ requests to delete such data.

The order further requires Chegg to implement multifactor authentication, or another suitable authentication method, to protect customer and employee accounts.

Also in 2022, the FTC brought an enforcement action against Drizly LLC, an online platform allowing customers to place orders for beer, wine and liquor delivery.

Notably, the FTC also acted against Drizly’s CEO, James Cory Rellas, in his personal capacity. Similar to Chegg, Drizly hosts its software on AWS.

As a result, Drizly’s customer data, including passwords, email addresses, postal addresses, phone numbers, device identifiers and geolocation information, were all stored on AWS.

The 2022 complaint alleges that in 2018 Drizly and Rellas learned of problems with the company’s data security procedures after a security incident in which a Drizly employee posted the company’s AWS login information on GitHub Inc.[2]

In its complaint, the FTC states that the 2018 incident put Drizly “on notice of the potential dangers of exposing AWS credentials” and that the company “should have taken appropriate steps to improve GitHub security.”

But Drizly failed to address the issues with its security procedures. As a result, in 2020, a hacker gained access to Drizly’s GitHub login credentials, hacked into the company’s database and acquired customer information.

The FTC’s complaint also alleged that Rellas contributed to these failures by not hiring a senior executive responsible for the security of consumers’ personal information collected and maintained by Drizly.

The FTC’s complaint attributed the following data security failures to Drizly:

  • Drizly failed to develop and implement adequate written security standards and train employees in data security policies;
  • Drizly failed to store AWS and database login credentials securely and further failed to require employees to use complex passwords;
  • Drizly did not periodically test its existing security features; and
  • Drizly failed to monitor its network for attempts to transfer consumer data outside the network.

The FTC’s October order requires Drizly to destroy all unnecessary data, limit the future collection and retention of data, and implement a data security program. Drizly must also replace its authentication methods that currently use security questions with multifactor authentication.

Additionally, Drizly is now required to encrypt Social Security numbers on the company’s network. The order will follow Rellas to any future companies, demanding that he personally abide by these data security requirements in future endeavors.

Enforcement actions brought by the FTC this year provide guidance to companies wishing to avoid similar proceedings.

In fact, FTC Chair Lina M. Khan’s statement on the Drizly decision noted that “[t]oday’s action will not only correct Drizly’s lax data security practices but should also put other market participants on notice.”

Thus, the following steps are suggested to safeguard a company from FTC enforcement action.

Educate Employees on Cybersecurity Measures

Companies should emphasize data security education for their employees and contractors. It is suggested that companies introduce new employees to their data security practices during the onboarding process and follow up with regularly scheduled training for existing employees.

One crucial area to educate employees on is how to safeguard company credentials.

Companies should implement policies and procedures to prevent the storage of unsecured access keys on any cloud-based services. Companies should also have a policy and guidelines requiring the use of strong passwords and multifactor authentication to secure corporate accounts and information.
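
One way to operationalize the point about unsecured access keys is to scan files for strings that match well-known key formats before they are committed or shared. The Python sketch below is a simplified illustration; the AKIA prefix pattern for AWS access key IDs is publicly documented, but real secret scanners cover many more formats and sources.

```python
import re
import sys
from pathlib import Path

# AWS access key IDs typically begin with "AKIA" followed by 16
# uppercase letters and digits. This single pattern is only one
# layer of defense; it will not catch other secret formats.
ACCESS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_keys(root: str) -> list[str]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if ACCESS_KEY_PATTERN.search(text):
            findings.append(str(path))
    return findings

if __name__ == "__main__":
    for hit in scan_for_keys(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"possible access key in: {hit}")
```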

Companies should implement basic security measures for employees’ and contractors’ access to sensitive user information. For example, companies should regularly monitor who accesses company repositories containing sensitive consumer information.

Companies might also consider only allowing authenticated and encrypted inbound connections from approved Internet Protocol addresses to access sensitive consumer data.

Performing regular audits can help companies ensure each employee has access only to what is needed to perform that employee’s job functions.

In addition, companies should use audits to identify and terminate unneeded or abandoned employee accounts, such as accounts that are left open after an employee leaves a company or when an employee transfers to a different division/role.
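
As a minimal sketch of what such an audit step might look like in practice (the account records and the 90-day inactivity threshold below are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical export of account records: user, last login, active flag.
accounts = [
    {"user": "alice", "last_login": datetime(2023, 1, 10), "active": True},
    {"user": "bob",   "last_login": datetime(2022, 6, 1),  "active": True},
    {"user": "carol", "last_login": datetime(2022, 2, 15), "active": False},
]

STALE_AFTER = timedelta(days=90)   # hypothetical inactivity threshold
audit_date = datetime(2023, 4, 1)

# Flag accounts that are disabled-but-still-present or long inactive.
for acct in accounts:
    if not acct["active"] or audit_date - acct["last_login"] > STALE_AFTER:
        print(f"review for termination: {acct['user']}")
```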

Follow Through on Privacy and Data Security Promises

The FTC tends to pursue companies that fall short of the data security promises they make to consumers.

When a company promises consumers that it will adhere to reasonable data security practices, it is the company’s responsibility to implement basic security measures and checks to fulfill this promise. Those security measures might include encryption, multifactor authentication and complex passwords.

It is also imperative that companies regularly review and update their data security practices. The FTC’s recent orders show that adhering to outdated data security measures amounts to having lax data security practices.

Individuals in charge of the company’s data security practices should stay abreast of developments in the field.

Respond to Data Security Incidents Quickly and Transparently

The FTC displays little leniency for companies and executives already on notice of data security issues within their company.

It is imperative that companies act promptly when data security events are discovered, and that companies be transparent with customers when a data security event occurs: disclosing the occurrence of the event, the measures the company had taken to prevent such an event, and the measures it is taking to rectify the event.

Companies should be vigilant in their efforts to discover data security events. Procedures and policies should be implemented to stay on top of data security events within the company’s networks and systems.

For example, adopting file integrity monitoring tools and tools for monitoring anomalous activity can assist with detecting these events.
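
At its core, file integrity monitoring hashes watched files and compares the results against a trusted baseline. The sketch below, with illustrative paths, shows the basic idea; production tools add scheduling, alerting, and protection of the baseline itself.

```python
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/hosts"]  # illustrative paths only

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths: list[str], out: str = "baseline.json") -> None:
    """Record a trusted hash for each watched file."""
    Path(out).write_text(json.dumps({p: sha256_of(p) for p in paths}))

def check(paths: list[str], baseline_file: str = "baseline.json") -> None:
    """Alert on any watched file whose contents no longer match."""
    baseline = json.loads(Path(baseline_file).read_text())
    for p in paths:
        if sha256_of(p) != baseline.get(p):
            print(f"integrity alert: {p} has changed")

# First run: build_baseline(WATCHED); subsequent runs: check(WATCHED)
```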

After implementing these safeguards, companies should test them for vulnerabilities at least once a year, as suggested in the FTC’s orders against Drizly and Chegg.

Conclusion

The FTC’s prior enforcement actions serve as a cautionary tale for companies seeking to avoid similar enforcement actions from the agency.

Engaging in efforts to educate employees on data security practices, following through on data security promises, and responding to data security incidents properly can help companies reduce the likelihood of being subject to these proceedings.

[1] https://www.ftc.gov/system/files/ftc_gov/pdf/2023151-Chegg-Complaint.pdf.

[2] https://www.ftc.gov/system/files/ftc_gov/pdf/202-3185-Drizly-Complaint.pdf.

This article originally appeared on Law360. Read more at: https://www.law360.com/articles/1561075/ftc-actions-hold-data-privacy-lessons-for-2023.

The average cost of a data breach is on the rise.

According to the 2022 ForgeRock Consumer Identity Breach Report, the average cost of recovering from a data breach in the U.S. in 2021 was $9.5 million, an increase of 16% from the previous year.

Lawsuits and regulatory fines are a significant factor contributing to the growing cost. This year, several notable class action settlements have been announced, including T-Mobile for over $350 million, the U.S. Office of Personnel Management for $63 million and Ambry Genetics Corp. for over $12.25 million.

This article looks at the alleged security failures in recent data breach litigations and proposes steps companies may consider to help reduce the legal risk of a data breach.

Recent Examples

In 2021, T-Mobile suffered a data breach that compromised personally identifiable information, or PII, for more than 54 million current, former or prospective customers.

According to the complaint, John Erin Binns accessed the data through a misconfigured gateway GPRS support node. Binns was then able to gain access to the production servers, which included a cache of stored credentials that allowed him to access more than 100 servers.

Binns was able to use the stolen credentials to break into T-Mobile’s internal network. According to the complaint, T-Mobile failed to fully comply with industry-standard cybersecurity practices, including proper firewall configuration, network segmentation, secure credential storage, rate limiting, user-activity monitoring, data-loss prevention, and intrusion detection and prevention.
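
To illustrate just one of those controls: rate limiting caps how quickly a single caller can issue requests, which can slow credential abuse and bulk data extraction. Below is a minimal token-bucket sketch; the limits are invented for illustration and are not drawn from the complaint.

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # invented limits
for i in range(12):
    if not bucket.allow():
        print(f"request {i} throttled")
```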

After learning about the breach, T-Mobile publicly announced the data breach and sent notices via brief text messages. Allegedly, T-Mobile’s text messages explicitly told some customers that their Social Security number had not been compromised.

But, by contrast, T-Mobile’s messages failed to inform customers whose Social Security number had been compromised of this fact.

As part of settlement of the class action, T-Mobile agreed to pay $350 million to customers and to boost its data security spending by $150 million over the next two years. T-Mobile also reached a $2.5 million multistate settlement with 40 attorneys general.

From 2013 through 2014, a cyberattack on the Office of Personnel Management resulted in data breaches affecting more than 21 million people, reported as among the largest thefts of personal data from the U.S. government in history.

The Office of Personnel Management allegedly failed to comply with various Federal Information Security Modernization Act requirements, to adequately patch and update software systems, to establish a centralized management structure for information security, to encrypt data at rest and in transit, and to investigate outbound network traffic that did not conform to the domain name system protocol.
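
The last point refers to monitoring for techniques such as DNS tunneling, in which stolen data is smuggled out inside DNS queries that do not look like ordinary lookups. As a toy illustration only (real network monitoring is far more sophisticated), a heuristic might flag query names with unusually long or random-looking labels:

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_suspicious(query_name: str) -> bool:
    # Encoded exfiltration tends to produce long, random-looking labels;
    # the 40-character and 4.0-bit thresholds here are arbitrary.
    label = query_name.split(".")[0]
    return len(label) > 40 or entropy(label) > 4.0

queries = [
    "www.example.com",
    "dGhpcyBsb29rcyBsaWtlIGVuY29kZWQgZXhmaWx0cmF0aW9u.evil.example",
]
for q in queries:
    if looks_suspicious(q):
        print(f"flag for investigation: {q}")
```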

The Office of Personnel Management agreed to pay a $63 million settlement with current, former and prospective government workers affected by the breach.

In January 2020, the systems of Ambry Genetics, a state-of-the-art genetic testing laboratory, were hacked, which exposed PII and protected health information of its patients.

According to the complaint, Ambry Genetics failed to take standard and reasonably available steps to prevent the data breach, including failing to encrypt information and properly train employees, failing to monitor and timely detect the data breach, and failing to provide patients with prompt and accurate notice of the data breach.

Ambry Genetics agreed to settle the class action litigation for $12.25 million plus three years of free credit monitoring and identity theft insurance services to the proposed class.

Settlement participants can also submit a claim for up to $10,000 in reimbursement for out-of-pocket costs traceable to the data security breach and submit a claim for up to 10 hours of documented time dealing with breach issues at $30 per hour.

These key data breach litigations highlight the risks of insufficient security measures and insufficient notice to affected customers in the event of a breach. To help reduce the legal risk, we suggest the following.

Limit the scope of data collection and retention to only what is necessary.

Companies should analyze business practices to determine what PII is collected, the purpose of the collected PII and how long that PII needs to be retained.

The risk and liability of a data breach can be limited by restricting collected PII to only what is necessary and discarding that data once it is no longer necessary. Document the collected data to ensure it is periodically reevaluated and discarded at the appropriate time.
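
A retention schedule can be as simple as a documented mapping from data category to a maximum retention period, checked on a regular cadence. In the sketch below, the categories, periods, and records are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical retention schedule: data category -> maximum retention period.
RETENTION = {
    "order_history":   timedelta(days=365 * 2),
    "support_tickets": timedelta(days=365),
    "marketing_leads": timedelta(days=180),
}

records = [
    {"id": 1, "category": "marketing_leads", "collected": datetime(2022, 1, 5)},
    {"id": 2, "category": "order_history",   "collected": datetime(2022, 9, 1)},
]

review_date = datetime(2023, 1, 15)
for rec in records:
    if review_date - rec["collected"] > RETENTION[rec["category"]]:
        print(f"record {rec['id']} exceeds retention; schedule for deletion")
```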

Implement reasonable industry-standard security measures.

Reasonable, basic security measures generally stem from industry standards and practices, regulations and guidance, and federal and state laws.

As some examples, recent data breach litigations highlight the following as reasonable, expected security measures:

  • Encrypting sensitive data;
  • Implementing multifactor authentication;
  • Patching and updating software systems;
  • Securing cached information and login credentials;
  • Monitoring the network for threats; and
  • Responding to security incidents.
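
As one concrete illustration of the first measure, authenticated symmetric encryption of sensitive fields at rest is a common baseline. The sketch below uses the Fernet recipe from the third-party Python cryptography package; in a real deployment the key would come from a key management service, which is omitted here.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key comes from a key management service, never from
# source code or a file stored alongside the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

ssn = b"123-45-6789"            # illustrative sensitive field
token = f.encrypt(ssn)          # ciphertext to store at rest
assert f.decrypt(token) == ssn  # decrypt only when actually needed
```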

Implement a comprehensive security program and team with oversight and input from company leadership.

Companies should build a security team that is responsible for setting security policies and procedures, documenting and managing the collected data, assessing the risk of a data breach, applying security controls, training employees on data security awareness and policies, monitoring for potential data breaches, and auditing the effectiveness of the security program.

The security team should have support from leadership and typically includes an interdisciplinary team of stakeholders across a business, including the information technology department that is well-versed in computer technology and data security, legal to monitor and ensure compliance with data protection laws and mitigate legal risk, and a lead privacy or data protection authority — e.g., a chief data protection officer.

The team should develop and be prepared to follow a strategy to address a suspected data breach or security incident, including fixing vulnerabilities that may have caused a breach, preventing additional data loss, fixing vulnerabilities the breach may have caused and notifying appropriate parties.

The team should ensure the response strategy is up-to-date with state and federal laws.

Be accurate in public disclosures and notices.

All 50 states, the District of Columbia, Puerto Rico and the Virgin Islands have enacted legislation requiring notification of security breaches involving PII.

The notice generally should include how the breach happened, what information was taken, how the thieves have used the information, if known, what actions the business has taken to remedy the situation, what actions the business is taking to protect individuals — e.g., offering free credit monitoring services — and how to reach the relevant contacts in the business.

Failing to accurately report the breach — for example, failing to accurately identify what data was compromised — to customers could result in liability for the company as well as personal liability for senior employees and executives responsible for responding to the data breach.

Conclusion

Taking these preventative measures to secure PII, maintain compliance with data protection guidelines and laws, and develop a plan to address and respond to a suspected breach can help businesses to reduce the likelihood of potential civil liability.

This article originally appeared on Law360. Read more at: https://www.law360.com/articles/1555322.

Google has agreed to pay a historic $391.5 million to settle with attorneys general from 40 U.S. states over allegations that it misled users about its location tracking and collection practices. It is the largest consumer privacy settlement ever led by state attorneys general.

The attorneys general opened the Google investigation following a 2018 Associated Press article that revealed Google “records your movements even when you explicitly tell it not to.” According to the article, Android users were misled into thinking that location tracking is turned off when the “Location History” setting is “paused” or disabled. Google, however, could continue to track a user’s location through other Google apps and settings. For example, another account setting—turned on by default and ambiguously named “Web & App Activity”—enabled the company to collect, store, and use the customers’ personally identifiable location data.

In addition to the fine, Google agreed to improve transparency regarding its location data tracking and collection practices. Specifically, Google must:

  1. Show additional information to users whenever they turn a location-related account setting “on” or “off”;
  2. Make key information about location tracking unavoidable for users (i.e., not hidden); and
  3. Give users detailed information about the types of location data Google collects and how it’s used at an enhanced “Location Technologies” webpage.

The Google settlement highlights the importance for companies to (1) be transparent in their data collection practices and (2) accurately convey data collection practices in a user-accessible manner.

Yesterday, October 12, 2022, was the first time that a case under the Illinois Biometric Information Privacy Act (BIPA) went to trial, and the result was a big win for the Plaintiffs, more than 44,000 truck drivers whose fingerprints were scanned for identity verification purposes without any informed permission or notice. BIPA is an Illinois state law that requires informed, written consent before personal biometric information is captured, used, and stored. The law also provides a private right of action, allowing individuals whose biometric information is captured, used, or stored without informed, written consent to bring suit. In 2019, in Rosenbach v. Six Flags, the Illinois Supreme Court held that failure to comply with the statute alone constitutes harm sufficient to confer standing. This allows for suits when the statute is violated, regardless of whether the biometric information that is captured is misused in any way or the person whose information is captured experiences any real-world harm. Since the Rosenbach ruling, many BIPA cases have been brought, but until now they have always resulted in settlement.

The case, Richard Rogers v. BNSF Railway Company (Case No. 19-C-3083, N.D. Ill.), is noteworthy not only because it was the first trial in a BIPA case, but also because of (1) how damages were proved, and (2) the fact that the defendant was not itself the party capturing the data; rather, it had contracted out identity verification to a third party.

First, regarding damages, the jurors were asked only to indicate how many times the defendant recklessly or intentionally violated the law. They answered consistent with the defense expert’s estimated number of drivers who had their fingerprints registered: 45,600 times. The Court then entered a judgment of $228 million, based on the jury’s finding of 45,600 violations and the language of the statute, which provides for up to $5,000 for every willful or reckless violation (45,600 × $5,000 = $228 million) and $1,000 for every negligent violation.

Second, regarding the defendant, BNSF Railway was not the party that actually collected anyone’s fingerprints. Rather, BNSF hired a third-party company, Remprex LLC, to process drivers at the gates of the railroad’s Illinois rail yards. BNSF argued that it did not control the “method and manner” of Remprex’s work. Counsel for the Plaintiff argued that ignorance is not a defense to the law, and that if BNSF did not know about BIPA then it was acting recklessly (the railroad company has been around for 150 years in a highly regulated industry, and its subcontractor, Remprex, was just a two-person start-up when it was first hired). Counsel for Plaintiff also argued that a party cannot “contract out” its obligation to follow the law. Compellingly, Plaintiff’s counsel also pointed to the fact that BNSF continued its biometric data processing activities even after suit was first filed in 2019.

Today, October 7, 2022, President Joe Biden signed an executive order implementing a new privacy framework for data being shared between Europe and the United States. The new framework is called the “Trans-Atlantic Data Privacy Framework,” and it will (hopefully) serve to replace the prior framework, known as “Privacy Shield,” which was struck down by the European Court of Justice in July 2020 (in a case called Schrems II) on grounds that it did not adequately protect EU citizens from U.S. government surveillance. We wrote about the Schrems II decision here, including how it not only struck down the “Privacy Shield” framework, but also potentially called into question all EU-U.S. data transfers.

The new framework was the result of over a year of detailed negotiations between the U.S. and EU, and it is believed to address the concerns raised by the Court of Justice of the European Union (CJEU) in the Schrems II decision. If the European Commission agrees and issues an adequacy decision, the framework will serve to re-enable the flow of data between the EU and U.S., a $7.1 trillion economic relationship. So, how did the U.S. address the CJEU’s concerns?  The key principles are:

  1. a new set of rules and binding safeguards to limit access to data by U.S. intelligence authorities to what is necessary and proportionate to protect national security, and U.S. intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards;
  2. a two-tier redress system is created to investigate and resolve complaints filed by EU citizens if they are concerned their personal information has been improperly collected by the U.S. intelligence community, including the establishment of a new data privacy court (a data protection review court) inside the Justice Department to investigate valid complaints; and
  3. the creation of strong obligations for companies processing data transferred from the EU, including a continued requirement to self-certify their adherence to the Principles through the U.S. Department of Commerce.

The next step is for the European Commission to assess the framework and (hopefully) issue an adequacy decision. This process could take many months.  Unless and until an adequacy decision is issued, businesses will have to continue to rely on other means for transferring EU personal data to the U.S., such as binding corporate rules or standard contractual clauses.

California Attorney General Rob Bonta announced yesterday a settlement reached with beauty product retailer, Sephora, Inc. (Sephora), resolving allegations that Sephora violated various provisions of the California Consumer Privacy Act (CCPA).  Specifically, it was alleged that Sephora failed to:

  • Disclose to consumers that it was selling their personal information
  • Process user requests to opt out of sale of personal information in accordance with the CCPA
  • Cure these violations within the 30-day period currently allowed by the CCPA.

Attorney General Bonta issued a press release saying: “I hope today’s settlement sends a strong message to businesses that are still failing to comply with California’s consumer privacy law.  My office is watching, and we will hold you accountable.”

In Sephora’s case, Sephora was allowing third-party companies to install tracking software on its website and in its app so that those third parties could monitor customers as they shopped.  The third parties were tracking, inter alia, what kind of computer the customer was using, what products/brands the user put in her shopping cart, and the user’s location.  Sephora was using the information obtained from these third-party trackers to more effectively target potential customers.  Sephora’s arrangement with these third-party companies constituted a “sale” under the CCPA, which required Sephora to allow customers to opt out of such information-sharing.

Under the settlement agreement, Sephora agreed to:

  • Pay $1.2 million
  • Expressly disclose that it sells data
  • Provide opt-outs for the sale of personal information, including via the Global Privacy Control
  • Conform its service provider agreements to the CCPA’s requirements; and
  • Report to the AG on its sales of personal information, the status of its service provider relationships, and its efforts to honor Global Privacy Control

For more information on the Sephora settlement agreement, and on the Attorney General’s ongoing enforcement actions with respect to failures to process opt-out requests, please see the AG’s Press Release.