It is fairly standard language in privacy policies: “This privacy policy may be amended or updated from time to time, so please check back regularly for updates.”  It sends the message that the company can change its data practices and policies without ever notifying the end-user. It tells the end-user that the burden is on them to check back. And it signals that the end user has no control. While the end-user may have agreed to turn over their data when the company’s practices and policies were very conservative, the company can change those practices and policies the very next day without the end-user ever knowing. I mean, let’s face it, how often do you read a privacy policy in the first place, let alone “check back” with it to see if it’s been updated?

Recently the Federal Trade Commission (FTC) issued a warning in its Technology Blog titled: “AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive.” The post states, inter alia: “It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”

The post goes on to describe several past examples of the FTC challenging companies for engaging in unfair and deceptive trade practices after they liberalized their privacy policies and practices, without notifying consumers, after consumers had agreed to more restrictive terms. It then summarizes: “Even though the technological landscape has changed between 2004 and today, particularly with the advent of consumer-facing AI products, the facts remain the same: A business that collects user data based on one set of privacy commitments cannot then unilaterally renege on those commitments after collecting users’ data. Especially given that certain features of digital markets can make it more difficult for users to easily switch between services, users may lack recourse once a firm has used attractive privacy commitments to lure them to the product only to turn around and then back out of those commitments.”

The take-away: if you want to use, share, or otherwise process data in a new way, you need to provide actual notice to end-users before you do it. The FTC warns that it will “continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the ‘rules of the game’ on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development.”

So, if your privacy policy or terms of service advise end-users to “check back” for updates, you may want to update those policies. The order of things is to notify first, then change your data practices—not the other way around.

On Oct. 30, President Joe Biden issued an executive order on safe, secure and trustworthy artificial intelligence.[1]

The executive order provides a sprawling list of directives aimed at establishing standards for AI safety and security and protecting privacy.

While the executive order acknowledges the executive branch’s lack of authority for any lawmaking or rulemaking, AI stakeholders and their advisers, as well as companies using or planning to use AI, should consider the directives detailed in the executive order as a good indicator of where the regulatory and legislative landscape may be heading in the U.S.

At a minimum, the detailed directives will likely be considered important indicators for establishing best practices when it comes to the development and use of AI.  

Broad Definition of Artificial Intelligence

The executive order adopts the definition of artificial intelligence from Title 15 of the U.S. Code, Section 9401, the statutory codification of the National AI Initiative Act of 2020.

The term “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:

  • Perceive real and virtual environments;
  • Abstract such perceptions into models through analysis in an automated manner; and
  • Use model inference to formulate options for information or action.

While many of the current headlines surrounding the use of AI, and the calls for regulation, concern generative AI, or GAI, models and applications, the definition provided in the executive order is much broader than GAI alone and essentially applies to all types of AI, including AI that has been in popular use for many years.

Thus, despite the current focus on GAI models, companies employing any type of AI technology should be on notice that their activities might be implicated by the directives outlined in the executive order.

Specific Directives to Agencies

The executive order outlines specific directives to numerous federal agencies for examining and addressing the use of AI in certain sectors.

Many of these directives concern federal agencies in “critical” fields like health care, financial services, education, housing, law enforcement and transportation.

The executive order, while lacking the authority to create new privacy laws, urges these agencies to provide guidance with respect to how existing privacy standards and regulations apply to AI. For example, the executive order includes the following directives:

  • With respect to the health and human services sector, the executive order urges the secretary of health and human services to provide guidance on the “incorporation of safety, privacy, and security standards into the software development lifecycle for protection of personally identifiable information”;
  • The executive order also directs the secretary of health and human services to issue guidance, or take other action, in response to noncompliance with privacy laws as they relate to AI;
  • The secretary of education is required to develop an AI toolkit that includes guidance for designing AI systems to align with privacy-related laws and regulations in the educational context; and
  • The executive order encourages the Federal Trade Commission to consider whether to exercise its existing authorities to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.

Though the executive order urges the application of existing privacy laws to AI, the executive order also recognizes the executive branch’s lack of lawmaking authority and makes a call to the U.S. Congress to pass bipartisan data privacy legislation.

The executive order also addresses GAI and, specifically, reducing the risks posed by synthetic content, defined as “information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.”

The executive order directs the secretary of commerce, in consultation with the heads of other relevant agencies, to submit a report identifying the existing standards, tools, methods and practices, as well as the potential development of further science-backed standards and techniques, for:

  • Authenticating content and tracking its provenance;
  • Labeling synthetic content, such as by watermarking;
  • Detecting synthetic content;
  • Preventing generative AI from producing child sexual abuse material or producing nonconsensual intimate imagery of real individuals — to include intimate digital depictions of the body or body parts of an identifiable individual;
  • Testing software used for the above purposes; and
  • Auditing and maintaining synthetic content.

Ultimately, the report will be used to issue guidance to agencies for labeling and authenticating such content that they produce or publish.
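One technical building block behind the content authentication and provenance tracking contemplated here is cryptographic hashing: publishing a fingerprint of a piece of content lets anyone later verify that it has not been altered. The sketch below is a minimal, hypothetical illustration in Python; real provenance schemes layer digital signatures and metadata manifests on top of this idea.

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest usable as a simple provenance fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: an agency records a fingerprint when publishing content.
original = b"official agency photograph, published 2023-11-06"
fingerprint = content_fingerprint(original)

# Later, anyone holding the fingerprint can check integrity:
print(content_fingerprint(original) == fingerprint)           # True: unmodified
print(content_fingerprint(b"edited version") == fingerprint)  # False: altered
```

Watermarking, by contrast, embeds the signal in the content itself so that it travels with copies; the report contemplated by the executive order is meant to evaluate both families of techniques.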

The executive order includes several directives that call for the development of guidelines, standards and best practices for AI safety and security.

The executive order instructs the secretary of commerce, acting through the director of the National Institute of Standards and Technology and in coordination with the secretary of energy and the secretary of homeland security, to

establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including developing companion resources to the AI Risk Management Framework and to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models, as well as launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.

The executive order directs the secretary of homeland security to establish an Artificial Intelligence Safety and Security Board as an advisory committee, which will be made up of AI experts from the private sector, academia and government.

This newly established board will provide to the secretary of homeland security and the federal government’s critical infrastructure community advice, information, and recommendations for improving security, resilience and incident response related to AI usage in critical infrastructure.

Another important topic addressed by the executive order relates to patents and copyrights — the patentability of inventions developed using AI, including the issue of inventorship, and the scope of copyright protection for works produced using AI.

The executive order directs the undersecretary of commerce for intellectual property and the director of the U.S. Patent and Trademark Office to publish guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, and other considerations at the intersection of AI and IP, which could include updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies.

The executive order directs the U.S. Copyright Office to issue recommendations to the president on potential executive actions relating to copyright and AI.

The recommendations shall address any copyright and related issues discussed in the Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.

Development of Privacy-Enhancing Technologies

The executive order also supports the research and development of privacy-enhancing technologies, or PETs, that mitigate privacy risks arising from data processing.

The order defines PETs as including secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy and synthetic-data-generation tools. The executive order’s support of these technologies underscores the importance of taking a privacy-by-design approach during the development lifecycle.
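To make one of these techniques concrete, here is a minimal sketch of differential privacy’s core mechanism: adding calibrated Laplace noise to an aggregate statistic so that no single individual’s data meaningfully changes the released value. This is an illustrative toy, not a production implementation; vetted libraries such as OpenDP provide hardened versions.

```python
import math
import random

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count via the Laplace mechanism for epsilon-differential privacy.

    Noise drawn from Laplace(0, sensitivity / epsilon) ensures that adding or
    removing any one individual changes the output distribution by at most
    a factor of e**epsilon.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical example: privately release how many users share a sensitive trait.
noisy_count = dp_count(true_count=1042, epsilon=0.5)
```

Smaller values of epsilon mean more noise and stronger privacy; the released figure is close to, but deliberately not exactly, the true count.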

While the executive order cannot require private companies to adopt PETs, the executive order does require that federal agencies use PETs when appropriate. However, the fact that the executive order cannot set this requirement for private companies does not insulate these companies from liability.

For example, FTC enforcement actions often assess whether an entity adopted “reasonable” privacy and data security measures based on technology that is readily available.

Because the executive order seeks to increase the development and adoption of PETs, it is only a matter of time before agencies like the FTC consider the use of these PETs necessary for carrying out reasonable privacy and data security measures.

Regulations for Developers of Dual-Use Foundation Models

Applying the Defense Production Act, the executive order requires that developers of dual-use foundation models share safety test results with the U.S. government. The executive order defines “dual-use foundation model” to mean:

[A]n AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.

This definition includes models that implement technical safeguards designed to prevent users from taking advantage of the model’s unsafe capabilities. For reference, existing models encompassed by this definition include OpenAI LLC’s GPT-4, some versions of Meta Platforms Inc.’s Llama 2 model, Anthropic PBC’s Claude 2 and Google LLC’s PaLM 2 model.

Given that, in the AI space, bigger is generally better, the executive order’s requirement that dual-use foundation models contain at least tens of billions of parameters will likely not result in a significant carve-out as GAI models continue to progress.

Thus, the executive order’s regulations with respect to these models will likely apply to both incumbent companies as well as companies looking to enter the space. Additionally, it is unclear how far this definition will extend.

For example, does this definition extend to companies that further train and fine-tune a third party’s foundation model? It is possible that the executive order’s definition of dual-use foundation model includes models fine-tuned using services like Amazon.com Inc.’s Bedrock, which allows developers to further train foundation models.

Accordingly, companies that further train a third party’s foundation model should be on notice of the potential applicability of this definition.

Relying upon the Defense Production Act, the executive order requires companies developing or demonstrating an intent to develop dual-use foundation models to submit information, reports, and records regarding the training, development, and production of such models. The submissions would include the results of red-team safety tests and documentation concerning ownership and protection of the model’s weights and parameters.

The executive order defines “AI red-teaming” to encompass structured testing efforts to find flaws and vulnerabilities in an AI system and provides that the secretary of commerce will establish guidelines for conducting AI red-teaming tests.

However, the executive order does not provide any guidance as to how companies can balance submitting the required documents and information while still maintaining trade secret protection, satisfying obligations of confidentiality or complying with contract provisions and legal requirements that may be applicable.

Navigating the tension between compliance and these other considerations will certainly be a primary concern for companies required to abide by these reporting requirements.

Notably, in line with the July 21 voluntary commitments from leading AI companies, the executive order reinforces the de facto standard to maintain model weights as highly confidential trade secrets.

The executive order provides that the secretary of commerce shall solicit input from the private sector, academia, civic society and other stakeholders concerning the risks and benefits of models having widely available weights.

Companies developing or seeking to develop open-source large language models will want to monitor developments on this front.

The executive order also suggests that the reporting requirements under the Defense Production Act would additionally apply to models and computing clusters of a certain size.

Specifically, until the secretary of commerce defines different technical conditions, these reporting requirements are deemed to additionally apply to:

Any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations.

Any computing cluster that has a set of machines physically co-located in a single data center, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
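For a rough sense of scale, training compute is often estimated with the industry rule of thumb of roughly six floating-point operations per parameter per training token; this heuristic is an assumption for illustration and is not defined anywhere in the executive order. A hypothetical back-of-the-envelope check against the 10^26 reporting threshold:

```python
# Heuristic: training FLOPs ~ 6 x parameters x training tokens (rule of thumb).
REPORTING_THRESHOLD_FLOPS = 1e26  # general model threshold in the executive order

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    return 6.0 * num_parameters * num_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.1e}")                      # 8.4e+23
print(flops > REPORTING_THRESHOLD_FLOPS)   # False: below the 1e26 threshold
```

Under this heuristic, even a sizable 70-billion-parameter training run sits two orders of magnitude below the general threshold, which suggests the compute-based trigger is aimed at the very largest frontier models, at least until the secretary of commerce updates the technical conditions.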

The executive order does not constrain these additional categories of models and computing clusters to only dual-use foundation models using this amount of computing power.

Because these technical conditions are subject to change and updates on a regular basis, companies developing AI models not falling within the definition of a dual-use foundation model should be on notice of the potential applicability of these provisions.


The foregoing highlights just some of the many directives included in the extensive executive order.

The executive order is jam-packed with calls to establish numerous boards, institutes, task forces and groups each tasked with differing responsibilities around establishing new standards for AI safety and security, protecting Americans’ privacy, advancing equity and civil rights, standing up for consumers and workers, promoting innovation and competition, and advancing American leadership around the world.

Anyone involved in the AI space would be well advised to keep a close watch on the follow through of these numerous directives.

They will, no doubt, shape the regulatory and legislative landscape in the U.S. as it evolves over the coming months and years, and may help strike the balance between the freedom to innovate in this rapidly evolving space and the need to impose some level of regulation. Much more to follow.

Rothwell Figg members Jenny Colgate and Jennifer Maisel contributed to this article.

This article was originally published in Law360’s Expert Analysis section on November 6, 2023.



Following its many warnings of impending enforcement action against entities providing Artificial Intelligence (“AI”) products, the FTC has officially launched an investigation into OpenAI[1]. The FTC initiated its investigation on the heels of the Center for AI and Digital Policy’s July 10, 2023 supplement to its March 30, 2023 complaint, which requests that the FTC investigate OpenAI. The FTC’s investigation of OpenAI also follows multiple civil class action lawsuits filed against OpenAI in the past month alleging numerous privacy and intellectual property violations.

According to the Civil Investigative Demand (“CID”) sent to OpenAI, the FTC is focused on whether OpenAI has (1) engaged in unfair or deceptive privacy and data security practices or (2) engaged in unfair or deceptive practices relating to risks of harm to consumers[2]. Pursuant to these concerns, the 20-page demand sets forth numerous interrogatories and document requests directed toward almost every aspect of OpenAI’s business.

Specifically, among other inquiries, the CID requests information about how OpenAI handles personal information at various points in the development and deployment of its Large Language Models (“LLMs”) and LLM Products (i.e., ChatGPT). For example, the FTC is concerned with whether OpenAI removes, filters, or anonymizes personal information appearing in training data. Similarly, the FTC requests that OpenAI explain how it mitigates the risk of its LLM Products generating outputs containing personal information.
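To illustrate the kind of training-data filtering the FTC is asking about, here is a minimal, hypothetical sketch of rule-based PII redaction in Python. Real pipelines rely on far more robust techniques (named-entity recognition, allow-lists, human review); the patterns below are simplified assumptions for illustration only.

```python
import re

# Simplified, illustrative patterns; production systems use far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII in training text with category placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

sample = "Contact jane.doe@example.com or 555-867-5309."
print(redact_pii(sample))  # Contact [EMAIL] or [PHONE].
```

Even a sketch like this makes the compliance question concrete: a regulator can ask what categories are filtered, at what stage of the pipeline, and how the filters are tested.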

Generally speaking, the CID asks about OpenAI’s policies and procedures for disclosing, identifying, and mitigating risks. Not surprisingly, the FTC’s investigation includes questions about OpenAI’s response to the publicly disclosed March 20, 2023 data breach and additional inquiries concerning OpenAI’s awareness of any other data security vulnerabilities or “prompt injection” attacks. Relatedly, the CID probes into OpenAI’s collection and retention of personal information, reflecting the exact “data minimization” principles that the FTC has previously emphasized in numerous enforcement actions.

Also, the interrogatories inquire into OpenAI’s policing of third-party use of its Application Programming Interface (“API”). Specifically, the FTC requests information concerning how OpenAI restricts third-parties from using the API and any required technical or organizational controls that third parties with access to the API must implement. These inquiries suggest that the FTC seeks to hold OpenAI accountable not only for its own use of its LLMs but also for third-party use of its LLMs. Interestingly, the CID comes less than a week after OpenAI announced that it would make the GPT-4 API generally available to all paying customers[3].

While the FTC’s investigation is a first for generative AI products, its demands are indicative of themes consistent with prior enforcement actions—transparency, data minimization, risk identification, and risk mitigation. With many generative AI products at risk until the resolution of the FTC’s investigation, the CID sends a message that AI companies should focus on implementing privacy by design and responsible AI practices centered on transparency and security. Additionally, although collaboration is a significant emphasis in the development of GAI technologies, businesses should be mindful as to whom and to what extent they provide third-party access to their LLMs.




Just last week, researchers at Robust Intelligence were able to manipulate NVIDIA’s artificial intelligence software, the “NeMo Framework,” to ignore safety restraints and reveal private information. According to reports, it took the Robust Intelligence researchers only hours to get the NeMo framework to release personally identifiable information from a database.[1] After these vulnerabilities were exposed, NVIDIA stated that it has fixed one of the root causes of the flaw. The issues with NVIDIA’s AI software follow a significant data security incident concerning ChatGPT in March 2023. After this incident, OpenAI published a report explaining that a bug caused ChatGPT to expose users’ chat queries and personal information.[2] Like NVIDIA’s response, OpenAI patched this bug once it was discovered. Relatedly, in April 2023, OpenAI announced a “bug bounty” program that invites “ethical hackers” to report vulnerabilities, bugs, or security flaws discovered in its systems.

These data security incidents occur against the backdrop of what has been termed an “AI Arms Race.”[3] This “race” kicked off at the beginning of the year when the popularity of ChatGPT spurred an influx of consumer-facing artificial intelligence products. Popular examples of these emerging AI products include Microsoft’s Bing chatbot and Google’s Bard chatbot. The urgency to release these products to the public is exemplified by Microsoft’s CEO, Satya Nadella, who in February 2023 expressed that “[a] race starts today. . . We’re going to move, and move fast.”[3] In light of the race to get artificial intelligence products on the market, one could argue that technology companies have opted for a “release first, fix later” approach to data security for these products.

Of course, bugs are inevitable in complex software. In fact, “bug bounty” programs are not new. Many prominent technology companies utilize these types of programs, and the FTC often requires that entities establish similar programs for fixing known security vulnerabilities. But are these programs sufficient to protect AI companies from liability for data security incidents? The ease with which hackers can exploit the vulnerabilities in these artificial intelligence products, and the speed at which these products hit the market, beg the question of whether the NVIDIA and OpenAI data leaks were preventable at the outset. Indeed, AI developers know that today’s AI products pose consumer privacy risks. This reality is underscored by testimony from Gary Marcus, a leader in the field, who expressed that the existing AI systems do not adequately protect our privacy.[4] Given how widespread and well acknowledged these security gaps are, these companies could be held liable for the vulnerabilities in their AI products, regardless of how a company reacts after those vulnerabilities are exploited.

In the past, companies have faced enforcement based at least partly on the lack of reasonable measures taken to prevent security vulnerabilities. Often, enforcement on these grounds is brought by the FTC. For example, in a 2023 complaint against Ring, the FTC admonished Ring’s “lax attitude toward privacy and security.” In a 2013 enforcement action, the FTC alleged that HTC America “introduced numerous security vulnerabilities … which, if exploited, provide third-party applications with unauthorized access to sensitive information and sensitive device functionality.” 

Enforcement actions brought on similar grounds have already been mentioned in connection with today’s artificial intelligence products. FTC Commissioner Alvaro M. Bedoya warned in his April 2023 “Early Thoughts on Generative AI” remarks before the International Association of Privacy Professionals, that “[the FTC has] frequently brought actions against companies for the failure to take reasonable measures to prevent reasonably foreseeable risks.”[5] This warning echoes the FTC’s urging that AI companies “take all reasonable precautions before [the generative AI product] hits the market” in a blog post about deceptive AI.[6] While not directly addressing data privacy concerns, this blog post further provides that “deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”[6] Commissioner Alvaro M. Bedoya’s May 31, 2023 statement on the recent settlement with Amazon emphasizes that “[m]achine learning is no excuse to break the law,” sending an unequivocal message to technology companies.[7] The FTC’s public statements concerning how AI companies should be safeguarding consumers from the potential harms caused by AI products might be a harbinger of privacy enforcement actions to come.

While there has yet to be a privacy enforcement action against a technology company concerning its AI product, it is likely only a matter of time. Indeed, in March, the Center for AI and Digital Policy filed a complaint with the FTC asking it to investigate OpenAI, stating that its GPT-4 product is a privacy and public safety risk.[8] Amidst calls for regulation from industry leaders, the data security practices (or lack thereof) used by developers of these technologies will only face even greater scrutiny in the future. 

Many companies may be quick to dismiss Washington’s “My Health, My Data” (MHMD) as a health data law that does not apply to them. But there are many reasons you should think twice before disregarding this law. 

First, unlike the state privacy laws that have been passed so far, MHMD applies to all companies regardless of their revenues, how much personal data they process, or what percent of their annual revenue is generated from processing or selling personal data.  Also, there is no nonprofit carve-out for MHMD.  So even if you process a tiny bit of health data (and see below regarding the expansive definition of what constitutes “consumer health data”), MHMD may apply.

Second, in some respects it is beyond a business’s control whether it needs to comply with MHMD.  While MHMD applies to Washington residents and to any “regulated entity” that “conducts business in Washington or that produces or provides products or services that are targeted to consumers in Washington,” MHMD also applies to all “natural person[s] whose consumer health data is collected in Washington.”  Broadening the scope even further, “collect” is expansively defined to mean more than just the gathering of data in Washington.  If you buy, rent, access, retain, receive, acquire, infer, derive, “or otherwise process” consumer health data in the state, then you must comply.  Some of these verbs are extremely broad – such as “accessing,” “acquiring” and (perhaps broadest yet) “inferring” or “deriving” health data in Washington.  Moreover, the catch-all “or otherwise process” should be enough to make every company scratch its head as to whether compliance is necessary.  In other words, do you do anything with data that can be linked to the state of Washington in any way that could arguably be considered “consumer health data”?

Third, “consumer health data” is defined very broadly as “personal information that is linked or reasonably linkable to a consumer that identifies the consumer’s past, present, or future physical or mental health status.”   MHMD does not define “mental health” or “physical health”; however, one can be sure that this broad definition of “consumer health data” includes information that companies may not ordinarily think of as health data.  For example, the Centers for Disease Control and Prevention website says that mental health “includes our emotional, psychological, and social well-being” and affects “how we think, feel, and act,” as well as “how we handle stress, relate to others, and make health choices.”  Does this mean that MHMD applies to all data concerning one’s emotions, psychological well-being, social well-being, how one is thinking, how one is feeling, how one is acting, how one is dealing with stress, how one is relating to others, and health choices one is making?  With respect to physical health, the National Institutes of Health website talks about the following: what you put into your body, how much activity you get, your weight, how much you sleep, whether you smoke, and your stress levels.  Does data regarding all of the above really constitute “consumer health data”?  If so, any companies that come into contact with data concerning food or drinks, activities of any kind, anything related to one’s size/weight (such as clothing) and/or sleep may be subject to MHMD.

Indeed, just as the term “PII” was called into question in recent years because to a certain extent all data may be personally identifiable, the same may be true for WA’s expansively defined “consumer health data.”  Indeed, rather than defining what is “consumer health data,” it may be easier to determine categories that are clearly outside of the definition.  And because MHMD contains a private right of action, you can be sure that plaintiffs are going to assert very expansive definitions of “consumer health data” in litigations and threatened litigations. 

Rothwell Figg remains committed to assisting its clients with all of their privacy, data protection, and IP needs, including litigations and threatened litigations; counseling on security breaches, data governance, AI, and IP; negotiating and drafting contracts; securing IP protection; and drafting privacy and AI policies.

In this corner, the U.S. Federal Trade Commission (FTC): 

“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

In that corner, Meta (formerly, Facebook):

Meta head of communications Andy Stone called the FTC’s announcement a “political stunt,” vowed to “vigorously fight” the action, and criticized the FTC for allowing Chinese-owned social media app TikTok to “operate without constraint on American soil.”

Yesterday, May 3, 2023, the FTC ordered Meta to show cause as to why the Commission should not modify its 2020 Order and enter a new proposed order based on Facebook’s record of alleged violations and its independent assessor’s findings that Facebook has not complied with the requirements of its privacy program. Specifically, the independent assessor found that Facebook “misled parents about their ability to control with whom their children communicated through its Messenger Kids app, and misrepresented the access it provided some app developers to private user data,” in breach of the FTC’s 2012 and 2020 Privacy Orders with Meta and in violation of the Children’s Online Privacy Protection Act (“COPPA”).

The changes to the 2020 Privacy Order that the FTC has proposed, if ordered, would undoubtedly be impactful in many respects and would apply to the full complement of products and services offered by Meta, including Facebook, Instagram, WhatsApp, Messenger, and Meta Quest.  The proposed changes include:

  • Blanket prohibition against monetizing data of children and teens under 18: Meta and all its related entities would be restricted in how they use the data they collect from children and teens. The company could only collect and use such data to provide the services or for security purposes, and would be prohibited from monetizing this data or otherwise using it for commercial gain even after those users turn 18.
  • Pause on the launch of new products, services: The company would be prohibited from releasing new or modified products, services, or features without written confirmation from the assessor that its privacy program is in full compliance with the order’s requirements and presents no material gaps or weaknesses.
  • Extension of compliance to merged companies: Meta would be required to ensure compliance with the FTC order for any companies it acquires or merges with, and to honor those companies’ prior privacy commitments.
  • Limits on future uses of facial recognition technology: Meta would be required to disclose and obtain users’ affirmative consent for any future uses of facial recognition technology. The change would expand the limits on the use of facial recognition technology included in the 2020 Order.
  • Strengthening existing requirements: Some privacy program provisions in the 2020 Order would be strengthened, such as those related to privacy review, third-party monitoring, data inventory and access controls, and employee training. Meta’s reporting obligations also would be expanded to include its own violations of its commitments.

The undertaking required even to begin complying with these proposed changes appears massive and would impact Meta and its products and services not only operationally but financially as well.  That is not to say these proposed modifications are unwarranted if the alleged violations prove to be true.  The takeaway here… the FTC means business!

This current order is the third time the FTC has taken action against Meta – so let’s call this “Round 3.”  Meta has 30 days to respond to the proposed findings from the FTC’s investigation.  Stay tuned!

On March 16, 2023, the French Data Protection Agency (the “CNIL”) imposed a fine of €25,000 on the company CITYSCOOT in connection with a finding that CITYSCOOT failed to comply with the obligation to ensure data minimization, as required by Article 5.1.c of the GDPR. The facts that led to the decision included a finding that, during the short-term rental of a scooter, CITYSCOOT collected (and stored) the vehicle’s geolocation data every 30 seconds. CITYSCOOT maintained that the information was being processed and stored for four reasons: (1) processing of traffic offenses; (2) processing of consumer complaints; (3) user support (to call for help if a user falls); and (4) management of claims and thefts. The CNIL found that none of these purposes justified the collection of geolocation data in such detail as that carried out by the company, and that CITYSCOOT’s practices were very intrusive on the private life of users.

“What exactly does data minimization require” could become a hot topic for U.S. privacy litigation in the coming years, particularly given that the majority of U.S. states that have adopted general privacy laws thus far require data minimization by statute. (The Iowa Consumer Data Protection Act (ICDPA) does not have a statutory data minimization requirement.) For example, under the California Privacy Rights Act (CPRA), any information collected must be “reasonably necessary and proportionate” to either the purposes for which it was collected or another disclosed purpose similar to the context in which it was collected. Similarly, the Virginia Consumer Data Protection Act (VCDPA) expressly provides a data minimization requirement, which it defines as an “[o]bligation to limit data collection to what is adequate, relevant, and reasonably necessary for the disclosed purposes.” The Colorado Privacy Act (CPA) provides that “[c]ontrollers must assess and document the minimum types and amount of Personal Data needed for the stated processing purposes.” The Connecticut Data Privacy Act (CTDPA) provides that controllers must “[l]imit collection of personal data to what is adequate, relevant, and reasonably necessary for the specific purpose(s) for which the data is processed (also known as ‘data minimization’).” The Utah Consumer Privacy Act (UCPA) mentions “purpose specification and data minimization” among the responsibilities of controllers.

In view of the enforcement of “data minimization” in Europe and the nearly universal adoption of “data minimization” obligations in the United States, it would behoove any business to regularly evaluate not just what types of data it is collecting, but also how much and how frequently it is collecting it. Also, as part of annual privacy mapping and updating of privacy policies, it is a good idea to ensure that the identified “purposes” for collecting data continue to be accurate and complete. 

Rothwell Figg remains committed to assisting its clients with all of their privacy needs, including not just updating policies and contracts, but also consulting with businesses on “best practices” for data management.  Also, because our privacy team is highly technical and made up of experienced litigators, we are ready should complex questions, security breaches, or litigation arise.

A number of federal privacy laws provide private rights of action, allowing individuals (or classes) to bring claims alleging violations of certain privacy laws. Some examples of these statutes include the Video Privacy Protection Act (VPPA), the Telephone Consumer Protection Act (TCPA), and the Fair Credit Reporting Act (FCRA). Moreover, claims under some state privacy laws can be “removed” to federal court in certain circumstances, in which case Article III standing must be shown in those cases, too. The most frequent examples are claims under Illinois’ Biometric Information Privacy Act (BIPA).

However, what it takes to establish Article III standing in these cases is far from crystal clear. And this is notwithstanding the fact that the Supreme Court has attempted to clarify standing in privacy cases twice in the past decade. Unfortunately, while the Supreme Court was given another opportunity to clarify Article III standing in privacy cases in Wakefield v. ViSalus, the Court declined to do so this week.

First, in Spokeo, Inc. v. Robins, in 2016, the Supreme Court considered standing in a case involving a violation of the FCRA. While the Court acknowledged that Congress sought to curb the dissemination of false information by adopting the procedures of the FCRA, Robins could not satisfy the demands of Article III by alleging a bare procedural violation—i.e., the dissemination of an incorrect zip code. The Court called the FCRA violation that Robins “suffered” (dissemination of a false zip code) a “procedural violation,” and vacated the Ninth Circuit’s decision on the ground that, while the Ninth Circuit correctly looked at whether Robins had suffered a particularized injury, it failed to separately analyze whether that injury was “concrete.”

Second, in TransUnion LLC v. Ramirez, in 2021, the Supreme Court considered standing in another case involving a violation of the FCRA. The majority found that only some of the class members had shown a physical, monetary, or intangible harm (such as reputational harm) sufficient to support Article III standing. The remaining class members had not shown concrete harm: even though TransUnion maintained false information about them, the information was never sent to creditors, so no “concrete” harm resulted from it. The Court emphasized that every class member must have Article III standing in order to recover individual damages. It further noted that, although Congress’ views on what constitutes a concrete injury can be instructive, Article III is not satisfied merely because Congress created a statutory cause of action.

Since Spokeo and TransUnion, it has remained unclear – and a subject of circuit splits – which violations of privacy statutes constitute “intangible harms” (sufficient to confer Article III standing) versus mere “procedural harms” (insufficient to confer Article III standing). For example, there is a circuit split regarding whether certain TCPA violations result in a “concrete” harm sufficient to confer Article III standing. In the case of text messages, the 11th Circuit has held that the receipt of a single unsolicited text message is not sufficient to confer standing (see Salcedo v. Hanna, 936 F.3d 1162 (11th Cir. 2019)), whereas the 5th Circuit has held that it is (see Cranor v. 5 Star Nutrition, No. 19-51173). The issue presented in Wakefield v. ViSalus, which the Supreme Court declined to hear this week, would have given the Court an opportunity to clarify when a statutory violation that does not result in physical or monetary harm amounts to an “intangible harm” sufficient to confer Article III standing, as opposed to a mere “procedural harm.” In ViSalus, individuals had received robocalls, allegedly without consent, marketing ViSalus’s weight-loss shake mix. For at least some of those individuals, however, the lack of consent is arguably a procedural question: they had voluntarily provided phone numbers and some consent to receive marketing and promotional communications; they simply had not provided prior express written consent, which the regulations began requiring in 2013.

What is the difference between the harms in TransUnion (where false information was collected but not yet disseminated, and the FCRA law is aimed at promoting accurate and fair information) and the harms in ViSalus (where the individuals had provided contact information and consented to marketing communications but the regulations required prior express written consent, and the TCPA law is aimed at protecting consumers from invasive telemarketing practices)?

The Federal Trade Commission will have its eye on privacy and data security enforcement in 2023.

In August, the agency announced that it is exploring ways to crack down on lax data security practices. In the announcement, the FTC explained that it was “concerned that many companies do not sufficiently or consistently invest in securing the data they collect from hackers and data thieves.”

These concerns are reflected in some of the FTC’s recent privacy enforcement actions. This article explores two significant FTC privacy actions of 2022 and provides three key tips to avoid similar proceedings in 2023.

Recent FTC Enforcement Actions

Earlier in 2022, Chegg Inc., an educational technology company, faced enforcement action from the FTC. Chegg is an online platform that provides customers with education-related services like homework help, tutoring and textbook rentals.

According to the FTC, Chegg failed to uphold its security promises to “take commercially reasonable security measures to protect the Personal Information submitted to [Chegg].”

In its complaint, the FTC explained that Chegg’s lax cybersecurity practices led to four data breaches resulting in the exposure of employees’ financial and medical information and the personal information of 40 million customers.[1]

The FTC’s complaint points out that three of the four data breaches suffered by Chegg involved phishing attacks targeted at Chegg’s employees. The other data breach occurred when a former Chegg contractor shared login information for one of Chegg’s cloud databases containing the personal information of customers.

Chegg uses a third-party cloud service provided by Amazon Web Services Inc., the cloud computing division of Amazon.com Inc., to store customer and employee data. The information stored by Chegg on AWS includes information related to its customers’ religion, ethnicity, date of birth and income.

According to the complaint, Chegg allowed employees and third-party contractors to access these databases using credentials that provided full access to the information and administrative privileges.

Moreover, Chegg stored personal data in plain text rather than encrypting it. The FTC’s complaint also explains that Chegg hashed passwords using outdated cryptographic hash functions with known vulnerabilities.
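For technically minded readers, the difference matters in practice. The following is a minimal Python sketch, using only the standard library, of the kind of salted, deliberately slow password hashing that regulators expect in place of outdated hash functions; the iteration count here is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

# Iteration count is illustrative; production deployments should follow
# current guidance (e.g., OWASP) for the chosen key-derivation function.
ITERATIONS = 100_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a per-user random salt using PBKDF2-HMAC-SHA256.

    Unlike fast or outdated hash functions, a slow, salted key-derivation
    function makes bulk offline cracking of a leaked database far costlier.
    """
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)
```

Storing only the salt and the derived digest, never the password itself, means a database exposure of the kind alleged here yields no directly usable credentials.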

With Chegg’s recent data breaches in mind, the FTC’s complaint highlighted these inadequacies in Chegg’s data security practices:

  • Chegg failed to consistently implement basic security measures such as encryption and multifactor authentication;
  • Chegg failed to monitor company systems for security threats;
  • Chegg stored information insecurely; and
  • Chegg did not develop adequate security policies and training.

The FTC’s order requires that Chegg document and limit its data collection practices. The FTC also requires Chegg to allow customers access to the data collected on them and to abide by its customers’ requests to delete such data.

The order further requires Chegg to implement multifactor authentication, or another suitable authentication method, to protect customer and employee accounts.

In another enforcement action in 2022, the FTC pursued Drizly LLC, an online platform that allows customers to order beer, wine and liquor for delivery.

Notably, the FTC also acted against Drizly’s CEO, James Cory Rellas, in his personal capacity. Similar to Chegg, Drizly hosts its software on AWS.

As a result, Drizly’s customer data, including passwords, email addresses, postal addresses, phone numbers, device identifiers and geolocation information, was stored on AWS.

The 2022 complaint alleges that in 2018 Drizly and Rellas learned of problems with the company’s data security procedures after a security incident in which a Drizly employee posted the company’s AWS login information on GitHub.[2]

In its complaint, the FTC states that the 2018 incident put Drizly “on notice of the potential dangers of exposing AWS credentials” and that the company “should have taken appropriate steps to improve GitHub security.”

But Drizly failed to address the issues with its security procedures. As a result, in 2020, a hacker gained access to Drizly’s GitHub login credentials, hacked into the company’s database and acquired customer information.
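One practical safeguard against this failure mode is scanning code and commits for credentials before they reach a repository. Below is a minimal, hypothetical sketch in Python (the two patterns shown are illustrative; production secret scanners ship far larger rule sets):

```python
import re

# Illustrative patterns only: an AWS access key ID prefix and a PEM
# private-key header. Real scanners cover hundreds of credential formats.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns found in `text`."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

Wired into a pre-commit hook or CI check, a scanner like this can catch an accidental credential paste before it ever becomes public.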

The FTC’s complaint also alleged that Rellas contributed to these failures by not hiring a senior executive responsible for the security of consumers’ personal information collected and maintained by Drizly.

The FTC’s complaint attributed the following data security failures to Drizly:

  • Drizly failed to develop and implement adequate written security standards and train employees in data security policies;
  • Drizly failed to store AWS and database login credentials securely and further failed to require employees to use complex passwords;
  • Drizly did not periodically test its existing security features; and
  • Drizly failed to monitor its network for attempts to transfer consumer data outside the network.

The FTC’s October order requires Drizly to destroy all unnecessary data, limit the future collection and retention of data, and implement a data security program. Drizly must also replace its authentication methods that currently use security questions with multifactor authentication.

Additionally, Drizly is now required to encrypt Social Security numbers on the company’s network. The order will follow Rellas to any future companies, demanding that he personally abide by these data security requirements in future endeavors.

Enforcement actions brought by the FTC this year provide guidelines to companies wishing to avoid FTC enforcement actions.

In fact, FTC Chair Lina M. Khan’s statement on the Drizly decision stated “[t]oday’s action will not only correct Drizly’s lax data security practices but should also put other market participants on notice.”

Thus, the following steps are suggested to safeguard a company from FTC enforcement action.

Educate Employees on Cybersecurity Measures

Companies should emphasize data security education for their employees and contractors. It is suggested that companies introduce new employees to their data security practices during the onboarding process and follow up with regularly scheduled training for existing employees.

One crucial area to educate employees on is how to safeguard company credentials.

Companies should implement policies and procedures to prevent the storage of unsecured access keys on any cloud-based services. Companies should also have a policy and guidelines requiring the use of strong passwords and multifactor authentication to secure corporate accounts and information.

Companies should implement basic security measures for employees’ and contractors’ access to sensitive user information. For example, companies should regularly monitor who accesses company repositories containing sensitive consumer information.

Companies might also consider only allowing authenticated and encrypted inbound connections from approved Internet Protocol addresses to access sensitive consumer data.
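As a sketch of what such an allowlist check can look like in code, the following Python fragment uses the standard library's ipaddress module; the approved ranges are hypothetical placeholders:

```python
import ipaddress

# Hypothetical approved ranges: an internal network and a documentation block.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.0/24"),  # TEST-NET-3 documentation range
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client address falls inside an approved range."""
    address = ipaddress.ip_address(client_ip)
    return any(address in network for network in ALLOWED_NETWORKS)
```

A check like this belongs alongside, not in place of, authentication and encryption of the connection itself.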

Performing regular audits can help companies ensure that each employee has access only to what is needed to perform that employee’s job functions.

In addition, companies should use audits to identify and terminate unneeded or abandoned employee accounts, such as accounts that are left open after an employee leaves a company or when an employee transfers to a different division/role.

Follow Through on Privacy and Data Security Promises

The FTC tends to pursue companies that fall short of the data security promises they make to consumers.

When a company promises consumers that it will adhere to reasonable data security practices, it is the company’s responsibility to implement basic security measures and checks to fulfill this promise. Those security measures might include encryption, multifactor authentication and complex passwords.
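Multifactor authentication is commonly implemented with time-based one-time passwords. As an illustration of the underlying mechanism (not a substitute for a vetted authentication library), the RFC 6238 TOTP algorithm can be sketched in a few lines of standard-library Python:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Against the published RFC 6238 test secret, this function reproduces the documented value 287082 at timestamp 59.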

It is also imperative that companies regularly review and update their data security practices. The FTC’s recent orders show that adhering to outdated data security measures amounts to having lax data security practices.

Individuals in charge of the company’s data security practices should stay abreast of developments in the field.

Respond to Data Security Incidents Quickly and Transparently

The FTC displays little leniency for companies and executives already on notice of data security issues within their company.

It is imperative that companies act promptly when data security events are discovered, and that companies be transparent with customers when a data security event occurs — regarding the occurrence of the event, measures the company took to prevent the event and measures the company is taking to rectify the event.

Companies should be vigilant in their efforts to discover data security events. Procedures and policies should be implemented to stay on top of data security events within the company’s networks and systems.

For example, adopting file integrity monitoring tools and tools for monitoring anomalous activity can assist with detecting these events.
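A file integrity monitor, at its core, compares cryptographic digests of files over time. The following is a toy Python sketch of the idea; real tools add scheduling, alerting, and tamper-resistant storage of the baselines:

```python
import hashlib
from pathlib import Path

def snapshot(root):
    """Record a SHA-256 digest for every file under `root`."""
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(Path(root).rglob("*"))
        if path.is_file()
    }

def diff_snapshots(before, after):
    """Report files added, removed, or modified between two snapshots."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "modified": sorted(p for p in set(before) & set(after) if before[p] != after[p]),
    }
```

Running a baseline snapshot and diffing against it on a schedule surfaces unexpected changes, one of the signals the FTC faulted Chegg and Drizly for not monitoring.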

Once implemented, these safeguards must be tested for vulnerabilities at least once a year, as suggested in the FTC’s orders against Drizly and Chegg.


The FTC’s prior enforcement actions serve as a cautionary tale for companies seeking to avoid similar enforcement actions from the agency.

Engaging in efforts to educate employees on data security practices, following through on data security promises, and responding to data security incidents properly can help companies reduce the likelihood of being subject to these proceedings.



This article originally appeared on Law360. Read more at:

The average cost of a data breach is on the rise.

According to the 2022 ForgeRock Consumer Identity Breach Report, the average cost in 2021 of recovering from a data breach in the U.S. was $9.5 million — an increase of 16% from the previous year.

Lawsuits and regulatory fines are a significant factor contributing to the growing cost. This year, several notable class action settlements have been announced, including T-Mobile for over $350 million, the U.S. Office of Personnel Management for $63 million and Ambry Genetics Corp. for over $12.25 million.

This article looks at the alleged security failures in recent data breach litigations and proposes steps companies may consider to help reduce the legal risk of a data breach.

Recent Examples

In 2021, T-Mobile suffered a data breach that compromised personally identifiable information, or PII, for more than 54 million current, former or prospective customers.

According to the complaint, John Erin Binns accessed the data through a misconfigured gateway GPRS support node. Binns was then able to gain access to the production servers, which included a cache of stored credentials that allowed him to access more than 100 servers.

Binns was able to use the stolen credentials to break into T-Mobile’s internal network. According to the complaint, T-Mobile failed to fully comply with industry-standard cybersecurity practices, including proper firewall configuration, network segmentation, secure credential storage, rate limiting, user-activity monitoring, data-loss prevention, and intrusion detection and prevention.

After learning about the breach, T-Mobile publicly announced it and sent notices via brief text messages. Allegedly, T-Mobile’s text messages explicitly told some customers that their Social Security number had not been compromised.

But, by contrast, T-Mobile’s messages failed to inform customers whose Social Security number had been compromised of this fact.

As part of settlement of the class action, T-Mobile agreed to pay $350 million to customers and to boost its data security spending by $150 million over the next two years. T-Mobile also reached a $2.5 million multistate settlement with 40 attorneys general.

From 2013 through 2014, a cyberattack on the Office of Personnel Management resulted in data breaches affecting more than 21 million people, reportedly among the largest thefts of personal data from the U.S. government in history.

The Office of Personnel Management allegedly failed to comply with various Federal Information Security Modernization Act requirements, to adequately patch and update software systems, to establish a centralized management structure for information security, to encrypt data at rest and in transit, and to investigate outbound network traffic that did not conform to the domain name system protocol.

The Office of Personnel Management agreed to pay a $63 million settlement with current, former and prospective government workers affected by the breach.

In January 2020, the systems of Ambry Genetics, a state-of-the-art genetic testing laboratory, were hacked, which exposed PII and protected health information of its patients.

According to the complaint, Ambry Genetics failed to take standard and reasonably available steps to prevent the data breach, including failing to encrypt information and properly train employees, failing to monitor and timely detect the data breach, and failing to provide patients with prompt and accurate notice of the data breach.

Ambry Genetics agreed to settle the class action litigation for $12.25 million plus three years of free credit monitoring and identity theft insurance services to the proposed class.

Settlement participants can also submit a claim for up to $10,000 in reimbursement for out-of-pocket costs traceable to the data security breach and submit a claim for up to 10 hours of documented time dealing with breach issues at $30 per hour.

These key data breach litigations highlight the risks of insufficient security measures and insufficient notice to affected customers in the event of a breach. To help reduce the legal risk, we suggest the following.

Limit the scope of data collection and retention to only what is necessary.

Companies should analyze business practices to determine what PII is collected, the purpose of the collected PII and how long that PII needs to be retained.

The risk and liability of a data breach can be limited by restricting collected PII to only what is necessary and discarding that data once it is no longer necessary. Document the collected data to ensure it is periodically reevaluated and discarded at the appropriate time.
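One lightweight way to operationalize that periodic reevaluation is to encode each data category's maximum retention period and check records against it. Here is a hypothetical Python sketch, where the categories and periods are placeholders rather than recommendations:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: data category -> maximum retention period.
RETENTION_PERIODS = {
    "marketing_contact": timedelta(days=365),
    "support_ticket": timedelta(days=730),
}

def is_due_for_deletion(category, collected_at, now=None):
    """Return True if a record has outlived its category's retention period."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > RETENTION_PERIODS[category]
```

A scheduled job applying a check like this, followed by documented deletion, gives a company concrete evidence that its stated retention purposes are being honored.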

Implement reasonable industry-standard security measures.

Reasonable, basic security measures generally stem from industry standards and practices, regulations and guidance, and federal and state laws.

As some examples, recent data breach litigations highlight the following as reasonable, expected security measures:

  • Encrypting sensitive data;
  • Implementing multifactor authentication;
  • Patching and updating software systems;
  • Securing cached information and login credentials;
  • Monitoring the network for threats; and
  • Responding to security incidents.

Implement a comprehensive security program and team with oversight and input from company leadership.

Companies should build a security team that is responsible for setting security policies and procedures, documenting and managing the collected data, assessing the risk of a data breach, applying security controls, training employees on data security awareness and policies, monitoring for potential data breaches, and auditing the effectiveness of the security program.

The security team should have support from leadership and typically includes an interdisciplinary team of stakeholders across a business, including the information technology department that is well-versed in computer technology and data security, legal to monitor and ensure compliance with data protection laws and mitigate legal risk, and a lead privacy or data protection authority — e.g., a chief data protection officer.

The team should develop and be prepared to follow a strategy to address a suspected data breach or security incident, including fixing vulnerabilities that may have caused a breach, preventing additional data loss, fixing vulnerabilities the breach may have caused and notifying appropriate parties.

The team should ensure the response strategy is up-to-date with state and federal laws.

Be accurate in public disclosures and notices.

All 50 states, the District of Columbia, Puerto Rico and the Virgin Islands have enacted legislation requiring notification of security breaches involving PII.

The notice generally should include how the breach happened, what information was taken, how the thieves have used the information, if known, what actions the business has taken to remedy the situation, what actions the business is taking to protect individuals — e.g., offering free credit monitoring services — and how to reach the relevant contacts in the business.

Failing to accurately report the breach — for example, failing to accurately identify what data was compromised — to customers could result in liability for the company as well as personal liability for senior employees and executives responsible for responding to the data breach.


Taking these preventative measures to secure PII, maintain compliance with data protection guidelines and laws, and develop a plan to address and respond to a suspected breach can help businesses to reduce the likelihood of potential civil liability.

This article originally appeared on Law360. Read more at: