While Europe is levying hefty fines against violators of the EU General Data Protection Regulation (GDPR) (a tracker of recent fines is available at https://www.enforcementtracker.com/), the United States Supreme Court heard oral arguments last month on whether the FTC – the chief federal agency on privacy policy and enforcement since the 1970s – lacks the authority to demand monetary relief.

The oral arguments in AMG Capital Management v. FTC focused on the question of whether Section 13(b) of the Federal Trade Commission Act, by authorizing “injunction[s],” also authorizes the FTC to demand monetary relief such as restitution. If the Court finds that the FTC is not authorized to demand monetary relief, the holding would affect all cases before the FTC, which has broad authority to enforce both consumer protection and antitrust laws under Section 5 of the FTC Act (providing that “unfair or deceptive acts or practices in or affecting commerce … are … declared unlawful”).

More specifically, the issue on appeal concerns Section 13(b) of the FTC Act, codified at 15 U.S.C. § 53(b).  Section 13(b) permits the FTC to seek, and federal courts to issue, “a permanent injunction” to enjoin acts or practices that may violate the FTC Act. In the decades since the provision was enacted in 1973, courts have broadly construed the term “injunction” to also include other equitable relief, such as monetary relief in the form of restitution or the disgorgement of ill-gotten gains. However, in recent years, several courts have questioned this statutory interpretation, relying instead on a more modern, strictly textual approach.

In 2019, for instance, in FTC v. Credit Bureau Center, the Seventh Circuit refused to read an implied remedy of restitution into the text of the statute. Similarly, in September 2020, in FTC v. Abbvie et al., the Third Circuit held that the FTC was not authorized to seek disgorgement as a remedy under Section 13(b). This turning of the tide set the stage for a circuit split on the issue, which ripened for review last summer when the Supreme Court granted certiorari to consider the question as presented in the instant case.[1]

At oral argument, counsel for the FTC argued that the language of the statute should be interpreted in the context in which it was drafted, using what Chief Justice Roberts called the “free-wheeling” approach – not confined to the specific language, and looking more to the drafters’ intent. This understanding, the FTC argued, has been applied by courts across nearly 50 years of equity jurisprudence finding that the FTC has the authority to order the return of funds from an unjustly enriched transgressor. In particular, counsel highlighted the Supreme Court cases Porter v. Warner Holding Co. and Mitchell v. Robert DeMario Jewelry, Inc., both of which applied this principle to analogously worded statutes and supported a more expansive view of agencies’ equitable authority.

Yet, as Justice Kavanaugh recognized, the problem with the FTC’s argument is the text of the statute itself: there is simply no mention of any additional equitable relief available to the FTC. Counsel for AMG expanded on this argument, explaining that the best way to determine Congress’s intent at the time is “by looking at the words on the page.” AMG and several Justices also noted that Sections 5(l) and 19 of the FTC Act, drafted around the same time as Section 13(b), expressly provide for monetary relief, leading a reader to believe that the omission from Section 13(b) was in fact intentional. And in response to Justice Breyer’s concerns about overturning years of precedent, AMG countered that longstanding error is still error, and that the Supreme Court has a duty to correct it now that the issue is before the Court.

AMG’s position was further bolstered by constitutional concerns about due process and notice, should the FTC be permitted to seek monetary remedies for a first instance of wrongdoing. On the other hand, several Justices raised the concern that a person who knew and understood that his or her conduct was deceptive could nonetheless keep the ill-gotten gains obtained from a fraudulent scheme in violation of the law.

In the end, as Justice Breyer joked, “Blue brief, I think you’re right. Red brief, I think you are right. They can’t both be right.” Despite reasonable and plausible arguments on both sides, the Justices will have to make a decision, and that decision comes with high stakes. An adverse decision from the Supreme Court would significantly limit the FTC’s enforcement authority and would very likely affect which avenues the agency pursues (state court, federal court, or its own administrative process) to protect consumers, based on the remedies available in each.

Based on the line of questioning from the Justices, the skepticism of the FTC’s statutory purpose arguments, and the Court’s recent shift towards strict textual interpretation, some commentators say the writing is on the wall. However, even if the Supreme Court holds that the FTC does not have explicit statutory authority to seek monetary relief under Section 13(b), all is not lost for the FTC. As Justice Kavanaugh suggested, “Why isn’t the answer here for the Agency to seek this new authority from Congress for us to maintain a principle of separation of powers?” This may well be the best path forward for the FTC, an option that it began to explore late last year after the adverse decisions from the Third and Seventh Circuits. In a letter sent back in October, the Commission urged Congress to clarify that the Commission may “obtain monetary relief, including restitution and disgorgement” under Section 13(b) and ultimately “restore Section 13(b) to the way it has operated for four decades.”

We will provide an update on this case once the Supreme Court issues a decision.


[1] The underlying facts of AMG Capital Management v. FTC are fairly straightforward. Scott Tucker was the owner of AMG Capital Management, a provider of high-interest, short-term payday loans. In April 2012, the FTC filed suit against Tucker, alleging that Tucker’s loan enforcement practices were harsher than the terms consumers had actually agreed to in the loan notes. The District of Nevada found that Tucker was engaging in unfair or deceptive trade practices, in violation of Section 5 of the FTC Act, and ordered him to pay $1.27 billion in equitable relief to the FTC to ultimately be distributed to the victims. Tucker appealed, and the Ninth Circuit affirmed, noting that its precedent squarely holds that Section 13 of the FTC Act “empowers district courts to grant any ancillary relief necessary to accomplish complete justice.”

A recent article from CNN reported on SpaceX and Amazon sparring over their competing satellite-based internet businesses. The article reports that at the center of the dispute is “a recent attempt by SpaceX to modify its license for Starlink, a massive constellation of internet satellites, of which SpaceX has already launched more than 900.”  SpaceX reportedly wants to put a few thousand of its satellites at a lower altitude than previously planned or authorized, which Amazon alleges would put them in the way of its proposed constellation of internet satellites, called Project Kuiper, and thus increase the risk of a collision in space and increase radio interference for customers.  Amazon argues that it designed its constellation (which has an FCC license, but no launches yet) around the SpaceX constellation, and that SpaceX now wants to change the design of its system.  SpaceX has explained that its proposed change in altitude minimizes the risk of collision.  As the CNN article reports: “Putting satellites into lower orbits is generally considered a best practice because, if a satellite were to malfunction, the Earth’s gravity could drag it out of orbit – and away from other satellites – more quickly.”

While the dispute between Amazon and SpaceX is interesting in its own right, it raises further questions about whether our current privacy and intellectual property regimes are ready for what lies ahead.  In 2019, my partner, Marty Zoltick, and I wrote a chapter on this as it pertains to privacy and data protection laws – “The Application of Data Protection Laws in (Outer) Space.” Among the topics addressed in our chapter are: (i) what is outer space, and where do the laws of nation states end; (ii) what laws and treaties apply in outer space; (iii) what are the shortcomings of existing data protection regulations; and (iv) what new international laws, rules, and/or regulations are needed to more clearly establish which data protection laws apply when personal data is processed in air and space.

Website operators can consider a host of potential legal claims against entities that scrape their sites’ content without authorization, such as breach of a well-crafted terms of service agreement, copyright infringement, trespass, conversion, common law misappropriation, unfair competition, violations of the Computer Fraud and Abuse Act, misappropriation of trade secrets, and trademark infringement, among others.  Each type of claim has its limits, and multiple claims may intersect or overlap in significant ways, particularly when it comes to preemption or remedies.  Accordingly, the nature and context of both the unauthorized web scraping activities and the scraped content should be carefully evaluated to determine an appropriate response.

For example, a recent complaint filed by Southwest against Kiwi illustrates how a data scrape may lead to potential violations of the Lanham Act where the material scraped includes or is used with protected logos and branding.  In its complaint, Southwest alleges that Kiwi scraped its airline fares, and displays Southwest’s protected “Heart” mark in conjunction with promoting and re-selling Southwest’s fares on Kiwi’s online travel agency site.  Southwest alleges that Kiwi is using its Heart mark in a manner that is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection or association of Kiwi with Southwest, or as to the origin, sponsorship or approval of Kiwi’s goods and services by Southwest in violation of Section 32 of the Lanham Act, 15 U.S.C. § 1114.  Southwest has also alleged claims of false designation of origin and trademark dilution under the Lanham Act.

Southwest has also asserted claims of breach of its website Terms & Conditions, violation of the Computer Fraud and Abuse Act, violation of Texas Penal Code § 33.02 (Breach of Computer Security), and common law unjust enrichment.  The case is Southwest Airlines Co. v. Kiwi.com, Inc. et al., 3:21-cv-00098, pending in the Northern District of Texas.

On Monday, the Federal Trade Commission (FTC) announced a settlement with Everalbum, Inc., the California-based developer of a photo app called “Ever,” with regard to allegations that the company deceived consumers about the use of its facial recognition technology and its data retention practices.

The Ever app allowed users to store and organize photos and videos uploaded from their mobile devices, computers, and social media accounts. In February 2017, the app launched a new “Friends” feature that used facial recognition technology, which allowed users to “tag” individuals appearing in the photos and group together similarly tagged photos. However, although Everalbum represented that it would not apply this facial recognition technology without users’ express consent, the FTC found that the feature was automatically activated for most Ever app users and could not be turned off. Only those users residing in Illinois, Texas, Washington, and the EU – currently the only jurisdictions with laws related to biometric identifiers – could choose whether to turn on the facial recognition feature.

Moreover, unbeknownst to users, the facial images extracted from the app were combined with facial images obtained from publicly available sources to create datasets used to develop facial recognition services sold to Paravision, a company specializing in security and AI technology. The FTC also alleged that, while Everalbum had promised to delete the stored photos and videos of users who had deactivated their accounts, it failed to do so and instead retained the content indefinitely.

The settlement requires Everalbum (which shut down the Ever app in August 2020) to: (i) delete the photos and videos of deactivated accounts; (ii) destroy any facial recognition data, models, or algorithms derived from the users’ photos or videos; and (iii) obtain affirmative consent for any use of the biometric data collected from its facial recognition technology. Everalbum is also prohibited from misrepresenting how it collects, uses, discloses, maintains, or deletes personal information. In response to the settlement, Paravision has stated that it is committed to utilizing its facial recognition services in an ethical manner and that its most recent model does not use any of the Ever app’s user data.

Commissioner Rohit Chopra issued a statement along with the settlement, reiterating the notion that facial recognition technology is “fundamentally flawed and reinforces harmful biases” and highlighting the importance of policing its use. He also called the FTC’s inability to seek a monetary penalty for a first egregious offense “unfortunate” and made a plea for the FTC, the states, and regulators around the globe to enact laws restricting the use of facial recognition and biometric identifier technologies, and to vigorously pursue enforcement actions against transgressors.

Facebook, the parent company of WhatsApp, is reporting near-record-low revenue growth. Thus, presumably in an effort to monetize WhatsApp more heavily, WhatsApp recently announced changes to its privacy policy: as of February 8, 2021, all WhatsApp users (except those who live in Europe) must agree to share their data with Facebook. If users do not agree, WhatsApp will delete their accounts.

Since the announcement of the changes to WhatsApp’s privacy policy, there has been much public criticism. The public has also been speaking through its actions – with massive numbers of users changing messaging platforms over the last several days.

For example, after Turkish President Recep Tayyip Erdogan’s media office and the country’s defense ministry told journalists that they were quitting WhatsApp and moving to an encrypted messaging app called BiP, businesswire.com reported that 4.6 million new users had purportedly signed up for BiP. BiP is a Turkish platform, with all data stored, under strong encryption, in Turkcell’s data centers in Turkey. Atac Tansug, Executive Vice President of Digital Services and Solutions at Turkcell, has promised users that they will not be obligated to share their data with third parties.

Elon Musk has also issued a call for users to switch from WhatsApp. Specifically, Musk has urged people via Twitter to switch to Signal. Signal is an encrypted app that lets users send messages and make calls via the Internet. Signal’s marketing message focuses on privacy. According to gadgets.ndtv.com, Signal is open source and its code is peer-reviewed, which means its privacy and security are regularly checked by independent experts.

Meanwhile, Microsoft is urging WhatsApp users to change to Microsoft’s messaging platform, Skype. The Skype Twitter account tweeted: “Skype respects your privacy. We are committed to keeping your personal data private and do not sell to 3rd parties.” The tweet also included a URL linking to Microsoft’s privacy policy statement.

The changes to WhatsApp’s privacy policy are a good example of:

(1) how companies try to monetize data available to them;

(2) how changes to a company’s privacy policy can drive users away;

(3) how Europe’s data privacy laws resulted in different treatment for those living in Europe; and

(4) how companies can compete (and will likely increasingly compete) based on their privacy policies.

Just before the Christmas holidays, the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued proposed rulemaking entitled “Requirements for Certain Transactions Involving Convertible Virtual Currency or Digital Assets.”  The proposed regulations seek to “require banks and money service businesses (“MSBs”) to submit reports, keep records, and verify the identity of customers in relation to transactions involving convertible virtual currency (“CVC”) or digital assets with legal tender status (“legal tender digital assets” or “LTDA”) held in unhosted wallets…”  The proposed rulemaking is set to be adopted under the Bank Secrecy Act (BSA).

FinCEN justified its proposal on national security grounds – i.e., the national security threat posed by bad actors using CVCs to, inter alia, “facilitate international terrorist financing, weapons proliferation, sanctions evasion, and transactional money laundering.”  Thus, the question arising out of the proposal is the same one that often arises – indeed, it’s the same question that came out of the Schrems II decision that led to the invalidation of Privacy Shield last year: What is the proper balance of national security vs. personal privacy?

Specifically, in the case of FinCEN’s recent proposal, banks and MSBs would be required to:

  1. file a report with FinCEN containing certain information related to a customer’s CVC or LTDA transaction and counterparty (including name and physical address), and to verify the identity of their customer, if a counterparty to the transaction is using an unhosted or otherwise covered wallet and the transaction is greater than $10,000 (or the transaction is one of multiple CVC transactions involving such counterparty wallets and the customer flowing through the bank or MSB within a 24-hour period that aggregate to value in or value out of greater than $10,000); and
  2. keep records of a customer’s CVC or LTDA transaction and counterparty, including verifying the identity of their customer, if a counterparty is using an unhosted or otherwise covered wallet and the transaction is greater than $3,000.
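As a rough illustration, the two proposed dollar triggers above can be sketched in a few lines of Python. This is a simplified sketch with hypothetical function and variable names; it ignores the proposal’s many definitional details (e.g., what counts as a covered wallet or a covered transaction) and models only the threshold arithmetic.

```python
REPORTING_THRESHOLD = 10_000      # proposed FinCEN reporting trigger (USD)
RECORDKEEPING_THRESHOLD = 3_000   # proposed recordkeeping/ID-verification trigger (USD)

def obligations(amount, prior_24h_amounts=()):
    """Return which proposed obligations a bank or MSB would face for a
    CVC/LTDA transaction with an unhosted (or otherwise covered) wallet
    counterparty.

    amount            -- this transaction's USD value
    prior_24h_amounts -- the customer's other covered transaction amounts in
                         the trailing 24-hour window, which aggregate toward
                         the $10,000 reporting threshold
    """
    aggregate = amount + sum(prior_24h_amounts)
    return {
        "report_to_fincen": aggregate > REPORTING_THRESHOLD,
        "keep_records_and_verify_id": amount > RECORDKEEPING_THRESHOLD,
    }
```

Note that, under this reading, a single $4,000 transaction would trigger only recordkeeping, but the same transaction following $7,000 of covered activity in the prior 24 hours would also trigger a report.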

The proposed rulemaking was open for public comment for only 15 days (the standard public comment period for these types of policies is 60 days), until January 4, 2021.  (Note: the Electronic Frontier Foundation (EFF) and Coinbase both criticized the limited timeframe for public comments, given that the holidays fell during the 15-day period.)

Several of the public comments on the proposal were focused on privacy-related concerns.  Even though the proposal sought to make the know your customer (KYC) rules for traditional banking institutions equally applicable to cryptocurrency, commenters argued that the promises of cryptocurrency (e.g., privacy and self-sovereignty) and the technological nature of cryptocurrency (e.g., the public ledger for blockchain-based currencies like Bitcoin) introduced new concerns.

For example, the EFF noted that some cryptocurrencies like Bitcoin keep a public record of all transactions.  Thus, if the name of a user connected with a particular Bitcoin address is known, “the government may have access to a massive amount of data beyond just what the regulation purports to cover.”

Jack Dorsey, the CEO of Twitter and Square, also submitted comments.  Dorsey’s major complaint was that the proposed rules would create “unnecessary friction” between cryptocurrency users and financial institutions, which could lead to “perverse incentives.”  “To put it plainly – were the [regulations] to be implemented as written, Square would be required to collect unreliable data about people who have not opted into our service or signed up as our customers.”

Of course, the proposed rulemaking will not be the end of action in cryptocurrency regulation.  The recently passed “National Defense Authorization Act for Fiscal Year 2021” (H.R.6395) contains additional anti-money laundering tools that may further complicate cryptocurrency procedures in the coming months.

Rothwell Figg partner Christopher Ott was interviewed by Frazer Rice on his “Wealth Actually” podcast, discussing cybersecurity for high net worth individuals. The discussion includes how one should view his or her own digital risks, how to protect oneself, and what to do when you have been compromised.

You can listen to the interview here: https://frazerrice.com/blog/ep-72-chris-ott/.

Pursuant to Section 6(b) of the FTC Act and a December 11, 2020, resolution of the Federal Trade Commission (FTC) entitled “Resolution Directing use of Compulsory Process to Collect Information Regarding Social Media and Video Streaming Service Providers’ Privacy Practices,” the FTC has now issued orders requiring Facebook, WhatsApp, Snap, Twitter, YouTube, ByteDance, Twitch, Reddit, and Discord to file information concerning the method and manner in which they collect, use, store, and disclose information about users and their devices.  The companies have 45 days to respond, and, at the moment, the study is not part of any specific law enforcement proceeding.

The FTC voted 4-1 in favor of beginning the study.  A Joint Statement of three of the supporting FTC Commissioners – Chopra, Slaughter, and Wilson – highlights that several “social media and video streaming companies have been able to exploit their user-surveillance capabilities to achieve such significant financial gains that they are now among the most profitable companies in the world.”  The Joint Statement further explains:

“One key aspect of the inquiry is ascertaining the full scale and scope of social media and video streaming companies’ data collection. The FTC wants to know how many users these companies have, how active the users are, what the companies know about them, how they got that information, and what steps the companies take to continue to engage users. The inquiry also asks how social media and video streaming companies process the data they collect and what kinds of inferences they are able to make about user attributes, interest, and interactions. The FTC wants to understand how business models influence what Americans hear and see, with whom they talk, and what information they share. The questions push to uncover how children and families are targeted and categorized. These questions also address whether we are being subjected to social engineering experiments. And the FTC wants to better understand the financial incentives of social media and video streaming services.”

Commissioner Phillips submitted a Dissenting Statement, primarily criticizing the breadth of the inquiry.

Although it is very early in this inquiry, we can glean certain discrete details about what this order does and does not mean. First, this order has little direct connection with the public debates about free speech and Section 230.  Rather, this order does signal that the primary consumer protection and privacy watchdog is going to dive into the so-called attention economy.  This attention economy drives the online advertising goldrush, and many of the “free” online services that we enjoy. However, the effects that this still-new economic model may have upon consumers remain poorly understood.

The FTC study may shine a light on the practice of offering free services in exchange for monetization of personal data.  The investigation may also lead to further activity at the federal level in areas such as child exploitation, misinformation, and violence with social media and video streaming services.  Sen. Edward Markey (D-Mass.) and Sen. Richard Blumenthal (D-Conn.) introduced a bill earlier this year that addresses the “non-transparent ways” digital media apps and platforms market to and collect information from users ages 16 and under.

IBM Security (the company’s cybersecurity division) has recently discovered a global phishing campaign targeting organizations associated with the critical coronavirus vaccine distribution chain. The division’s “X-Force,” created at the onset of the pandemic to monitor cyber threats against vaccine developers and distributors, released a report on Thursday with their analysis and recommendations.

Starting in September, and spanning at least six countries and multiple organizations, the malicious cyber operation appeared to specifically target the Cold Chain Equipment Optimization Platform (CCEOP), a $400M, five-year project launched in 2015 by Gavi, the Vaccine Alliance, UNICEF, and other partners. CCEOP was initiated to upgrade existing cold chain equipment in 56 countries by 2021, and has been specifically utilized this year to support vaccine response efforts for COVID-19. As vaccines are both light- and heat-sensitive, the newly developed technology has been absolutely vital in preserving vaccine quality and potency during storage and transportation.

The cyberattacks involved impersonation of a business executive from Haier Biomedical, a legitimate CCEOP supplier and purportedly the world’s only complete cold chain provider. Disguised as this business leader, the cyber-attacker sent phishing emails to organizations that support the CCEOP project efforts, including those in the energy sector (e.g., companies that develop solar-powered vaccine refrigerators and dry ice through petroleum production) and the IT sector (e.g., organizations involved in safeguarding pharmaceutical manufacturing and the biotechnical and electrical components of container transportation). The emails posed as requests for quotations for participation in a vaccine program but included malicious HTML attachments that prompted recipients to enter their personal credentials. IBM believes the purpose of harvesting the credentials is possibly to steal the cold chain technology or, more likely, to gain unauthorized access to critical and confidential information pertaining to COVID-19 vaccine distribution – including timelines, lists of recipients, and shipping routes – for a more nefarious objective. IBM has not yet identified the cyber-attacker(s), but because the attacks were sophisticated and precision-targeted, they are believed to be the work of a nation-state rather than a rogue criminal operation.

While it is currently unclear whether the phishing attacks were successful, governments are already sounding the alarm and alerting those organizations involved in developing, manufacturing, and distributing the COVID-19 vaccine to the threat of cyberattacks. The thought of an intentional interference with the vaccine’s cold chain distribution is chilling, and companies involved are strongly encouraged to review the recommendations and indicators of compromise set forth in IBM’s report, as well as the U.S. Cybersecurity & Infrastructure Security Agency’s (CISA) tips for avoiding phishing attacks in its CISA Insights: Enhance Email & Web Security.

While the nation was preoccupied with the presidential race, California voted “YES” on the California Privacy Rights Act (“CPRA”), which amends and expands on the California Consumer Privacy Act (“CCPA”). Our previous post sets forth the highlights of the balloted Proposition 24, but in a nutshell, the CPRA was designed to close a number of loopholes in the CCPA, strengthen consumer privacy protections, and establish the California Privacy Protection Agency as the primary enforcement authority.

Some dates to keep in mind – The CPRA amendments become effective on January 1, 2023, and will apply to personal information (“PI”) collected by covered businesses on or after January 1, 2022. The CCPA’s existing exemption for PI collected for employment purposes or in connection with business-to-business (B2B) communications was extended until January 1, 2023 (previously January 1, 2022). The California Privacy Protection Agency must be established by July 1, 2021 and must adopt final regulations by July 1, 2022. Enforcement of the CPRA amendments by the Agency will not begin until July 1, 2023.

While the key dates seem far off, companies should start thinking, sooner versus later, about what steps they will need to take to implement the new privacy law. And the good news is that you don’t have to start from scratch – compliance with the CPRA should simply build on current efforts towards compliance with the CCPA (which is still enforceable as currently implemented).

So what steps should your business take now?

  • Determine if your business is covered by the CPRA. The CPRA changes the thresholds for businesses that are subject to California’s privacy law. For instance, the CPRA defines covered businesses as those engaged in the buying, selling, or sharing of the PI of more than 100,000 California consumers/households. This is an increase from 50,000 under the CCPA, meaning that more small businesses are now excluded from the regulation. However, the specific addition of businesses that “share” PI clarifies and expands the scope of the law to cover businesses that provide PI to others, whether or not for monetary or other valuable consideration (in an effort to further regulate the use of PI for behavioral or targeted advertising purposes).
  • Revamp your data subject request systems.  The CPRA creates new rights for California consumers, including the right to correct PI, the right to limit the use of sensitive PI, and the right to opt out of the “sharing” of PI. Your business should thus implement changes on the system back-end to accept and act on such requests by consumers. Companies will also need to consider how to distinguish and separate out “sensitive personal information,” which includes SSN, driver’s license number, passport number, credit card info in combination with log-in credentials for a bank account, geolocation data, health and biometric data, and information about race, ethnicity, religion, and sexual orientation.
  • Review and revise business agreements. The CPRA places new contractual obligations on service providers, contractors, and third parties. Specifically, it requires that businesses sending PI to third parties enter into an agreement binding the recipient to the same level of privacy protections provided by the CPRA, granting the business rights to take reasonable steps to remediate unauthorized use, and requiring the recipient to notify the business if it can no longer comply.
  • Enhance data security and implement risk assessments. The CPRA requires businesses to take reasonable precautions to protect consumers’ PI from a security breach. In addition, under the California Privacy Protection Agency’s rulemaking, businesses that process PI that presents a significant risk to consumers’ privacy or security must (i) perform an annual cybersecurity audit, and (ii) submit to the Agency on a regular basis a risk assessment with respect to the potential risks related to their processing of PI.
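The volume-threshold change described in the first bullet above can be sketched as a simple check in Python. The names below are hypothetical, and this simplification models only the consumer/household-count prong; the statutes’ separate revenue-based prongs for covered businesses are not captured here.

```python
CCPA_VOLUME_THRESHOLD = 50_000    # California consumers/households under the CCPA
CPRA_VOLUME_THRESHOLD = 100_000   # California consumers/households under the CPRA

def meets_volume_threshold(ca_consumers_or_households, law="CPRA"):
    """Return whether a business's annual volume of California consumers'/
    households' PI bought, sold, or (under the CPRA) shared meets the named
    law's coverage threshold. Simplified sketch: the statutes' revenue-based
    coverage prongs are not modeled."""
    threshold = CPRA_VOLUME_THRESHOLD if law == "CPRA" else CCPA_VOLUME_THRESHOLD
    return ca_consumers_or_households > threshold
```

For example, a business handling the PI of 75,000 California consumers would clear the CCPA's volume prong but fall below the CPRA's doubled threshold.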

If you need any help navigating or implementing California’s evolving privacy law, please contact us at privacy@rothwellfigg.com.