On August 20, 2021, China passed its first general data protection law, the Personal Information Protection Law ("PIPL").  The law takes effect on November 1, 2021 (two months away), and it applies both to (1) in-country processing of personal information of natural persons and (2) out-of-country processing of personal information of natural persons who are in China, if such processing is (a) for the purpose of providing products or services to those people; (b) to analyze or evaluate the behavior of those people; or (c) under other circumstances prescribed by laws and administrative regulations.  Thus, the PIPL will become one more thing that companies have to consider in weighing questions of where to store which user data.

While much of the PIPL is similar to the GDPR – such as the definitions of "personal information" and "processing"; the requirement of a legal basis for processing personal information; and the grant of various individual rights with respect to personal information (e.g., the rights to portability, to correction and deletion, and to restriction or prohibition of processing) – there are differences, and companies to which the law applies should review their policies and practices carefully to ensure compliance.

Two ways in which the PIPL stands out from some other general data protection laws are its data localization requirement and its cross-border transfer requirements.

First, the law provides that critical information infrastructure ("CII") operators (such as operators of government systems, utilities, financial systems, and public health systems) and entities processing a large amount of personal information must store personal information within the territory of mainland China.  Of note, every company operating in China should consider conducting a self-assessment to determine whether it may be deemed a CII operator.  In order for such information to be transferred outside of China, the transfer must pass a government-administered security assessment.

Second, cross-border transfer of personal information is allowed (for processors other than CII operators and large-volume processors) if the processor meets one of the following: (i) it passes a security assessment organized by the Cyberspace Administration of China (CAC); (ii) it obtains a personal information protection certification from a specialized agency in accordance with CAC rules; or (iii) it enters into a contract with the overseas recipient based on a standard contract formulated by the CAC.  [Of note, despite the law going into effect in two months, it appears that no "standard contract" has been published yet.]

Penalties for violations of PIPL include, inter alia, an administrative fine of up to RMB 50 million or 5% of the processor’s turnover in the last year (it is unclear if this refers to local turnover or global turnover).

At this point you have probably heard about one of the many incidents in which an AI-enabled system discriminated against certain populations in settings such as healthcare, law enforcement, and hiring, among others. In response to this problem, the National Institute of Standards and Technology (NIST) recently proposed a strategy for identifying and managing bias in AI, with emphasis on biases that can lead to harmful societal outcomes.  The NIST authors summarize:

“[T]here are many reasons for potential public distrust of AI related to bias in systems. These include:

  • The use of datasets and/or practices that are inherently biased and historically contribute to negative impacts
  • Automation based on these biases placed in settings that can affect people’s lives, with little to no testing or gatekeeping
  • Deployment of technology that is either not fully tested, potentially oversold, or based on questionable or non-existent science causing harmful and biased outcomes.”

As a starting place, the NIST authors outline an approach for evaluating the presentation of bias in three stages modeled on the AI lifecycle: pre-design, design & development, and deployment.  In addition, NIST will host a variety of activities in 2021 and 2022 in each area of the core building blocks of trustworthy AI (accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security (resilience), and bias).  NIST is currently accepting public comment on the proposal until September 10, 2021.
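To make "identifying bias" a bit more concrete, below is a minimal, hypothetical sketch of one common kind of pre-deployment check: comparing a model's selection rates across demographic groups and flagging a large disparity.  This example is not drawn from the NIST proposal; the data, the group labels, and the "four-fifths" threshold (borrowed from employment-selection guidance) are illustrative assumptions only.

```python
# Illustrative only: a simple disparate-impact check on model outcomes.
# The data, group labels, and 0.8 ("four-fifths") threshold are assumptions
# for demonstration purposes; they are not taken from the NIST proposal.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical hiring-model decisions: (group, was_recommended)
    decisions = ([("A", True)] * 60 + [("A", False)] * 40
                 + [("B", True)] * 35 + [("B", False)] * 65)
    ratio, rates = disparate_impact_ratio(decisions)
    print(rates)             # {'A': 0.6, 'B': 0.35}
    print(round(ratio, 2))   # 0.58 -- below 0.8, so the disparity warrants review
```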

Notably, the proposal points out that "most Americans are unaware when they are interacting with AI enabled tech but feel there needs to be a 'higher ethical standard' than with other forms of technologies," which "mainly stems from the perceptions of fear of loss of control and privacy."  From a regulatory perspective, there currently is no federal data protection law in the US that broadly mirrors Europe's GDPR Art. 22 with respect to automated decision making – "the right to not be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."  But several U.S. jurisdictions have passed laws that more narrowly regulate AI applications with the potential to cause acute societal harms, such as the use of facial recognition technology in law enforcement or interviewing processes, and more regulation seems likely as (biased) AI-enabled technology continues to proliferate into more settings.

In Van Buren v. United States, the Supreme Court resolved a circuit split as to whether a provision of the Computer Fraud and Abuse Act (CFAA) applies only to those who obtain information to which their computer access does not extend, or more broadly also encompasses those who misuse access that they otherwise have. By way of background, the CFAA subjects to criminal liability anyone who "intentionally accesses a computer without authorization or exceeds authorized access," and thereby obtains computer information. 18 U.S.C. 1030(a)(2). The term "exceeds authorized access" is defined to mean "to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter."  18 U.S.C. 1030(e)(6).

The case involved a police sergeant who used his patrol-car computer and his valid credentials to access a law enforcement database and obtain license plate records in exchange for money. The sergeant's use of the database violated the department's policy against using the database for non-law-enforcement purposes, including personal use. At trial, the Government told the jury that the sergeant violated the CFAA by accessing the database for non-law-enforcement purposes, i.e., by using a computer network in a way his job and department policy forbade. The jury convicted the sergeant, and the District Court sentenced him to 18 months in prison. The Eleventh Circuit affirmed, consistent with its precedent adopting the broader view of the CFAA.

The parties agreed that the sergeant accessed a computer with authorization when he used his valid credentials to log in to the law enforcement database, and that he obtained information when he acquired the license-plate records, but the dispute was whether the sergeant was “entitled so to obtain” the record. After analyzing the language of the statute and the policy behind the CFAA, the Court held that an individual “exceeds authorized access” under the CFAA when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him. Because the sergeant could use his credentials to obtain the license plate information, he did not exceed authorized access to the database under the terms of the CFAA.

In reaching its holding, the Court noted that if "the 'exceeds authorized access' clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals." Accordingly, with the narrowing of the CFAA, the decision is a good reminder to ensure that the policies and agreements, including terms of use, that govern access to sensitive electronic resources are both enforceable and drafted with terms sufficient to cover insider threats and to prohibit individuals with access to a resource from using it in a damaging manner.

On April 22, 2021, the Supreme Court issued a unanimous decision finding that the FTC lacks authority to pursue equitable monetary relief in federal court under Section 13(b) of the Federal Trade Commission Act (the "FTCA"). The result means that defendant Scott Tucker does not have to pay $1.27 billion in restitution and disgorgement, notwithstanding that his payday loan business was found to constitute an unfair and deceptive practice.  The result also means that onlookers everywhere are scratching their heads thinking "what now?"  If the FTC is stripped of its authority to pursue equitable monetary relief, will unfair and deceptive practices run rampant now that wrongdoers know the "worst case" scenario is an injunction (while they keep any ill-gotten profits earned in the interim)?

Not entirely and probably not for long.

First, the FTC was never the only enforcement mechanism for policing unfair competition and deceptive practices violations.  Thus, there remain other agencies and statutes available to pursue wrongdoers, and many of these allow for the pursuit of equitable monetary relief.  For example, there are other federal agencies that have jurisdiction in certain situations (e.g., the Consumer Financial Protection Bureau).  Also, state attorneys general and state UDAP laws often have broad jurisdiction and authority to pursue monetary relief.

Second, Sections 5 and 19 of the FTCA give district courts the authority to impose monetary penalties and award monetary relief where the FTC has issued cease and desist orders.  So, the FTC is not currently stripped of all authority to pursue equitable monetary relief in federal court – it just needs to issue a cease and desist order first.

Third, the FTC has been pressuring and continues to pressure Congress to amend Section 13(b) of the FTCA to broaden the scope of relief available.

Fourth, the FTC could promulgate more rules and strengthen its existing rules under its rulemaking process (Section 18 of the FTCA).  Last month, acting Chairwoman Slaughter announced the formation of a new, centralized rulemaking group in the General Counsel’s office.

In sum, the Supreme Court’s decision will undoubtedly have some effects on the policing of unfair and deceptive trade practices, but there are numerous processes in place to ensure that the system is not derailed, and it is likely that the FTC will have authority to pursue monetary relief in the future.

As part of its three-part series on the future of human-computer interaction (HCI), Facebook Reality Labs recently published a blog post describing a wrist-based wearable device that uses electromyography (EMG) to translate electrical motor nerve signals that travel through the wrist to the hand into digital commands that can be used to control the functions of a device.  Initially, the EMG wristband will be used to provide a “click,” which is an equivalent to tapping on a button, and will eventually progress to richer controls that can be used in Augmented Reality (AR) settings.  For example, in an AR application, users will be able to touch and move virtual user interfaces and objects, and control virtual objects at a distance like a superhero.  The wristband may further leverage haptic feedback to approximate certain sensations, such as pulling back the string of a bow in order to shoot an arrow in an AR environment.
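For readers curious what "translating motor nerve signals into a click" might look like computationally, here is a purely illustrative sketch: it rectifies a window of EMG samples, smooths the result into an amplitude envelope, and reports a "click" each time the envelope crosses a threshold.  Facebook Reality Labs has not published its algorithm; the sampling rate, window size, threshold, and simulated signal below are assumptions for demonstration only.

```python
# Illustrative sketch of EMG "click" detection via amplitude thresholding.
# This is NOT Facebook Reality Labs' method; all parameters are assumptions.
import numpy as np

WINDOW_SAMPLES = 50     # assumed 50 ms smoothing window at a 1 kHz sampling rate
CLICK_THRESHOLD = 0.3   # assumed normalized amplitude threshold

def detect_clicks(emg_signal):
    """Return sample indices where a 'click' gesture is detected."""
    rectified = np.abs(np.asarray(emg_signal, dtype=float))
    # Moving-average envelope of the rectified signal
    kernel = np.ones(WINDOW_SAMPLES) / WINDOW_SAMPLES
    envelope = np.convolve(rectified, kernel, mode="same")
    above = envelope > CLICK_THRESHOLD
    # A click fires on each rising edge of the thresholded envelope
    rising_edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return rising_edges.tolist()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    quiet = 0.02 * rng.standard_normal(500)   # rest
    burst = 0.6 * rng.standard_normal(100)    # simulated muscle activation
    signal = np.concatenate([quiet, burst, quiet])
    print(detect_clicks(signal))  # roughly one detection near sample 500
```

In a real system the simple threshold would likely be replaced by a learned classifier over richer EMG features, but the basic idea (sense a signal, extract an envelope or features, and map them to a discrete command) carries over.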

One general promise of neural interfaces is to give humans more control over machines.  When coupled with AR glasses and a dynamic artificial intelligence system that learns to interpret input, the EMG wristband has the potential to become part of a solution that brings users to the center of an AR experience and frees them from the confines of more traditional input devices like a mouse, keyboard, and screen.  The research team further identifies privacy, security, and safety as fundamental research questions, arguing that HCI researchers "must ask how we can help people make informed decisions about their AR interaction experience," i.e., "how do we enable people to create meaningful boundaries between themselves and their devices?"

For those of you wondering, the research team did confirm that the EMG interface is not “akin to mind reading”:

“Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and you choose to act on only some of them. When that happens, your brain sends signals to your hands and fingers telling them to move in specific ways in order to perform actions like typing and swiping. This is about decoding those signals at the wrist — the actions you’ve already decided to perform — and translating them into digital commands for your device. It’s a much faster way to act on the instructions that you already send to your device when you tap to select a song on your phone, click a mouse, or type on a keyboard today.”

Yesterday, Virginia passed the Virginia Consumer Data Protection Act (VCDPA), making it the second state (behind California, with its California Consumer Privacy Act (CCPA)) to enact a general consumer privacy law.  The VCDPA will take effect on January 1, 2023, the same day the California Privacy Rights Act (CPRA), an act that strengthens the CCPA, goes into effect.

The VCDPA applies to "persons" that conduct business in Virginia (or produce products and services that are targeted to Virginia residents) and that "control or process" the personal data of (1) at least 100,000 Virginia residents, or (2) at least 25,000 Virginia residents, if the entity derives over half of its gross revenue from the sale of personal data.  Nonprofit organizations and institutions of higher education are exempt.
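For illustration only, the two numeric thresholds can be expressed as a simple check like the one below.  This sketch ignores the "conducts business in Virginia" prong, the statutory exemptions, and any nuance in how residents are counted, and the function and parameter names are my own shorthand rather than statutory terms.

```python
# Simplified, illustrative VCDPA applicability check -- not legal advice.
# Exemptions (nonprofits, higher education, etc.) and the "conducts business
# in Virginia / targets Virginia residents" prong are omitted for brevity.

def vcdpa_thresholds_met(va_residents_processed: int,
                         revenue_share_from_data_sales: float) -> bool:
    """Return True if either numeric VCDPA threshold is met.

    va_residents_processed: number of Virginia residents whose personal
        data the entity controls or processes.
    revenue_share_from_data_sales: fraction (0.0-1.0) of gross revenue
        derived from the sale of personal data.
    """
    large_scale = va_residents_processed >= 100_000
    data_sales_heavy = (va_residents_processed >= 25_000
                        and revenue_share_from_data_sales > 0.5)
    return large_scale or data_sales_heavy

# Examples:
# vcdpa_thresholds_met(150_000, 0.0)  -> True   (volume threshold)
# vcdpa_thresholds_met(30_000, 0.6)   -> True   (25k residents plus >50% revenue from data sales)
# vcdpa_thresholds_met(30_000, 0.1)   -> False
```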

The VCDPA defines "personal data" broadly as any information that is "linked or reasonably linkable to an identified or identifiable natural person."  "Personal data" does not include publicly available information or de-identified data (which by definition cannot reasonably be linked to a person), although de-identified data remains subject to certain safeguards intended to limit the risk of re-identification.

The VCDPA stands out in that it is the first law of its kind in the U.S. to require controllers to seek opt-in "consent" from consumers for the processing of sensitive data, including health information, race, ethnicity, and precise geolocation data, and to mandate formal data protection assessments (similar to those required under the European Union's General Data Protection Regulation (GDPR)).  Before engaging in processing activities that involve sensitive data, targeted advertising, the sale of personal data, profiling, or any other activities that "present a heightened risk of harm to consumers," a controller must conduct a data protection assessment that weighs the overall benefits of the processing activity against the potential risks to the consumer (as mitigated by applicable safeguards).  Further, the attorney general may compel production of these assessments pursuant to a civil investigative demand, without court approval, and may evaluate them for compliance with the VCDPA.  The assessments are considered confidential and exempt from Virginia's Freedom of Information Act (FOIA), and disclosure to the attorney general does not waive any attorney-client privilege or work product protection with respect to an assessment or its contents.

The rest of the VCDPA provisions are more in-line with other consumer privacy laws.

The VCDPA provides consumers with data subject rights, including the right to confirm whether a controller is processing personal data, to access such personal data, to correct inaccuracies in the personal data, to delete personal data, to obtain a copy of the personal data in a portable and usable format, and to opt out of the processing of personal data for purposes of (i) targeted advertising, (ii) the sale of personal data, or (iii) profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

The VCDPA requires data controllers to, inter alia, limit the collection of personal data to what is adequate, relevant, and reasonably necessary; to refrain from processing personal data for purposes beyond those disclosed to the consumer; and to establish, implement, and maintain reasonable administrative, technical, and physical data security practices (with reasonableness tied to the volume and nature of the personal data at issue).

The law also requires controllers to provide consumers with a privacy notice that includes:

  • the categories of personal data processed by the controller;
  • the purpose for processing personal data;
  • how consumers may exercise their data subject rights;
  • the categories of personal data the controller shares with third parties, if any;
  • the categories of third parties, if any, with whom the controller shares personal data;
  • if the controller sells personal data or processes personal data for targeted advertising, a disclosure of such processing and the manner in which a consumer may opt out of such processing; and
  • information on how consumers may submit requests to exercise their data subject rights (which must take into account "the ways in which consumers normally interact with the controller, the need for secure and reliable communication of such requests, and the ability of the controller to authenticate the identity of the consumer making the request").

The VCDPA does not provide for a private consumer right of action (unlike the CCPA, which provides a private right of action for violations of the duty to implement and maintain reasonable security procedures).  Instead, the state's attorney general has exclusive authority to enforce the law.  Prior to initiating any action, the attorney general must give the controller or processor 30 days' written notice; if the controller or processor cures the noticed violation(s) and provides the attorney general with an express written statement that the violations have been cured and that no further violations will occur, no action for statutory damages will be initiated.  The VCDPA provides for statutory damages of up to $7,500 for "each violation," as well as injunctive relief and reasonable expenses incurred by the attorney general in investigating and preparing the case, including attorney fees.

All penalties collected for violations of the VCDPA will be paid into the newly-created “Consumer Privacy Fund,” which will be used “to support the work of the Office of the Attorney General to enforce the provisions of this chapter.”

In December 2020, Apple started requiring Apps to display mandatory labels that provide a graphic, easy-to-digest version of their privacy policies.  They are being called “privacy nutrition labels,” presumably a reference to the mandatory FDA-required “Nutrition Facts” labels that have appeared on food since 1990.  Below I offer ten thoughts related to these new labelling requirements.

  1. The idea for Apple's labels may not have been Apple's.  As reported by wired.com, in the early 2010s, academic researchers developed mobile app privacy label prototypes and tested them in 20 in-lab exercises and an online study of 366 participants.  See here.  They found that by "bringing information to the user when they are making the decision and by presenting it in a clearer fashion, we can assist users in making more privacy-protecting decisions."  For example, according to this Washington Post article, the "privacy nutrition label" for the Zoom video chat app says the app takes six kinds of data linked to your identity, while rival Cisco Webex Meetings says it collects no data beyond what's required for the app to run.  Just as the "Nutrition Facts" on cereal boxes may influence which cereal you buy, will "privacy nutrition labels" influence which Apps you download?
  2. While the aforementioned study shows that the new Apple "privacy nutrition labels" could be a game-changer for consumer purchasing behavior with respect to Apps, the real effects will likely depend on a number of factors, including how easy the labels are to understand, how consistently they present information across different products/Apps, where they are presented/how far down one has to scroll, visually how they are presented/whether they stand out to customers, the extent to which Apps "market" privacy and try to compete on privacy policies, and importantly, whether the presented information is accurate.
  3. On the topic of understanding the labels, it seems there is a long way for most consumers to go before they understand what to look for on a "privacy nutrition label."  But, most users would at least understand that the longer the list of data that the App uses, the more invasive it is with regard to their privacy.  Here is a great article on how to read and understand "privacy nutrition labels," including some of the key details that consumers may want to scan for.
  4. On the topic of presentation of the labels, my personal view is that Apple has a long way to go if it wants these labels to stand out.  When I first looked for one, I couldn't find it – I just scrolled right on past it.  Now that I know what I'm looking for, it's easier to spot.  It will be interesting to see the extent to which users are educated on looking for and using these labels.  It will also be interesting to see if (and how) Apple updates the look and feel of these labels in the future, or if an agency ever steps in and mandates similar labelling, as the FDA did with its "Nutrition Facts" labels.
  5. On the topic of "marketing" privacy and competing based on privacy promises, I think we are already there in some industries.  For example, see my prior blog post: "WhatsApp – An Example of How Companies Compete Based on Privacy."
  6. On the topic of accuracy, the Washington Post published this Consumer Tech Review: "I checked Apple's new privacy 'nutrition labels.'  Many were false."  The reviewer pointed out numerous cases in which the privacy labels allegedly misrepresented the data collection practices of the App (and offered tips on how to spot such discrepancies).  He concluded that "Apple's big privacy product is built on a shaky foundation: the honor system.  In tiny print on the detail page of each app label, Apple says, 'This information has not been verified by Apple.'"
  7. But is it Apple's obligation to verify the information?  By requiring that the information be disclosed on its "privacy nutrition labels," Apple is taking a first step.  Consumer protection laws and regulations exist to police such misrepresentations.  Of course, the chief federal agency on privacy policy enforcement (the Federal Trade Commission) may soon be ruled by the Supreme Court to lack the ability to demand monetary relief (see our post on this here).  Thus, to the extent an App is profiting from its misrepresented "privacy nutrition labels" – and a goal of enforcement would be to disgorge those ill-gotten gains – avenues of enforcement outside of the FTC may need to be explored.
  8. Does the fact that Apple's privacy nutrition labels may be wrong mean they should be disregarded altogether?  While it may be tempting to think so, I did a quick search on whether food nutrition labels are ever wrong.  This article reported that the law allows a margin of error of up to 20 percent between the stated value and the actual value of nutrients.  Wow.  Of course, this is different from a cereal claiming to have no fat when it is actually loaded with fat (which may be more analogous to what some companies are doing by misrepresenting their privacy policies on their Apple "privacy nutrition labels"), but it is still informative.  And what is perhaps more informative is the fact that, per the same article, a 2010 study in the Journal of Consumer Affairs purportedly reported that those who read nutrition labels (but did not exercise) were more likely to lose weight than those who did not read labels but did exercise – "[i]n other words, the awareness of a food's (approximate) nutritional content and portion size does appear to influence eating behaviors in a beneficial way."  I would imagine that the same may be true of those who read "privacy nutrition labels."  While the labels may not always be accurate, paying attention to them may influence your App downloading behaviors in a beneficial way (if you care about privacy).
  9. Now what about the name "privacy nutrition label"?  Is the word "nutrition" just a nod to the FDA's "Nutrition Facts," or is there a hidden implication that certain privacy policies are more or less "healthy"?  Here are some definitions of "nutrition" to help you ponder this:
    • gov: “Nutrition is about eating a healthy and balanced diet.  Food and drink provide the energy and nutrients you need to be healthy.  Understanding these nutrition terms may make it easier for you to make better food choices.”
    • Wikipedia: "Nutrition is the science that interprets the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism.  It includes ingestion, absorption, assimilation, biosynthesis, catabolism, and excretion."
    • Merriam-webster.com: Nutrition is "the act or process of nourishing or being nourished, specifically: the sum of the processes by which an animal or plant takes in and utilizes food substances."
    • World Health Organization: “Nutrition is a critical part of health and development.  Better nutrition is related to improved infant, child and maternal health, stronger immune systems, safer pregnancy and childbirth, lower risk of non-communicable diseases (such as diabetes and cardiovascular disease), and longevity.”
  10. Finally, what's the future of these labels?  Will competitors start requiring similar "privacy nutrition labels"?  Will websites start offering them as a "graphic version" of their longer privacy policies?  Will they start to appear at brick-and-mortar establishments on small displays?  While there are risks associated with additional statements of one's privacy policy (i.e., one more document to maintain, verify the accuracy of, etc.), there may be benefits as well as consumers start to pay more attention to companies' privacy policies, and companies increasingly compete on privacy promises.

While Europe is levying hefty fines against violators of the EU General Data Protection Regulation (GDPR) (here is a tracker of recent fines: https://www.enforcementtracker.com/), the United States Supreme Court heard oral arguments last month on whether the FTC – the chief federal agency on privacy policy and enforcement since the 1970s – lacks authority to demand monetary relief.

The oral arguments in AMG Capital Management v. FTC focused on the question of whether Section 13(b) of the Federal Trade Commission Act, by authorizing "injunction[s]," also authorizes the FTC to demand monetary relief such as restitution. A finding that the FTC is not authorized to demand monetary relief would affect all cases before the FTC, which has broad authority to enforce both consumer protection and antitrust laws under Section 5 of the FTC Act (providing that "unfair or deceptive acts or practices in or affecting commerce … are … declared unlawful").

More specifically, the issue on appeal concerns Section 13(b) of the FTC Act, codified at 15 U.S.C. § 53(b).  Section 13(b) permits the FTC to seek, and federal courts to issue, “a permanent injunction” to enjoin acts or practices that may violate the FTC Act. In the decades since the statute was enacted in 1973, courts have broadly construed the term “injunction” to also include other equitable relief, such as monetary relief in the form of restitution or the disgorgement of ill-gotten gains. However, in recent years, several courts have questioned this statutory interpretation, instead relying on the more modern, strictly textual approach.

In 2019, for instance, in FTC v. Credit Bureau Center, the Seventh Circuit refused to read an implied remedy of restitution into the text of the statute. Similarly, in September of 2020, in FTC v. AbbVie et al., the Third Circuit held that the FTC was not authorized to seek disgorgement as a remedy under Section 13(b). This turning of the tide set the stage for a circuit split on the issue, which ripened for review last summer when the Supreme Court granted certiorari to consider the question as presented in the instant case.[1]

At oral arguments, counsel for the FTC argued that the language of the statute should be interpreted in the context in which it was drafted, using what Chief Justice Roberts called the "free-wheeling" approach, not confined by the specific language and looking more to the drafters' intent. This understanding, the FTC argued, has been applied by courts in nearly 50 years of equity jurisprudence finding that the FTC has the authority to order the return of funds from an unjustly enriched transgressor. In particular, counsel highlighted two Supreme Court cases, Porter v. Warner Holding Co. and Mitchell v. Robert DeMario Jewelry, Inc., which both applied this principle to analogously worded statutes and endorsed a more expansive view of an agency's equitable authority.

Yet, as Justice Kavanaugh recognized, the problem with the FTC's argument is the text of the statute itself: there is simply no mention of any additional equitable relief available to the FTC. Counsel for AMG expanded on this argument, explaining that the best way to determine Congress's intent at the time is "by looking at the words on the page." AMG and several Justices also noted that Sections 5(l) and 19 of the FTC Act, drafted at the same time as Section 13(b), expressly provide for monetary relief, leading a reader to believe that the omission in Section 13(b) was in fact intentional. And in response to Justice Breyer's concerns about overturning years of precedent, AMG countered that longstanding error is still error, and the Supreme Court has a duty to correct such error now that the issue is before the Court.

AMG's position was further bolstered by concerns about constitutional issues of due process and notice, should the FTC be permitted to seek monetary remedies in the first instance of wrongdoing. On the other hand, several of the Justices raised concerns about allowing a person who knew and understood that his or her conduct was deceptive and unlawful to keep the ill-gotten gains obtained from fraudulent schemes.

In the end, as Justice Breyer joked, “Blue brief, I think you’re right. Red brief, I think you are right. They can’t both be right.” Despite very reasonable and plausible arguments on both sides, the Justices will have to make a decision. And that decision comes with high stakes. Specifically, an adverse decision from the Supreme Court will significantly limit the FTC’s enforcement authority and very likely impact which avenues it will take (via state court, federal court, or through the agency’s own power) to protect consumers based on the remedies available.

Based on the line of questioning from the Justices, the skepticism of the FTC’s statutory purpose arguments, and the Court’s recent shift towards strict textual interpretation, some commentators say the writing is on the wall. However, even if the Supreme Court holds that FTC does not have explicit statutory authority to seek monetary relief under Section 13(b), all is not lost for the FTC. As Justice Kavanaugh suggested, “Why isn’t the answer here for the Agency to seek this new authority from Congress for us to maintain a principle of separation of powers?” This may well be the best path forward for the FTC, an option that it began to explore late last year after the adverse decisions from the Third and Seventh Circuits. In a letter sent back in October, the Commission urged Congress to clarify that the Commission may “obtain monetary relief, including restitution and disgorgement” under Section 13(b) and ultimately “restore Section 13(b) to the way it has operated for four decades.”

We will provide an update on this case once the Supreme Court issues a decision.

 

[1] The underlying facts of AMG Capital Management v. FTC are fairly straightforward. Scott Tucker was the owner of AMG Capital Management, a provider of high-interest, short-term payday loans. In April 2012, the FTC filed suit against Tucker, alleging that Tucker’s loan enforcement practices were harsher than the terms consumers had actually agreed to in the loan notes. The District of Nevada found that Tucker was engaging in unfair or deceptive trade practices, in violation of Section 5 of the FTC Act, and ordered him to pay $1.27 billion in equitable relief to the FTC to ultimately be distributed to the victims. Tucker appealed, and the Ninth Circuit affirmed, noting that its precedent squarely holds that Section 13 of the FTC Act “empowers district courts to grant any ancillary relief necessary to accomplish complete justice.”

A recent article from CNN reported on SpaceX and Amazon sparring over their competing satellite-based internet businesses. The article reports that at the center of the dispute is "a recent attempt by SpaceX to modify its license for Starlink, a massive constellation of internet satellites, of which SpaceX has already launched more than 900."  SpaceX reportedly wants to put a few thousand of its satellites at a lower altitude than previously planned or authorized, which Amazon alleges would put them in the way of the constellation of internet satellites Amazon has proposed, called Project Kuiper, thereby increasing the risk of a collision in space and increasing radio interference for customers.  Amazon argues that it designed its constellation (which has an FCC license, but no launches yet) around the SpaceX constellation, and now SpaceX wants to change the design of its system.  SpaceX has explained that its proposed change in altitude minimizes the risk of collision.  As the CNN article reports: "Putting satellites into lower orbits is generally considered a best practice because, if a satellite were to malfunction, the Earth's gravity could drag it out of orbit – and away from other satellites – more quickly."

While the dispute between Amazon and SpaceX is interesting in its own right, it raises further questions about whether our current privacy and intellectual property regimes are ready for what lies ahead.  In 2019, my partner, Marty Zoltick, and I wrote a chapter on this topic as it pertains to privacy and data protection laws – "The Application of Data Protection Laws in (Outer) Space."  Among the topics addressed in our chapter are: (i) what is outer space, and where do the laws of nation states end; (ii) what laws and treaties apply in outer space; (iii) what are the shortcomings of existing data protection regulations; and (iv) what new international laws, rules, and/or regulations are needed to more clearly establish which data protection laws apply when personal data is processed in air and space.

Website operators can consider a host of potential legal claims against entities that scrape their sites’ content without authorization, such as breach of a well-crafted terms of service agreement, copyright infringement, trespass, conversion, common law misappropriation, unfair competition, violations of the Computer Fraud and Abuse Act, misappropriation of trade secrets, and trademark infringement, among others.  Each type of claim has its limits, and multiple claims may intersect or overlap in significant ways, particularly when it comes to preemption or remedies.  Accordingly, the nature and context of both the unauthorized web scraping activities and the scraped content should be carefully evaluated to determine an appropriate response.

For example, a recent complaint filed by Southwest against Kiwi illustrates how a data scrape may lead to potential violations of the Lanham Act where the material scraped includes or is used with protected logos and branding.  In its complaint, Southwest alleges that Kiwi scraped its airline fares, and displays Southwest’s protected “Heart” mark in conjunction with promoting and re-selling Southwest’s fares on Kiwi’s online travel agency site.  Southwest alleges that Kiwi is using its Heart mark in a manner that is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection or association of Kiwi with Southwest, or as to the origin, sponsorship or approval of Kiwi’s goods and services by Southwest in violation of Section 32 of the Lanham Act, 15 U.S.C. § 1114.  Southwest has also alleged claims of false designation of origin and trademark dilution under the Lanham Act.

Southwest has also asserted claims of breach of its website Terms & Conditions, violation of the Computer Fraud and Abuse Act, violation of Texas Penal Code § 33.02 (Breach of Computer Security), and common law unjust enrichment.  The case is Southwest Airlines Co. v. Kiwi.com, Inc. et al., 3:21-cv-00098, pending in the Northern District of Texas.