The California Consumer Privacy Act (CCPA) went into effect on January 1, 2020, and enforcement begins tomorrow, July 1, 2020.  Is your privacy policy compliant?  Here are a few quick questions that may help you determine the answer –

  • Does your privacy policy have a “last updated” date that is less than a year old?
  • Do you fully identify the personal information that your company collects?
  • Do you fully explain the purposes for which you use the collected personal information?
  • Do you share or sell the information, and if so, do you explain to whom and why?
  • Do you identify the rights individuals have with respect to their personal data, including:
    • the right to erase or delete all or some of one’s personal data;
    • the right to a copy of one’s personal data, including in machine readable form;
    • the right to change, update, or fix one’s personal data if it is inaccurate;
    • the right to stop using all or some of one’s personal data (where you have no legal right to keep using it) or to limit use of one’s personal data; and
    • the right to opt out of the sale of one’s personal information.
  • Do you provide at least two methods of contact for individuals to submit requests for information, including at least a toll-free number (or an email address if your business is online-only)?

For more information, see Rothwell Figg’s Privacy, Data Protection, and Cybersecurity Page, including our CCPA Compliance Guide, or contact us directly at

The question is – do wiretapping statutes apply in cases where there is no traditional third party interceptor?  And more practically speaking, how does an entity using plug-ins and cookies avoid liability under wiretapping statutes while there is so much uncertainty in the law?

We previously blogged about this issue in In re: Facebook, Inc. Internet Tracking Litigation (here).  We reported how: (i) the district court dismissed the plaintiff’s action, which brought claims under, inter alia, the Electronic Communications Privacy Act (ECPA) and California Invasion of Privacy Act (CIPA), pursuant to the “party” exception (i.e., there was no third party intermediary); and (ii) the Ninth Circuit reversed on grounds that the “party” exception is inapplicable where the sender is unaware of the transmission.  We also explained how the result of this Ninth Circuit decision was a split between circuit courts on the applicability of wiretapping statutes where a sender’s own computer transmits messages, and also a split within the Ninth Circuit.  And at the time we blogged, Facebook had filed a motion for rehearing before the Ninth Circuit.

Earlier this week, the Ninth Circuit denied Facebook’s motion for reconsideration, thereby solidifying the aforementioned splits.  While according to public sources Facebook has thus far declined to comment on its next steps, it seems likely that Facebook may file a petition for a writ of certiorari in the Supreme Court.

Unless and until the Supreme Court clarifies the scope of the Wiretap Act, those using third party cookies or plug-ins to track users’ Internet activity would be wise to (1) review their disclosures and ensure that they provide detailed information about what third party plug-ins and cookies the site uses, and exactly when and how they work (while being careful to ensure that, at the same time, they do not reveal corporate trade secrets or other confidential information); (2) review their consent procedures to ensure that affirmative consent to the use of the disclosed plug-ins and cookies is sought; and (3) review any contracts and terms of service with the third party.

On the flip side, companies that offer plug-ins may want to contractually require website operators to provide detailed disclosures and seek affirmative consent from users before installing code, or alternatively, they may want to implement their own consent mechanisms for their plug-ins, such as a “2-click solution.”  A 2-click solution is where a user clicks on an image (such as a “Like” button), and then informed consent is obtained directly from the user before installing the plug-in (e.g., “By clicking ‘Like’ you install a plug-in from Company X, which will direct your browser to send a copy of the URL of the visited page to Company X”).
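The 2-click flow described above can be modeled as a small state machine: no third-party code loads until the user has both clicked the placeholder image and affirmatively consented to the disclosure. The sketch below is purely illustrative (the class and method names are our own, and real implementations run in the browser), but it captures the sequence of states:

```python
# Illustrative sketch of a "2-click" consent flow. The plug-in only
# becomes active after (1) the user clicks a placeholder image and
# (2) affirmatively consents to a disclosure. All names are hypothetical.

class TwoClickPlugin:
    DISCLOSURE = (
        "By clicking 'Like' you install a plug-in from Company X, which "
        "will direct your browser to send a copy of the URL of the "
        "visited page to Company X."
    )

    def __init__(self):
        # Initially the page shows only an inert image; no third-party
        # code has been loaded and no data flows to Company X.
        self.state = "placeholder"

    def first_click(self):
        """Click 1: show the disclosure; still nothing installed."""
        if self.state == "placeholder":
            self.state = "disclosure_shown"
        return self.DISCLOSURE

    def second_click(self, consented: bool):
        """Click 2: load the plug-in only if the user consented."""
        if self.state != "disclosure_shown":
            raise RuntimeError("disclosure must be shown before consent")
        self.state = "active" if consented else "placeholder"
        return self.state

plugin = TwoClickPlugin()
plugin.first_click()       # click 1: disclosure shown, nothing installed
plugin.second_click(True)  # click 2: affirmative consent; plug-in loads
```

The point of the two states is that the tracking communication described in the disclosure cannot occur until the flow reaches "active," which is what distinguishes this approach from a plug-in that loads on page view.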

If you have any privacy questions related to your company’s use of plug-ins or cookies, please contact us at

Articles summarizing CCPA often state that it applies to for-profit businesses doing business in California that satisfy certain criteria, but they rarely mention that CCPA also applies to some non-profits.

The CCPA defines “business” as “a sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity that is organized or operated for the profit or financial benefit of its shareholders or other owners, that collects consumers’ personal information, or on the behalf of which such information is collected and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information, that does business in the State of California, and that satisfies one or more of the following thresholds…[annual gross revenue in excess of $25M; buys/receives/sells/shares personal information of 50K or more consumers; or derives 50 percent or more of its annual revenues from selling consumers’ personal information].”  See Cal. Civ. Code § 1798.140(c)(1).

However, the statute does not stop there.  It goes on to explain instances when a non-profit could be subject to CCPA.  Specifically, if the non-profit entity controls or is controlled by an entity that qualifies as a “business” under CCPA (i.e., meets the above criteria), and shares common branding with that business, then the non-profit is subject to CCPA.

“Control” or “controlled” is defined broadly by CCPA as “ownership of, or the power to vote, more than 50 percent of the outstanding shares of any class of voting security of a business; control in any manner over the election of a majority of the directors, or of individuals exercising similar functions; or the power to exercise controlling influence over the management of a company” (emphasis added).  What “the power to exercise controlling influence over the management of a company” means is yet to be determined.

“Common branding” is defined by CCPA as “a shared name, servicemark, or trademark.”

If you need help determining whether your non-profit is exempt from CCPA, please contact us at

On May 28, President Donald Trump issued an executive order on preventing online censorship targeting the provision of the Communications Decency Act, or CDA, titled “Protection for good Samaritan blocking and screening of offensive material.”[1]

While there remain serious doubts as to the legality of the order, including the extent to which it is a constitutionally impermissible viewpoint-based regulation of speech, the order makes it clear that the Trump administration will be urging, or even directing, regulators to scrutinize online speech with a view toward attaching consequences to such speech in circumstances in which regulators have, in the past, treated such speech as immune.

For this reason, no matter what the order’s legal merits may prove to be, we recommend that companies operating online platforms take this opportunity to review their terms of service agreements and content moderation guidelines. In addition to discussing some areas of focus, we also offer some practical tips for reducing litigation risks.

The CDA Safe Harbor Provisions

The order purports to circumscribe an important but rarely discussed law known as Title 47 of the U.S. Code, Section 230(c).

This law creates safe harbors that protect most online platforms from liability for the words and other communications of third parties who use those online platforms. The safe harbor provisions of Section 230(c) set forth two protections: (1) a publisher protection that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,”[2] and (2) a good Samaritan blocking protection that no provider or user of an interactive computer service shall be held liable on account of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”[3]

Courts have historically interpreted the publisher provision as shielding service providers from liability for all publication decisions, such as editing, removing or posting information, with respect to content entirely created by third parties.[4]

With decisions issued this year, courts continue to uphold that, with limited exceptions,[5] the publisher provision broadly shields websites and other online computer-based services from liability as a publisher for material posted by others on the service, even when such third-party content is directed to illicit drug sales or promotes attacks committed by a terrorist organization.[6]

Thus, the publisher provision remains a vital shield for online platforms that choose to do little about the third-party content that they host.

The good Samaritan provision provides an additional shield from liability for any provider of an interactive computer service that restricts access to content because the provider considers it obscene or otherwise objectionable.[7]

While Congress’ motivating concern for the good Samaritan provision was allowing websites and service operators to restrict access to pornography, the language of the statute is much broader, covering any “excessively violent, harassing or otherwise objectionable” content.[8]

But websites and service operators do not have unfettered discretion to declare online content objectionable, and courts have held that, for example, blocking and filtering decisions driven by anticompetitive animus are not entitled to immunity.[9] Moreover, platforms have an affirmative obligation to block content promoting sex trafficking or terrorism.[10]

Courts over the years have refused to immunize online-based intermediaries under certain scenarios.[11] As stated by one court, “[t]he Communications Decency Act was not meant to create a lawless no-man’s-land on the Internet.”[12] Courts have, for example, held interactive service providers liable where their own acts contribute to or induce third parties to express illegal preferences[13] or engage in illegal activities.[14]

The Order

The order came days after Twitter flagged one of Trump’s tweets as containing misinformation under Twitter’s fact-checking policy. On its surface, Twitter’s flagging appears to fall within the good Samaritan safe harbor provision of Section 230(c). However, the order states that “[o]nline platforms are engaging in selective censorship that is harming our national discourse,” and references Twitter’s decision to “place a warning label on certain tweets that clearly reflects political bias.”

To address these perceived biases, the order directs the Commerce Department to petition the Federal Communications Commission to reexamine the scope of the CDA’s safe harbor provisions, including the interactions between the publisher and good Samaritan provisions, as well as the conditions under which action restricting access to or availability of material is taken in good faith.[15]

The order has therefore cast aspects of Section 230(c) protection into doubt — at least in the context of administrative action by executive agencies. Putting aside the high likelihood that the order will be given no legal weight by the courts, there are pragmatic steps that online platforms can take to reduce their Section 230(c) litigation risk.

Areas in Which to Reduce Risk

In view of existing and potential limitations in scope of the CDA’s safe harbor provisions, we offer a few best practices with respect to terms of service agreements to keep in mind in order to reduce risks from litigation or potentially adverse administrative actions.

Clearly distinguish third-party content from the service provider’s content.

The publisher safe harbor provision only protects service providers against claims arising from the publication of content provided by another information content provider.

The terms of service should clearly define information owned and created by a service provider, such as the code, application programming interfaces, and other features and intellectual property owned by the service provider, in addition to information owned by third parties such as users and advertisers.

In publishing or republishing third-party content on a website or app, service providers should be careful that their service at most merely transforms — rather than augments or modifies — such third-party content for publication on an app or service. The greater the lines are blurred between service provider and user-created content, the more risk service providers face in falling outside the scope of Section 230(c)(1).

Clearly disclose your online platform’s right to remove or restrict access to third-party content.

A service provider’s terms of service should document its right to remove or restrict access to content that may be in violation of the terms of service or any applicable community guidelines.

Consider building in consent to your moderation as a stand-alone aspect of your terms and conditions.

Most people dislike incivility, violence and hate on the online platforms that they frequent. Instead of placing a warning that you retain the right to moderate and ban certain types of speech, consider making this promise to establish a walled garden of civility as a separate feature of your online platform. This will likely reduce risk even beyond changes to the terms and conditions.

Update and adapt internal content moderation policies.

Technological developments will continue to pose new challenges to service operators, from new and more harmful types of malicious code to deep-fake content generated by artificial intelligence technology. To ease the burdens of content moderation, consider automated means of screening content and enlisting users to help in the moderation process.

Some content moderation and take-downs will be necessary given the existing limitations in the scope of Section 230, but note that courts have held that notice of the illicit nature of third-party content is insufficient to make such content the service provider’s own speech.[16]

Make certain content standards publicly available to set expectations about acceptable postings.

Seizing this opportunity can serve to undercut complaints about partiality. For example, if you make it clear that all uses of a certain expletive will result in removal, it will be harder for a complainant to articulate bias. Bias is not, in and of itself, a Section 230(c) factor. However, because of the order, it would be wise to at least address this risk vector short of litigating Section 230(c) requirements.

Be mindful of industry regulations applicable to your service.

Section 230(c) has several carve-outs, including for federal criminal law, intellectual property law, and electronic communications privacy law.

One court refused to immunize an entity providing services in the home rental space where its service allowed users to target prospective roommates based on race in violation of anti-discrimination laws.[17] Another entity faced potential liability where its advertising platform allowed landlords and real estate brokers to exclude persons of color, families with children, women, people with disabilities and other protected groups from receiving housing ads.[18]

Finally, remember to encourage civil discussion and debate. After all, the remedy for bad speech is more speech, not enforced silence. And be prepared to challenge the order in court in the event that any agency is foolish enough to seek to enforce it.

[1] 47 U.S.C. § 230.

[2] 47 U.S.C. § 230(c)(1).

[3] 47 U.S.C. § 230(c)(2)(A).

[4] See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009), as amended (Sept. 28, 2009).

[5] Section 230 has a few narrow exceptions, including liability for federal criminal law, intellectual property law, and the Electronic Communications Privacy Act. Additionally, in 2018, Congress passed the Fight Online Sex Trafficking Act (“FOSTA”), codified at 47 U.S.C. § 230(e), providing that Section 230 has “no effect on sex trafficking law” and shall not “be construed to impair or limit” civil claims brought under Section 1595 or criminal charges brought under state law if the underlying conduct would constitute a violation of Sections 1591 or 2421A. Woodhull Freedom Found. v. United States, No. 18-5298, 2020 WL 398625 (D.C. Cir. Jan. 24, 2020).

[6] Knight First Amendment Inst. at Columbia Univ. v. Trump, 953 F.3d 216, 222 (2d Cir. 2020) (noting “Section 230 of the Communications Decency Act explicitly allows social media websites (among others) to filter and censor content posted on their platforms without thereby becoming a ‘publisher’”); Sen v. Amazon.com, Inc., 793 F. App’x 626 (9th Cir. 2020) (finding “district court properly granted summary judgment on Sen’s claim for tortious interference with prospective and actual business relations, and interference with an economic advantage, based on the third-party review posted on defendant’s website”); Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), cert. denied, No. 19-849, 2020 WL 2515458 (U.S. May 18, 2020) (finding site operator immune under Section 230(c)(1) where service allowed users to register with site anonymously and recommended groups to users, thereby facilitating a fatal drug transaction); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), cert. denied, No. 19-859, 2020 WL 2515485 (U.S. May 18, 2020) (finding Facebook immune under Section 230 against anti-terrorism claims that Hamas, a U.S. designated foreign terrorist organization, used Facebook to post content that encouraged terrorist attacks in Israel); Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1265 (D.C. Cir. 2019) (finding Google immune from allegations that it “publish[es] the content of scam locksmiths’ websites, translat[es] street-address and area-code information on those websites into map pinpoints, and allegedly publish[es] the defendants’ own original content”).

[7] For example, Section 230(c)(2)(A) could apply to those who developed, even in part, the content at issue, or to claims arising not from publishing or speaking but from actions taken to restrict access to obscene or objectionable content. See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009), as amended (Sept. 28, 2009).

[8] Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1047 (9th Cir. 2019).

[9] Id.

[10] Section 230 was amended by the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) in 2018 to require the removal of material violating federal and state sex trafficking laws.

[11] Jeff Kosseff, “The Gradual Erosion of the Law That Shaped the Internet: Section 230’s Evolution over Two Decades,” 18 Colum. Sci. & Tech. L. Rev. 1, 33–34 (2016).

[12] Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1164 (9th Cir. 2008).

[13] Id.

[14] J.S. v. Vill. Voice Media Holdings, L.L.C., 184 Wash. 2d 95, 103, 359 P.3d 714, 718 (2015) (addressing need “to ascertain whether in fact Backpage designed its posting rules to induce sex trafficking to determine whether Backpage is subject to suit under the CDA”).

[15] The order also directs the Federal Trade Commission to evaluate potential anti-conservative bias on social media platforms under its Section 5 authority.

In addition, the order directs that each executive department and agency review and report its Federal spending on advertising and marketing paid to online platforms, and that the Department of Justice review any viewpoint-based speech restrictions imposed by each online platform identified in the report and “assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.”

This portion of the order is, in our view, particularly vulnerable to invalidation under the First Amendment.

[16] Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1265 (D.C. Cir. 2019); Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 420 (1st Cir. 2007).

[17] See, e.g., Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1163 (9th Cir. 2008).

[18] Nat’l Fair Housing Alliance v. Facebook, Inc., No. 18-cv-2689 (S.D.N.Y., March 2018) (Dkt. 1) (Complaint).

This article was originally published in Law360’s Expert Analysis section on June 22, 2020. To read the article on Law360’s site, please visit:

In the United States, transparency is the name of the game in privacy law.  (This is in contrast to the GDPR, which is focused on creating a “privacy by design” legal framework.)  Consistent with this trend in U.S. privacy laws is New York’s Public Oversight of Surveillance Technology Act (POST Act), which is expected to become law in the next month.

While the POST Act has been pending for years, the bill gained momentum in recent weeks in view of the nationwide protests following several killings by police of Black people, including the death of George Floyd.  The bill requires the NYPD to reveal details on the surveillance tools it uses to monitor people, including, inter alia, facial recognition software and cellphone trackers.  Importantly, the POST Act does not ban the use of any surveillance technology (unlike laws and corporate policies that have gained attention in recent weeks, which have banned the use of facial recognition software).


On June 1, 2020, the California Attorney General submitted the final text of the California Consumer Privacy Act (CCPA) regulations to the California Office of Administrative Law (OAL) for approval, which are substantially the same as the draft regulations released on March 11, 2020.  Despite the ongoing development of the regulations, the CCPA took effect on January 1, and the enforcement of the CCPA is slated to begin on July 1, 2020, regardless of whether the OAL approves the final text.

As a company that does business in one or more states outside of California, you may be asking whether it is fair that your company has to comply with a California law.  On a very general level, CCPA applies to companies that collect, share, or sell California consumers’ personal data, and (a) have annual gross revenue in excess of $25 million, (b) possess the personal information of 50,000 or more consumers, households, or devices, or (c) earn more than half of their annual revenue from selling consumers’ personal information.  That means that a company based solely in Boston (or any other non-Californian city), with annual gross revenue in excess of $25 million, could be subject to CCPA simply because a California resident orders some products.
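The threshold analysis above can be expressed as a simple check. The sketch below is only an illustration of the criteria as summarized in this post, not legal advice; the function and parameter names are our own:

```python
# Rough sketch of the CCPA applicability thresholds summarized above.
# An illustration only, not legal advice; names are our own.

def ccpa_may_apply(handles_ca_personal_data: bool,
                   annual_gross_revenue: float,
                   ca_records_held: int,
                   revenue_share_from_selling_data: float) -> bool:
    """True if a for-profit business handling California consumers'
    personal data meets at least one of the three statutory thresholds."""
    if not handles_ca_personal_data:
        return False
    return (
        annual_gross_revenue > 25_000_000          # (a) revenue > $25M
        or ca_records_held >= 50_000               # (b) 50K+ consumers/households/devices
        or revenue_share_from_selling_data > 0.5   # (c) >50% of revenue from selling data
    )

# A Boston company with >$25M revenue and a single California customer:
print(ccpa_may_apply(True, 30_000_000, 1, 0.0))  # -> True
```

Note that the thresholds are disjunctive: satisfying any one of (a), (b), or (c) is enough, which is why the hypothetical Boston company is swept in on revenue alone.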

What seems unfair may very well render CCPA unconstitutional.  The basis for this constitutionality challenge is the Dormant Commerce Clause, a judge-created doctrine negatively implied by the Commerce Clause of the United States Constitution.  (The Commerce Clause grants Congress (not the States) the power “To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes” (Article I, Section 8, Clause 3).)  Under the second prong of the Pike Balancing Test, the Court considers whether the burden imposed by the state law on interstate commerce is clearly excessive in relation to the putative local benefits.

In performing this balancing test, there are several considerations that could sway the outcome, including extraterritoriality, inconsistent regulation, and the question of what renders a burden on interstate commerce “clearly excessive.”

  • Extraterritoriality is the idea that the Commerce Clause precludes the application of a state statute to commerce that takes place wholly outside the state’s borders, whether or not the commerce has effects within the State.
  • Inconsistent regulation is the idea that a company should not be subject to regulations from different states, calling for different, inconsistent actions with respect to the same subject matter. (This may become a bigger issue as more states pass privacy legislation.)
  • What renders a burden on interstate commerce “clearly excessive” looks at exactly what needs to be burdened. For example, would a substantial decrease in the number of transactions constitute a “clearly excessive” burden?  What about just a decrease in the profitability of companies doing business, given the additional expenses they face as a result of CCPA?

Some thought leaders on this issue have posited that courts may, instead of striking down state privacy laws, respect the role of states as “laboratories,” pointing out that state statutes in other areas, such as Internet regulations, have not been struck down under the Dormant Commerce Clause despite similar concerns.

With many businesses still hitting the “pause” button and employees working remotely for at least the foreseeable future, it is a great time to start considering whether the emerging trend of data privacy regulations may apply to your business, and if so, to start mapping out the steps towards compliance and overall better privacy hygiene.

The CCPA enforcement date (July 1, 2020) has yet to be pushed back, and many companies have already lost the first quarter for implementing compliance best practices. Since much of the work that needs to be done can be accomplished in-house, utilizing the downtime during quarantine is a great way to save money and to really get your ducks in a row. Companies are also looking for ways to engage employees and get the team together virtually to maintain a corporate sense of self. These are great opportunities to provide privacy and security training and to begin implementing a “privacy by design” philosophy and approach within your organization.

Bottom line: Now is the perfect time for you and your team to roll up your sleeves and freshen up your data privacy practices. And with our Data Privacy Spring Cleaning Guide, we’ve made it easy to get started with just 5 simple steps.

Last month, companies around the United States started to reopen their doors to their employees and customers, but not without first considering what “checks” should be done to ensure a safe environment for all.  Temperature checks, COVID-19 testing, symptom reporting, travel history questionnaires, geolocation tracking and other surveillance measures, and even using AI to intercept communications where relevant information, like symptoms, is self-reported are among the measures that businesses are taking.

Companies are also considering waivers of liability that their customers and employees may need to sign at the door, waiving claims for personal injury and other potential liability in connection with COVID-19.

But there is one thing that may have slipped a lot of companies’ minds… what about the data?  Companies must think about how they will manage the personally identifiable information (PII) and personal health information (PHI) that they collect from their employees and customers, and the best time to do that is now. 

Why now?  

In addition to the numerous reasons that have always existed ((i) the sooner you do it, the easier it is; (ii) there are hundreds of state and federal privacy laws and regulations out there, and you don’t want to be in violation of any of them; (iii) Section 5 of the FTC Act; (iv) state UDAP laws; etc.), now there is one more reason… and it is a compelling one: there are two pending federal bills that would temporarily regulate the collection, transfer, and processing of certain personal data in connection with COVID-19 related purposes, and one of them includes a private right of action with significant fines.

The two bills are: (1) the Republican proposal, the COVID-19 Consumer Data Protection Act of 2020 (referred to herein as “CCDPA”); and (2) the Democrat proposal, the Public Health Emergency Privacy Act (publicly referred to as “PHEPA”).  Both bills require express consent from individuals before their data is collected, impose transparency requirements, and restrict use, but they differ in several ways.  First, PHEPA covers a broader set of PII.  CCDPA applies to geolocation data, proximity data, and PHI, whereas PHEPA applies to (1) physical and behavioral health information, testing and examination information, information concerning infection or the likelihood of infection, and genetic data, biological samples, and biometrics; (2) any information collected for the purpose of tracking, screening, monitoring, contact tracing, mitigation, or otherwise in connection with the COVID-19 public health emergency, such as geolocation data, proximity data, demographic data, and contact-tracing data for identifiable individuals (such as an address book or call log); and (3) any data collected from a personal device.  Second, PHEPA applies to government entities and private organizations, whereas CCDPA applies only to private organizations.  Third, only CCDPA expressly preempts other federal and state laws.  Fourth, PHEPA creates a private right of action with considerable fines.

The last point is worth calling out.  PHEPA provides: “a violation of this Act with respect to the emergency health data of an individual constitutes a concrete and particularized injury in fact to that individual” (emphasis added), allowing individuals alleging violation to bring civil actions under PHEPA.  This language is important because it provides an argument that a violation of the statute alone would confer standing upon a plaintiff.  [Just as a reminder, the Constitution requires that a plaintiff bringing suit must have standing, i.e., (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.  To establish “injury in fact” in data privacy suits, plaintiffs must prove that their injuries are “concrete and particularized” and “certainly impending.”  Spokeo, Inc. v. Robins, 136 S. Ct. 1540, 1542 (2016).]  Courts have recently been grappling with the issue of whether plaintiffs bringing suit for violations of other privacy statutes have standing to sue regardless of whether they are able to identify any specific injury (beyond violation of the statute), with plaintiffs often arguing that PII is extremely sensitive data for which the mishandling alone constitutes a concrete and particularized injury.  For example, just last month in Bryant v. Compass Group USA, Inc., the Seventh Circuit, in a unanimous decision, held that a plaintiff alleging a mere violation of Section 15(b) of the Illinois Biometric Information Privacy Act (BIPA) – which requires prior notice and consent before the collection of biometric information and data – had Article III standing without alleging further injury.  This is particularly noteworthy because BIPA – unlike PHEPA – does not expressly say that a violation “constitutes a concrete and particularized injury in fact to the individual.”

The reason the above standing discussion should matter to businesses is this: PHEPA allows damages of $100-$1,000 per violation in cases of negligent violation, and $500-$5,000 per violation in cases of reckless, willful, or intentional violation, as well as attorney’s fees, litigation costs, and “any other relief, including equitable and declaratory relief, that the court determines appropriate.”  So, if plaintiffs have standing to sue just by virtue of the defendant violating the statute (for example, not obtaining express consent before data is collected), without showing any resulting injury, that opens the gateway to plaintiffs recovering $100-$5,000 per violation.

What’s more, “per violation” could be interpreted to mean each time a business collects, uses, or discloses covered PII.  So if your business conducts daily temperature checks and collects the associated data, you could be looking at damages of $100-$5,000 per day per person.  And presumably, if your business is collecting multiple types of data on a daily basis – e.g., temperature checks, symptom checks, recent contacts with people with COVID-19, etc. – then the aforementioned damages could be much higher, i.e., a multiple of the number of pieces of data you’re collecting.
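To see how quickly that exposure could compound, here is a rough back-of-the-envelope sketch.  The headcount, time period, and number of data types below are purely hypothetical illustrations, and the calculation assumes the broad reading of “per violation” described above (each collection of each data type counting separately):

```python
# Hypothetical illustration only: per-violation statutory damages exposure
# under PHEPA's $100-$5,000 range, assuming each daily collection of each
# data type counts as a separate violation.

def exposure_range(people, days, data_types_per_day):
    """Return (low, high) total statutory damages for daily collection."""
    violations = people * days * data_types_per_day
    return violations * 100, violations * 5000

# Example: 50 employees, 90 working days, 3 data types collected daily
# (e.g., temperature, symptoms, recent contacts)
low, high = exposure_range(50, 90, 3)
print(f"${low:,} to ${high:,}")  # $1,350,000 to $67,500,000
```

Even for a small workforce, the broad reading turns routine daily screening into potential eight-figure exposure, which is why the interpretation of “per violation” matters so much.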

Now, there is no reason to panic…yet.  CCDPA and PHEPA are just bills at this point.  But, their introduction should serve as a wake-up call for companies and government entities alike to start considering the data they are collecting, or considering collecting, from their customers and employees in connection with the COVID-19 health crisis, and their practices and compliance strategies associated therewith.


Are targeted ads the result of wiretapping?  Companies track your browsing history all the time through the use of, inter alia, cookies, and then mine the data they receive for purposes like targeted advertising.  Because the cookies cause users’ computers to send electronic communications without the users’ knowledge, is this wiretapping?

Put differently, can a defendant “wiretap” a communication that it receives directly from a plaintiff?  This is the question that Facebook is asking the United States Court of Appeals for the Ninth Circuit to consider in its Petition for Panel Rehearing and for Rehearing En Banc in In re Facebook Internet Tracking Litigation, No. 17-17486.

The wiretap laws have an interesting history that begins with listening in on private telephone calls facilitated by the telephone companies, but they have long also embraced data transmissions across the internet. In 1986, Congress enacted the Electronic Communications Privacy Act (ECPA) to extend restrictions on government wiretaps of telephone calls to include transmissions of electronic data by computer (18 U.S.C. § 2510 et seq.) and to add new provisions prohibiting access to stored electronic communications, i.e., the Stored Communications Act (SCA, 18 U.S.C. § 2701 et seq.). The ECPA has since been amended by the Communications Assistance for Law Enforcement Act (CALEA) of 1994, the USA PATRIOT Act (2001), the USA PATRIOT reauthorization acts (2006), and the FISA Amendments Act (2008). Through all of these amendments, Title I of the ECPA has continued to protect wire, oral, and electronic communications while in transit, requiring search warrants that meet a more stringent standard than in other settings. Commentators have long wondered whether browsing and similar communications, which are routinely “listened to” by ad technology, constitute protected communications.

While both the ECPA (commonly referred to as the federal Wiretap Act) and the California Invasion of Privacy Act (CIPA) impose civil and criminal penalties on a person who “intercepts” an “electronic communication,” both statutes contain an exemption from liability for a person who is a “party” to the communication.  Thus, the question raised by Facebook’s petition is whether a company that installs code on users’ computers, such that the users’ computers automatically send information back to the company regarding the users’ browsing history (to be used for, inter alia, targeted advertising), is a “party” that falls within the wiretap laws’ exception.

Some of the relevant facts that plaintiffs in the In re Facebook Internet Tracking Litigation allege are as follows:

  • During a 16-month period, when plaintiffs visited third-party websites that contained Facebook “plug-ins” (such as its “Like” button), the plug-in code would direct plaintiffs’ browsers to send Facebook a copy of the URL of the visited page (known as a “referrer header”).
  • Facebook used “cookies” to compile these referrer headers into personal profiles, and then used that data to improve targeting for advertisements.
  • Facebook never promised not to collect this data – but its disclosures suggested that it would not receive referrer headers from logged-out users.
  • Facebook tracked logged-out users’ browsing activities and sold that information to advertisers without the users’ knowledge.

The Northern District of California dismissed plaintiffs’ wiretapping claims pursuant to the “party” exception of the federal Wiretap Act and CIPA because Facebook received the data at issue directly from plaintiffs (more precisely, from plaintiffs’ computers, via the Facebook “plug-ins”).

The Ninth Circuit, relying on decisions from the First and Seventh Circuits (that “implicitly assumed” the “party” exception is inapplicable when the sender is unaware of the transmission), vacated the district court’s dismissal of the wiretapping claims, holding that “entities that surreptitiously duplicate transmissions between two parties are not parties to communications” under the wiretapping statutes.

Facebook now argues that the Ninth Circuit should grant rehearing because the panel’s April decision conflicts with precedent and purportedly “fundamentally changes the definition of ‘wiretapping’ under the Federal Wiretap Act and the California Invasion of Privacy Act (CIPA), both of which have not just civil but also criminal penalties.”  Notably, the Ninth Circuit’s decision conflicts not just with precedent of other circuits (i.e., the Second, Third, Fifth, and Sixth Circuits), but also with a prior ruling of the Ninth Circuit.

The prior Ninth Circuit ruling on this issue was Konop v. Hawaiian Airlines, 302 F.3d 868 (9th Cir. 2002).  In that case, the Court focused on the “interception” element instead of the “party” exception.  The Court held that a defendant “intercept[s]” a communication under the Wiretap Act only if it “stop[s], seize[s], or interrupt[s]” the communication “in progress or course before arrival” at its destination, and that obtaining a communication directly from a sender – even if the sender did not have knowledge of the transmission – is not an interception.

Among the other circuit cases with which the Ninth Circuit’s Facebook decision conflicts is In re Google Cookie Placement, 806 F.3d 125 (3d Cir. 2015), a Third Circuit case based on similar facts.  In In re Google Cookie Placement, the plaintiffs alleged that Google violated the Wiretap Act and CIPA by acquiring referrer headers “that the plaintiffs sent directly to the defendants.”  There, the court held that a direct “recipient of a communication is necessarily one of its parties,” and that when it comes to the wiretapping statutes, whether the communication was obtained by deceit upon the sender is irrelevant.  The Third Circuit relied on opinions of the Second, Fifth, and Sixth Circuits, and concluded, based on the text and history of the federal Wiretap Act, that the applicability of the “party” exception does not turn on the sender’s knowledge or intent.

Any company that tracks browsing histories should be paying close attention to this issue.  This case should also serve as a reminder for companies to revisit their terms and policies to ensure they accurately describe web tracking and data collection activities.

Partners Martin Zoltick, Jenny Colgate, and Christopher Ott will present a webinar with the Northern Virginia Technology Council (NVTC) on “Employee and Customer Health Data: Back to Work in COVID-19” on Friday, June 5, 2020, at 10 a.m. EDT. The virtual event is open to all interested in attending.

As organizations open their doors again, it’s important that they do so with certain safety measures in place to minimize COVID-19 risks for both employees and customers. With our unique perspective on privacy risks and complex, high-technology litigation, we will explore some safety measures being considered by employers with an eye toward privacy, data protection, and cybersecurity. For example, your workplace may be considering temperature checks to ensure those entering your workplace are fever-free; video monitoring and surveillance to ensure social distancing rules are followed; or questionnaires to evaluate people’s health condition and symptoms, contacts, and travel histories.

In the first of NVTC’s “Getting Back to Work” virtual roundtable series, attorneys from Rothwell Figg will discuss best practices for obtaining, storing, sharing, and disposing of this data, as well as how organizations can manage workplace privacy and security through the adoption of reasonable and effective practices while at the same time taking measures to protect employees and customers from the transmission of COVID-19.

The event is open to all. To register, please visit the page below and create an account or log in to your existing NVTC account.