Have you seen the latest headline about Twitter?  It may be time to double-check your corporate practices and check in with your employees.

The top new FTC privacy probe concerns Twitter, which the FTC has charged with breaching a 2011 consent decree by taking phone numbers and email addresses that users provided for security purposes (e.g., two-factor authentication) and using that information to target users with advertisements.  According to Twitter, the FTC has alleged that this conduct breached the 2011 consent decree (which resulted from a hacker incident), which purportedly “bars Twitter for 20 years from misleading users about the extent that it protects their privacy.”  Twitter’s misuse of users’ phone numbers and email addresses for direct advertising was self-revealed by the company in an October blog post, which noted that it did not know how many people were impacted.  Twitter called the misuse “inadvertent.”  Twitter said on Monday that it expected to pay between $150 million and $250 million to resolve the FTC investigation.

This story should have all corporations taking a look at their own corporate practices and making sure that similar actions are not happening behind their own closed doors.  All companies are “barred” from misleading users about the extent to which they protect user privacy by virtue of, inter alia, Section 5 of the FTC Act and state UDAP statutes.  (In Twitter’s case, because of its 2011 security incident, it also was barred via a consent decree.)  Also, with many employees working remotely these days, it may be harder for companies to oversee how different parts of the company are interacting.  Perhaps in Twitter’s case this alone led to the issue?  Who knows.

In any event, let Twitter’s big headline be a reminder to all companies to: (1) review your privacy policy and any other representations that you make to customers regarding the privacy and security of their data; (2) review your corporate procedures (not just policies, but check in with the boots on the ground) to ensure they are consistent with your privacy policy and other representations that you make to customers; and (3) make sure corporate training events regarding privacy and security are in place, so as to create a corporate culture of data protection and privacy by design. 

Mistakes happen, but diligence can prevent them…and can help serve as a defense for when they do happen.

 

Last week, on July 16, 2020, Europe’s top court invalidated the EU-US data flow arrangement called Privacy Shield.  In a world with competing privacy regulations, many thousands of global businesses relied heavily on Privacy Shield to conduct their business across EU-US borders (there are 5,300+ current Privacy Shield participants, and the transatlantic economic relationship is valued at $7.1 trillion), so the decision sent shockwaves through the business/data privacy community.  Further, the decision has implications that extend beyond the Privacy Shield framework, and beyond EU-US data transfers.

The decision arose from a case brought by Mr. Maximillian Schrems, an Austrian lawyer, who requested in 2015 that the Irish Data Protection Commissioner (the “Irish DPA”) order the suspension or prohibition, in the future, of the transfer by Facebook Ireland of his personal data to Facebook, Inc. (the latter being located in the United States), a case commonly referred to in the privacy community as Schrems II.  (Notably, Schrems I was an earlier case brought by the same lawyer challenging Facebook’s transfer of personal data to the U.S. under a prior EU-US data transfer framework that had been determined adequate, the US-EU Safe Harbor framework, which was struck down as a result of that case.  Following that decision, Facebook turned to Standard Contractual Clauses (SCCs) as a basis for cross-border data transfers, causing Mr. Schrems to file the Schrems II case.  And thereafter, the Privacy Shield framework was established and determined adequate, providing a second basis for Facebook’s cross-border data transfers.)

A copy of the Schrems II decision can be found here.  A copy of the European Court’s 3-page press release, summarizing the 31-page decision, can be found here.

In Schrems II, the grounds for review of Facebook’s cross-border data transfers were the United States’ digital surveillance policies and practices, including the Foreign Intelligence Surveillance Act (FISA) and Executive Order 12333 (which sanctions bulk data collection).  Schrems argued that these U.S. surveillance practices are inconsistent with European fundamental rights to privacy and data protection, as set out in the EU Charter of Fundamental Rights, the European Convention on Human Rights, and several pieces of EU legislation, including the General Data Protection Regulation (specifically, Mr. Schrems called out Articles 7, 8 and 47 of the Charter).  In other words, transferred EU personal data may be at risk of being accessed and processed by the U.S. government (e.g., the CIA, FBI and/or NSA) in a manner incompatible with privacy rights guaranteed in the EU, and EU data subjects may have no right to an effective remedy if this happens.  [Notably, while the original request was focused solely on the SCCs, the Court found the validity of the Privacy Shield Decision relevant to assessing the sufficiency of SCCs, and also any obligations to which the supervisory authority may be subject to suspend or prohibit such a transfer.  See, e.g., Decision at 25.]

In reaching its decision to invalidate the Privacy Shield, the European Court pointed out issues with the U.S. surveillance framework, as it applies to EU-US data transfers, such as (1) that “E.O. 12333 allows the NSA to access data ‘in transit’ to the United States, by accessing underwater cables on the floor of the Atlantic, and to collect and retain such data before arriving in the United States and being subject there to FISA”; (2) “activities conducted pursuant to E.O. 12333 are not governed by statute”; and (3) “non-US persons are covered only by PPD-28, which merely states that intelligence activities should be ‘as tailored as feasible’.”  See Decision at 14-15.  The Court also pointed out, and focused quite heavily on, the lack of remedies for EU citizens whose rights have been violated as a result of US surveillance practices.  For example, it pointed out that the Fourth Amendment to the U.S. Constitution does not apply to EU citizens; the NSA’s activities based on E.O. 12333 are not subject to judicial oversight and are non-justiciable; and the Privacy Shield Ombudsperson is not a tribunal within the meaning of Article 47 of the Charter, and thus, U.S. law does not afford EU citizens the protection required.  See Decision at 15, 29 (“the existence of such a lacuna in judicial protection in respect of interferences with intelligence programmes based on that presidential decree [E.O. 12333] makes it impossible to conclude, as the Commission did in the Privacy Shield Decision, that United States law ensures a level of protection essentially equivalent to that guaranteed by Article 47 of the Charter.”).

With respect to SCCs, the European Court held that “the competent supervisory authority is required to suspend or prohibit a transfer of data to a third country pursuant to standard data protection clauses adopted by the Commission, if, in the view of that supervisory authority and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country and the protection of the data transferred that is required by EU law, in particular by Articles 45 and 46 of the GDPR and by the Charter, cannot be ensured by other means, where the controller or a processor has not itself suspended or put an end to the transfer.”  See Decision at 22.

On the same day the European Court issued its decision, U.S. Secretary of Commerce Wilbur Ross issued the following statement regarding the ruling: “While the Department of Commerce is deeply disappointed that the court appears to have invalidated the European Commission’s adequacy decision underlying the EU-U.S. Privacy Shield, we are still studying the decision to fully understand its practical impacts,” said Secretary Wilbur Ross. “We have been and will remain in close contact with the European Commission and European Data Protection Board on this matter and hope to be able to limit the negative consequences to the $7.1 trillion transatlantic economic relationship that is so vital to our respective citizens, companies, and governments. Data flows are essential not just to tech companies—but to businesses of all sizes in every sector. As our economies continue their post-COVID-19 recovery, it is critical that companies—including the 5,300+ current Privacy Shield participants—be able to transfer data without interruption, consistent with the strong protections offered by Privacy Shield.”  The full press release can be found here.  [https://www.commerce.gov/news/press-releases/2020/07/us-secretary-commerce-wilbur-ross-statement-schrems-ii-ruling-and]

The United Kingdom’s Information Commissioner’s Office made a similar statement, implicitly acknowledging that the impact of the Schrems II decision extends to all EU cross-border data transfers, not just EU-US transfers under the Privacy Shield.  A copy of the ICO’s statement can be found here.

Since the decision, the big headline has seemingly been two-fold: (1) SCCs survive, but (2) Privacy Shield has been invalidated.  Respectfully, the first half of this is a half-truth.  Companies proceeding with cross-border data transfers using SCCs and binding corporate rules should consult with counsel to assess the risk involved in their transfers and evaluate alternative transfer frameworks.  Unless and until the United States changes its surveillance practices, including ceasing surveillance of in-transit data (i.e., data intercepted before it arrives in the U.S.) and providing EU data subjects with a right of redress regardless of the surveillance program to which their data is subject, the Schrems II decision puts nearly all EU-US data transfers at risk in industries subject to government surveillance.  For companies that have received requests for information from U.S. law enforcement in the past, and that want to avoid risk, the safest way to proceed may be to (1) consider whether the data at issue is even needed in the first place, and (2) consider simply transferring data processing for European data subjects to Europe.  Other bases for data transfers, such as binding corporate rules and SCCs, could also be considered, but back-up plans should be in place, as proceeding under these frameworks could be risky in view of the Schrems II decision.

Companies should also take a close look at their policies and practices for responding to requests for information from U.S. law enforcement, including the number of requests the company has received; the number of user accounts the requests involved and how many of those accounts belonged to EU data subjects; the types of requests the company received (e.g., subpoenas or search warrants); the records the company produced, and in which cases those records concerned EU data subjects; the bases for the requests (e.g., were they made pursuant to government surveillance programs that provide data subjects with a right to a remedy in the event their rights are violated, or pursuant to, e.g., E.O. 12333, which provides no such remedy); and whether, to the company’s knowledge, EU data subjects whose information was shared ever contended that their rights had been violated.  The more information a company has to show that it has not provided information to U.S. law enforcement pursuant to surveillance programs that do not offer EU data subjects a remedy in the event their rights are violated, the safer footing the company should be on going forward with respect to its EU-US data transfer practices.
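
For illustration only, the TypeScript sketch below shows one way a company might structure an internal log of such law enforcement requests so that this information is readily available.  The field names, categories, and format are our own assumptions for demonstration purposes; they are not drawn from any statute, regulation, or court decision.

    // Illustrative log entry for a U.S. law enforcement information request (TypeScript).
    // Field names and categories are assumptions for demonstration only.

    type RequestType = "subpoena" | "search warrant" | "court order" | "other";
    type LegalBasis = "FISA" | "E.O. 12333" | "other surveillance program" | "non-surveillance";

    interface LawEnforcementRequest {
      receivedDate: string;           // ISO date the request was received
      requestType: RequestType;       // e.g., subpoena or search warrant
      legalBasis: LegalBasis;         // authority or program cited for the request
      accountsRequested: number;      // number of user accounts the request involved
      euDataSubjectAccounts: number;  // how many of those accounts belong to EU data subjects
      recordsProduced: boolean;       // whether the company produced records in response
      redressAvailable: boolean;      // whether the cited program offers data subjects a remedy
      complaintsReceived: boolean;    // whether any affected EU data subject alleged a rights violation
      notes?: string;
    }

    // Example entry (entirely fictional data):
    const exampleRequest: LawEnforcementRequest = {
      receivedDate: "2020-03-15",
      requestType: "subpoena",
      legalBasis: "non-surveillance",
      accountsRequested: 2,
      euDataSubjectAccounts: 0,
      recordsProduced: true,
      redressAvailable: true,
      complaintsReceived: false,
    };

Keeping records in a consistent structure like this can make it easier to demonstrate, if asked, that data has not been produced under surveillance programs that afford EU data subjects no remedy.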

For companies that have not received requests for information from U.S. law enforcement pursuant to surveillance programs, the path forward has more options.  While Privacy Shield has been struck down for all companies, it is likely that a new or revised framework will be designed and an adequacy decision will be sought with respect to it (just as happened with Privacy Shield when Safe Harbor was struck down).  In the interim, it is prudent for these companies to consider alternative data transfer frameworks (such as SCCs), and, going forward, to take a “belt and suspenders” approach so that the business does not hang its hat on a single framework for cross-border data transfers (a lesson Facebook and numerous other companies learned, which led them to rely on both Privacy Shield and SCCs).  Companies should also take a good look at the data they are processing, particularly with respect to EU data subjects, and ask whether it is even necessary, and whether processing in the U.S. is necessary.  In some cases the answer may be “yes,” but the more a company can practice data minimization – particularly when cross-border data transfers are at issue – the safer it may be.  Finally, just because your company has not yet received a request from U.S. law enforcement pursuant to a surveillance program does not mean you never will—particularly in certain industries, such as tech and telecommunications.  You should plan for such requests before they happen.

If you need any help evaluating your company’s risks in view of the Schrems II decision, or determining best practices for going forward, please contact us at privacy@rothwellfigg.com.

Rothwell Figg attorneys Martin M. Zoltick and Jenny L. Colgate published a chapter titled “Privacy, Data Protection, and Cybersecurity: A State-Law Analysis” in the International Comparative Legal Guide to: Data Protection 2020, published by Global Legal Group Ltd.

While some countries have enacted comprehensive privacy and data protection laws, like the EU’s General Data Protection Regulation (GDPR), the United States does not have a single, comprehensive federal law regulating the collection, use, and disclosure of personal information. Instead, U.S. privacy and data protection legislation consists of a patchwork of federal and state laws and regulations – hundreds of them! The chapter in the Guide aims to identify and provide some guidance regarding the state privacy and data protection laws, broken into three sections: 1) technology-specific laws; 2) industry-specific laws; and 3) generally applicable laws.

To read the article in its entirety, access the article on the ICLG website here: https://iclg.com/practice-areas/data-protection-laws-and-regulations/2-privacy-data-protection-and-cybersecurity-a-state-law-analysis.

As of July 1, 2020, the California Attorney General began enforcing the California Consumer Privacy Act (CCPA). While many details about CCPA enforcement remain uncertain, many states have enacted or will enact their own privacy laws. Businesses clearly must wrestle with this mosaic of new and emerging privacy restrictions.  Some industries have explored legal challenges to these statutes. One of these legal challenges, against a new Maine privacy law, was just defeated.

In February 2020, lobby groups representing the broadband industry sued the state of Maine under the theory that the Maine privacy law violates their First Amendment protections on commercial speech. The Maine privacy law in question prohibits ISPs from using, disclosing, or selling browsing history and other personal information without customers’ opt-in consent. The plaintiffs’ First Amendment argument draws from the theory that a company’s sale of private individuals’ browsing history constitutes a First Amendment-protected form of “commercial speech.”  The lawsuit’s second theory flows from the history of these types of browser privacy protections. Similar federal protections were rolled back by the President and Congress in 2017. After the federal roll-back, Maine legislators decided to revive these protections for their residents. However, the lawsuit argues that the federal deregulatory action had the effect of preempting and barring any supplementary state law protections.

In a July 7, 2020 order, Judge Lance Walker agreed in general with the characterization of the browser history sales as “commercial speech,” but held that “intermediate scrutiny” of the privacy law, not “strict scrutiny,” governed judicial review of this law.  Against that legal backdrop, Judge Walker held that the law is not unconstitutional on its face and declared the preemption arguments “dubious.” While Judge Walker allowed the case to continue, the decision dealt a major blow to the theories underpinning the case.

The legal fight against privacy regulations is far from over.  The Maine law addresses only a discrete privacy issue. The CCPA sprawls by comparison and will face many legal challenges, especially now that enforcement activity has begun.  The CCPA and other similar privacy laws will likely face similar First Amendment and preemption claims, in addition to a few other interesting wrinkles. The fight will continue.

Today the U.S. Supreme Court found in Barr v. American Association of Political Consultants, Inc. that the federal debt collection exemption to the Telephone Consumer Protection Act’s general prohibition on autodialed calls violates the First Amendment.  The Supreme Court held that the exemption was a content-based restriction on speech because it favors speech made for the purpose of collecting government debt over political and other speech.  Such content-based restrictions are subject to the “strict scrutiny” standard, which the government conceded it could not satisfy.

Rather than strike down the TCPA in its entirety, as some advocates have proposed, the Supreme Court held that the appropriate remedy was to sever the provision from the statute.  Justice Kavanaugh wrote the majority opinion, noting that “Americans passionately disagree about many things.  But they are largely united in their disdain for robocalls.”  The decision further notes that the Federal Government received a staggering 3.7 million complaints about robocalls in 2019 alone.

Due to the increasing prevalence and sophistication of robocalls, industry has proposed a number of creative technical solutions to combat robocallers.  For example, there are bots that answer calls and use artificial-intelligence-generated speech designed to waste telemarketers’ time by keeping them on the line, voice biometric technology that automatically identifies synthesized speech, and services that will automatically gather a robocaller’s details and generate a demand letter and court documents for filing.

With today’s Supreme Court decision, TCPA lawsuits will likely remain one of the most frequently filed types of class actions in courts across the country, especially since private parties can sue to recover up to $1,500 per violation or three times their actual monetary losses.

The California Consumer Privacy Act (CCPA) went into effect on January 1, 2020, and enforcement begins tomorrow, July 1, 2020.  Is your privacy policy compliant?  Here are a few quick questions that may help you determine the answer –

  • Does your privacy policy have a “last updated” date that is less than a year old?
  • Do you fully identify the personal information that your company collects?
  • Do you fully explain the purposes for which you use the collected personal information?
  • Do you share or sell the information, and if so, do you explain to whom and why?
  • Do you identify the rights individuals have with respect to their personal data, including:
    • the right to erase or delete all or some of one’s personal data;
    • the right to a copy of one’s personal data, including in machine readable form;
    • the right to change, update, or fix one’s personal data if it is inaccurate;
    • the right to require that you stop using all or some of one’s personal data (where you have no legal right to keep using it), or to limit your use of one’s personal data; and
    • the right to opt out of the sale of one’s personal information.
  • Do you provide at least two methods for individuals to submit requests for information, including at least a toll-free number (or an email address if your business operates exclusively online)?

For more information, see Rothwell Figg’s Privacy, Data Protection, and Cybersecurity Page, including our CCPA Compliance Guide, or contact us directly at privacy@rothwellfigg.com.

The question is – do wiretapping statutes apply in cases where there is no traditional third-party interceptor?  And more practically speaking, how does an entity using plug-ins and cookies avoid liability under wiretapping statutes while there is so much uncertainty in the law?

We previously blogged about this issue in In re: Facebook, Inc. Internet Tracking Litigation (here).  We reported how: (i) the district court dismissed the plaintiffs’ action, which brought claims under, inter alia, the Electronic Communications Privacy Act (ECPA) and the California Invasion of Privacy Act (CIPA), pursuant to the “party” exception (i.e., there was no third-party intermediary); and (ii) the Ninth Circuit reversed on grounds that the “party” exception is inapplicable where the sender is unaware of the transmission.  We also explained how the result of this Ninth Circuit decision was a split between circuit courts on the applicability of wiretapping statutes where a sender’s own computer transmits messages, and also a split within the Ninth Circuit.  And at the time we blogged, Facebook had filed a motion for rehearing before the Ninth Circuit.

Earlier this week, the Ninth Circuit denied Facebook’s motion for rehearing, thereby solidifying the aforementioned splits.  While according to public sources Facebook has thus far declined to comment on its next steps, it seems likely that Facebook may file a petition for a writ of certiorari with the Supreme Court.

Unless and until the Supreme Court clarifies the scope of the Wiretap Act, those using third-party cookies or plug-ins to track users’ Internet activity would be wise to (1) review their disclosures and ensure that they provide detailed information about which third-party plug-ins and cookies the site uses, and exactly when and how they work (while being careful to ensure that, at the same time, they do not reveal corporate trade secrets or other confidential information); (2) review their consent procedures to ensure that affirmative consent to the use of the disclosed plug-ins and cookies is sought; and (3) review any contracts and terms of service with those third parties.

On the flip side, companies that offer plug-ins may want to contractually require website operators to provide detailed disclosures and seek affirmative consent from users before installing code, or alternatively, they may want to implement their own consent mechanisms for their plug-ins, such as a “2-click solution.”  With a 2-click solution, a user first clicks on a placeholder image (such as a “Like” button), and informed consent is then obtained directly from the user before the plug-in is installed (e.g., “By clicking ‘Like’ you install a plug-in from Company X, which will direct your browser to send a copy of the URL of the visited page to Company X”).
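
As a rough sketch of the mechanics described above, the TypeScript example below implements a simple 2-click flow for a hypothetical social plug-in: nothing from the third party loads until the user clicks a placeholder and then affirmatively confirms the disclosure.  The element IDs, script URL, and consent wording are illustrative assumptions, not any particular vendor’s API.

    // Minimal 2-click consent sketch (illustrative only).
    // First click: the user activates a placeholder button; no third-party code runs yet.
    // Second click (confirmation): only after consent is the vendor plug-in loaded.

    function loadVendorPlugin(container: HTMLElement): void {
      // Hypothetical loader: inject the vendor's script only after consent is given.
      const script = document.createElement("script");
      script.src = "https://plugins.example-vendor.com/like-button.js"; // assumed URL
      container.appendChild(script);
    }

    function initTwoClickButton(containerId: string): void {
      const container = document.getElementById(containerId);
      if (!container) return;

      const placeholder = document.createElement("button");
      placeholder.textContent = "Like (click to enable)";

      placeholder.addEventListener("click", () => {
        // Disclosure is shown before any data leaves the page.
        const consented = window.confirm(
          "By clicking OK you install a plug-in from Company X, which will direct " +
            "your browser to send a copy of the URL of the visited page to Company X. Proceed?"
        );
        if (consented) {
          container.removeChild(placeholder);
          loadVendorPlugin(container); // third-party code runs only after affirmative consent
        }
      });

      container.appendChild(placeholder);
    }

    initTwoClickButton("like-button-container");

The key design choice is that the disclosure and consent occur before any request is made to the plug-in provider, so the user’s browser transmits nothing to the third party absent an affirmative act.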

If you have any privacy questions related to your company’s use of plug-ins or cookies, please contact us at privacy@rothwellfigg.com.

Articles summarizing the CCPA often state that it applies to for-profit businesses doing business in California that satisfy certain criteria, but they frequently fail to mention that the CCPA does apply to some non-profits.

The CCPA defines “business” as “a sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity that is organized or operated for the profit or financial benefit of its shareholders or other owners, that collects consumers’ personal information, or on the behalf of which such information is collected and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information, that does business in the State of California, and that satisfies one or more of the following thresholds…[annual gross revenue in excess of $25M; buys/receives/sells/shares personal information of 50K or more consumers; or derives 50 percent or more of its annual revenues from selling consumers’ personal information].”  See Cal. Civ. Code § 1798.140(c)(1).

However, the statute does not stop there.  It goes on to explain instances when a non-profit could be subject to CCPA.  Specifically, if the non-profit entity controls or is controlled by an entity that qualifies as a “business” under CCPA (i.e., meets the above criteria), and shares common branding with that business, then the non-profit is subject to CCPA.

“Control” or “controlled” is defined broadly by CCPA as “ownership of, or the power to vote, more than 50 percent of the outstanding shares of any class of voting security of a business; control in any manner over the election of a majority of the directors, or of individuals exercising similar functions; or the power to exercise controlling influence over the management of a company” (emphasis added).  What “the power to exercise controlling influence over the management of a company” means is yet to be determined.

“Common branding” is defined by CCPA as “a shared name, servicemark, or trademark.”
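
To illustrate how these definitions fit together, the simplified TypeScript sketch below encodes the statutory thresholds and the non-profit control/common-branding pathway described above as a basic applicability check.  The interface fields and function names are our own illustrative assumptions, and the sketch omits several statutory nuances; it is a simplification for discussion, not legal advice.

    // Simplified CCPA applicability sketch (illustrative only; not legal advice).

    interface Organization {
      isForProfit: boolean;
      doesBusinessInCalifornia: boolean;
      annualGrossRevenueUsd: number;
      consumersWhosePersonalInfoHandled: number;      // bought, received, sold, or shared annually
      shareOfRevenueFromSellingPersonalInfo: number;  // 0.0 to 1.0
      // Non-profit pathway: control relationship with, and common branding with, a covered business.
      controlsOrIsControlledByCoveredBusiness?: boolean;
      sharesCommonBrandingWithCoveredBusiness?: boolean;
    }

    // The three alternative thresholds of Cal. Civ. Code § 1798.140(c)(1), simplified.
    function meetsThresholds(org: Organization): boolean {
      return (
        org.annualGrossRevenueUsd > 25_000_000 ||
        org.consumersWhosePersonalInfoHandled >= 50_000 ||
        org.shareOfRevenueFromSellingPersonalInfo >= 0.5
      );
    }

    function isSubjectToCcpa(org: Organization): boolean {
      // A for-profit entity doing business in California that meets a threshold is a "business."
      if (org.isForProfit && org.doesBusinessInCalifornia && meetsThresholds(org)) {
        return true;
      }
      // A non-profit may still be covered if it controls or is controlled by a covered business
      // and shares common branding with that business (simplified).
      return Boolean(
        org.controlsOrIsControlledByCoveredBusiness &&
          org.sharesCommonBrandingWithCoveredBusiness
      );
    }

    // Example: a non-profit that meets none of the thresholds itself, but is controlled by
    // and co-branded with a covered business, is covered under this simplified test.
    console.log(
      isSubjectToCcpa({
        isForProfit: false,
        doesBusinessInCalifornia: true,
        annualGrossRevenueUsd: 1_000_000,
        consumersWhosePersonalInfoHandled: 10_000,
        shareOfRevenueFromSellingPersonalInfo: 0,
        controlsOrIsControlledByCoveredBusiness: true,
        sharesCommonBrandingWithCoveredBusiness: true,
      })
    ); // prints: true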

If you need help determining whether your non-profit is exempt from CCPA, please contact us at privacy@rothwellfigg.com.

On May 28, President Donald Trump issued an executive order on preventing online censorship, targeting the provision of the Communications Decency Act, or CDA, titled “Protection for good Samaritan blocking and screening of offensive material.”[1]

While there remain serious doubts as to the legality of the order, including the extent to which it is a constitutionally impermissible viewpoint-based regulation of speech, the order makes it clear that the Trump administration will be urging, or even directing, regulators to scrutinize online speech with a view toward attaching consequences to such speech in circumstances in which regulators have, in the past, treated such speech as immune.

For this reason, no matter what the order’s legal merits may prove to be, we recommend that companies operating online platforms take this opportunity to review their terms of service agreements and content moderation guidelines. In addition to discussing some areas of focus, we also offer some practical tips for reducing litigation risks.

The CDA Safe Harbor Provisions

The order purports to circumscribe an important but rarely discussed law known as Title 47 of the U.S. Code, Section 230(c).

This law creates safe harbors that protect most online platforms from liability for the words and other communications of third parties who use those online platforms. The safe harbor provisions of Section 230(c) set forth two protections: (1) a publisher protection that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,”[2] and (2) a good Samaritan blocking protection that no provider or user of an interactive computer service shall be held liable on account of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”[3]

Courts have historically interpreted the publisher provision as shielding service providers from liability for all publication decisions, such as editing, removing or posting information, with respect to content entirely created by third parties.[4]

With decisions issued this year, courts continue to uphold that, with limited exceptions,[5] the publisher provision broadly shields websites and other online computer-based services from liability as a publisher for material posted by others on the service, even when such third-party content is directed to illicit drug sales or promotes attacks committed by a terrorist organization.[6]

Thus, the publisher provision remains a vital shield for online platforms that choose to do little about the third-party content that they host.

The good Samaritan provision provides an additional shield from liability for any provider of an interactive computer service that restricts access to content the provider considers obscene or otherwise objectionable.[7]

While Congress’ motivating concern for the good Samaritan provision was allowing websites and service operators to restrict access to pornography, the language of the statute is much broader, covering any “excessively violent, harassing or otherwise objectionable” content.[8]

But websites and service operators do not have unfettered discretion to declare online content objectionable, and courts have held that, for example, blocking and filtering decisions driven by anticompetitive animus are not entitled to immunity.[9] Moreover, platforms have an affirmative obligation to block content promoting sex trafficking or terrorism.[10]

Courts over the years have refused to immunize online-based intermediaries under certain scenarios.[11] As stated by one court, “[t]he Communications Decency Act was not meant to create a lawless no-man’s-land on the Internet.”[12] Courts have, for example, held interactive service providers liable where their own acts contribute to or induce third parties to express illegal preferences[13] or engage in illegal activities.[14]

The Order

The order came days after Twitter flagged one of Trump’s tweets as containing misinformation under Twitter’s fact-checking policy. On its surface, Twitter’s flagging appears to fall within the good Samaritan safe harbor provision of Section 230(c). However, the order states that “[o]nline platforms are engaging in selective censorship that is harming our national discourse,” and references Twitter’s decision to “place a warning label on certain tweets that clearly reflects political bias.”

To address these perceived biases, the order directs the Commerce Department to petition the Federal Communications Commission to reexamine the scope of the CDA’s safe harbor provisions, including the interactions between the publisher and good Samaritan provisions as well as the conditions under which an action restricting access to or availability of material is taken in good faith.[15]

The order has therefore cast aspects of Section 230(c) protection into doubt — at least in the context of administrative action by executive agencies. Putting aside the high likelihood that the order will be given no legal weight by the courts, there are pragmatic steps that online platforms can take to reduce their Section 230(c) litigation risk.

Areas in Which to Reduce Risk

In view of existing and potential limitations in scope of the CDA’s safe harbor provisions, we offer a few best practices with respect to terms of service agreements to keep in mind in order to reduce risks from litigation or potentially adverse administrative actions.

Clearly distinguish third-party content from the service provider’s content.

The publisher safe harbor provision only protects service providers against claims arising from the publication of content provided by another information content provider.

The terms of service should clearly define information owned and created by a service provider, such as the code, application programming interfaces, and other features and intellectual property owned by the service provider, in addition to information owned by third parties such as users and advertisers.

In publishing or republishing third-party content on a website or app, service providers should be careful that their service at most merely transforms — rather than augments or modifies — such third-party content for publication on an app or service. The more the lines are blurred between service provider content and user-created content, the more risk service providers face of falling outside the scope of Section 230(c)(1).

Clearly disclose your online platform’s right to remove or restrict access to third-party content.

A service provider’s terms of service should document its right to remove or restrict access to content that may be in violation of the terms of service or any applicable community guidelines.

Consider building in consent to your moderation as a stand-alone aspect of your terms and conditions.

Most people dislike incivility, violence and hate on the online platforms that they frequent. Rather than merely including a warning that you retain the right to moderate and ban certain types of speech, consider presenting this promise to establish a walled garden of civility as a separate feature of your online platform. This will likely reduce risk even beyond changes to the terms and conditions.

Update and adapt internal content moderation policies.

Technological developments will continue to pose new challenges to service operators, whether it is new and more harmful types of malicious code or deep-fake content generated by artificial intelligence technology. In order to ease the burdens of content moderation, consider automated means of screening content and enlisting users to help in the moderation process.
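
As a toy illustration of automated screening at its simplest, the TypeScript sketch below flags posts containing terms from a deny-list so they can be held for human review.  The term list, function names, and workflow are illustrative assumptions only; production moderation systems rely on far more sophisticated classifiers, context analysis, and human reviewers.

    // Toy content-screening sketch (illustrative only; not a production moderation system).

    interface ScreeningResult {
      allowed: boolean;
      matchedTerms: string[];
    }

    // Illustrative deny-list; a real system would maintain this per its community guidelines.
    const denyList: string[] = ["examplethreat", "exampleslur"];

    function screenPost(text: string): ScreeningResult {
      const lower = text.toLowerCase();
      const matchedTerms = denyList.filter((term) => lower.includes(term));
      return { allowed: matchedTerms.length === 0, matchedTerms };
    }

    // Flagged posts can be queued for human review rather than removed automatically,
    // and users can be enlisted to report content that automated screening misses.
    const result = screenPost("An example post for review");
    console.log(result.allowed ? "publish" : "hold for review: " + result.matchedTerms.join(", "));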

Some content moderation and take-downs will be necessary given the existing limitations in the scope of Section 230, but note that courts have held that notice of the illicit nature of third-party content is insufficient to make such content the service provider’s own speech.[16]

Make certain content standards publicly available to set expectations about acceptable postings.

Seizing this opportunity can serve to undercut complaints about partiality. For example, if you make it clear that all uses of a certain expletive will result in removal, it will be harder for a complainant to articulate bias. Bias is not, in and of itself, a Section 230(c) factor. However, because of the order, it would be wise to at least address this risk vector short of litigating Section 230(c) requirements.

Be mindful of industry regulations applicable to your service.

Section 230(c) has several carve-outs, including for federal criminal law, intellectual property law, and electronic communications privacy law.

One court refused to immunize an entity providing services in the home rental space where its service allowed users to target prospective roommates based on race in violation of anti-discrimination laws.[17] Another entity faced potential liability where its advertising platform allowed landlords and real estate brokers to exclude persons of color, families with children, women, people with disabilities and other protected groups from receiving housing ads.[18]

Finally, remember to encourage civil discussion and debate. After all, the remedy for bad speech is more speech, not enforced silence. And be prepared to challenge the order in court in the event that any agency is foolish enough to seek to enforce it.

[1] 47 U.S.C. § 230.

[2] 47 U.S.C. § 230(c)(1).

[3] 47 U.S.C. § 230(c)(2)(A).

[4] See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009), as amended (Sept. 28, 2009).

[5] Section 230 has a few narrow exceptions, including liability for federal criminal law, intellectual property law, and the Electronic Communications Privacy Act. Additionally, in 2018, Congress passed the Fight Online Sex Trafficking Act (“FOSTA”), codified at 47 U.S.C. § 230(e), providing that Section 230 has “no effect on sex trafficking law” and shall not “be construed to impair or limit” civil claims brought under Section 1595 or criminal charges brought under state law if the underlying conduct would constitute a violation of Sections 1591 or 2421A. Woodhull Freedom Found. v. United States, No. 18-5298, 2020 WL 398625 (D.C. Cir. Jan. 24, 2020).

[6] Knight First Amendment Inst. at Columbia Univ. v. Trump, 953 F.3d 216, 222 (2d Cir. 2020) (noting “Section 230 of the Communications Decency Act explicitly allows social media websites (among others) to filter and censor content posted on their platforms without thereby becoming a ‘publisher’”); Sen v. Amazon.com, Inc., 793 F. App’x 626 (9th Cir. 2020) (finding “district court properly granted summary judgment on Sen’s claim for tortious interference with prospective and actual business relations, and interference with an economic advantage, based on the third-party review posted on defendant’s website”); Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), cert. denied, No. 19-849, 2020 WL 2515458 (U.S. May 18, 2020) (finding site operator immune under Section 230(c)(1) where service allowed users to register with site anonymously and recommended groups to users, thereby facilitating a fatal drug transaction); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), cert. denied, No. 19-859, 2020 WL 2515485 (U.S. May 18, 2020) (finding Facebook immune under Section 230 against anti-terrorism claims that Hamas, a U.S. designated foreign terrorist organization, used Facebook to post content that encouraged terrorist attacks in Israel); Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1265 (D.C. Cir. 2019) (finding Google immune from allegations that it “publish[es] the content of scam locksmiths’ websites, translat[es] street-address and area-code information on those websites into map pinpoints, and allegedly publish[es] the defendants’ own original content”).

[7] For example, Section 230(c)(2)(A) could apply to those who developed, even in part, the content in issue, or to claims arising not from publishing or speaking, but from actions taken to restrict access to obscene or objectionable content. See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009), as amended (Sept. 28, 2009).

[8] Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1047 (9th Cir. 2019).

[9] Id.

[10] Section 230 was amended by the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) in 2018 to require the removal of material violating federal and state sex trafficking laws.

[11] Jeff Kosseff, “The Gradual Erosion of the Law That Shaped the Internet: Section 230’s Evolution over Two Decades,” 18 Colum. Sci. & Tech. L. Rev. 1, 33–34 (2016).

[12] Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1164 (9th Cir. 2008).

[13] Id.

[14] J.S. v. Vill. Voice Media Holdings, L.L.C., 184 Wash. 2d 95, 103, 359 P.3d 714, 718 (2015) (addressing need “to ascertain whether in fact Backpage designed its posting rules to induce sex trafficking to determine whether Backpage is subject to suit under the CDA”).

[15] The order also directs the Federal Trade Commission to evaluate potential anti-conservative bias on social media platforms under its Section 5 authority.

In addition, the order directs that each executive department and agency review and report its Federal spending on advertising and marketing paid to online platforms, and that the Department of Justice review any viewpoint-based speech restrictions imposed by each online platform identified in the report and “assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.”

This portion of the order is, in our view, particularly vulnerable to invalidation under the First Amendment.

[16] Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1265 (D.C. Cir. 2019); Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 420 (1st Cir. 2007).

[17] See, e.g., Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1163 (9th Cir. 2008).

[18] Nat’l Fair Housing Alliance v. Facebook, Inc., No. 18-cv-2689 (S.D.N.Y., March 2018) (Dkt. 1) (Complaint).

This article was originally published in Law360’s Expert Analysis section on June 22, 2020. To read the article on Law360’s site, please visit: https://www.law360.com/articles/1284674/risk-mitigation-for-social-media-cos-in-light-of-trump-order.

In the United States, transparency is the name of the game in privacy law.  (This is in contrast to the GDPR, which is focused on creating a “privacy by design” legal framework.)  Consistent with this trend in U.S. privacy law is New York’s Public Oversight of Surveillance Technology Act (POST Act), which is expected to become law in the next month.

While the POST Act has been pending for years, the bill gained momentum in recent weeks amid the nationwide protests following several police killings of Black people, including the killing of George Floyd.  The bill requires the NYPD to reveal details on the surveillance tools it uses to monitor people, including, inter alia, facial recognition software and cellphone trackers.  Importantly, the POST Act does not ban the use of any surveillance technology (unlike laws and corporate policies that have gained attention in recent weeks, which have banned the use of facial recognition software).