It has been nearly a year and a half since the Schrems II decision was issued in July 2020, invalidating the European Commission’s adequacy decision for the EU-US Privacy Shield Framework.  As a result, companies were forced to reexamine their transfers of personal information out of the EU and the safeguards they rely on for those cross-border data transfers.  Some companies, instead of addressing the safeguards they had in place, took a hard look at the data they were transferring.  Did they need to transfer it out of the EU?  Was it even personal information?  This latter issue was recently addressed by Austria’s data protection authority, the GDPR enforcer of one of the EU’s 27 member states.  While Google argued that the data at issue was not personal information, the regulator disagreed.  It remains to be seen whether other data regulators will issue similar decisions, and if so, what the fate of US technology companies in Europe will be.

In a recent decision, Austria’s data regulator held that a website’s use of Google Analytics violates the GDPR because the service uses IP addresses and cookie-based identifiers to track information about website visitors, such as the pages read, how long a visitor stays on the site, and details about the visitor’s device.  The Austrian decision held that IP addresses and cookie identifiers are personal information.  Thus, when information tied to these identifiers passes through Google’s servers in the United States, the GDPR is implicated.  Specifically, the GDPR requires that transfers of personal data outside the EU be covered by appropriate safeguards.  The problem is that, after Schrems II, (1) there is no longer an EU adequacy decision covering US data transfers, and (2) it is unclear whether other safeguard measures, such as standard contractual clauses (SCCs) or binding corporate rules (BCRs), are sufficient in view of US surveillance practices under Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order 12333.  In other words, there may be no appropriate safeguards that US technology companies can implement to allow for GDPR-compliant cross-border data transfers.
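For a concrete sense of the identifiers at issue, here is a minimal, purely illustrative TypeScript sketch that reads the client ID Google Analytics typically stores in a first-party “_ga” cookie (the cookie name and common format reflect Google’s public documentation; the helper functions here are hypothetical).  It is this kind of stable, per-browser value, combined with the visitor’s IP address, that the Austrian regulator treated as personal data:

```typescript
// Illustrative only: a hypothetical helper showing the kind of identifier at issue.
// Google Analytics typically stores a client ID in a first-party cookie named "_ga"
// (commonly formatted like "GA1.2.1234567890.1609459200"); that ID, together with
// the visitor's IP address, accompanies the hits sent to the analytics servers.

function getCookie(name: string): string | undefined {
  // document.cookie is a single "k=v; k2=v2" string; split it and find our key.
  return document.cookie
    .split("; ")
    .map((pair) => pair.split("="))
    .find(([key]) => key === name)?.[1];
}

function getAnalyticsClientId(): string | undefined {
  const raw = getCookie("_ga"); // e.g. "GA1.2.1234567890.1609459200"
  if (!raw) return undefined;
  // The last two dot-separated fields (a random number plus a first-visit
  // timestamp) form the persistent client ID used to recognize a returning browser.
  const parts = raw.split(".");
  return parts.length >= 4 ? parts.slice(-2).join(".") : undefined;
}

// A stable value like "1234567890.1609459200" is exactly the sort of "online
// identifier" the Austrian regulator treated as personal data.
console.log("GA client ID:", getAnalyticsClientId());
```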

The recent Austrian decision observes that “US intelligence services use certain online identifiers (such as IP address or unique identification numbers) as a starting point for the surveillance of individuals.”  Google argued that it had implemented measures to protect the data in the US, but these were found insufficient under the GDPR.  Indeed, the very “IDs” that Google pointed to as purportedly pseudonymizing safeguards were found to make users identifiable and addressable:

“…the use of IP addresses, cookie IDs, advertising IDs, unique user IDs or other identifiers to (re)identify users does not constitute an appropriate safeguard to comply with data protection principles or to safeguard the rights of data subjects.  This is because, unlike in cases where data is pseudonymized in order to disguise or delete the identifying data so that the data subjects can no longer be addressed, these IDs or identifiers are used to make the individuals distinguishable and addressable.  Consequently, there is no protective effect.  They are therefore not pseudonymizations within the meaning of Recital 28, which reduce the risks for the data subjects and assist data controllers and processors in complying with their data protection obligations.”

It remains to be seen whether other EU regulators will follow suit and hold that the GDPR is violated where European websites use Google Analytics or similar US technology services.  It will also be interesting to see whether European companies begin shifting their adtech and analytics services to European providers.

France recently fined Alphabet Inc.’s Google $169 million and Meta Platforms’ Facebook $67 million on the grounds that the companies violated the EU e-Privacy Directive (aka the EU “Cookie Law”) by requiring too many “clicks” for users to reject cookies.  The result was that many users simply accepted the cookies, allowing the identifiers to track their data.  The French regulator gave the companies three months to come up with a solution that makes it as easy to reject cookies as it is to accept them.  This is an important message for all companies as they review their cookie compliance in 2022: make it as easy to refuse a cookie as it is to accept one.

It is interesting to note that these recent fines were issued not under the GDPR but under the older e-Privacy Directive, which has been in effect since 2002.  Unlike the GDPR, whose “one-stop-shop” mechanism generally channels enforcement through the regulator of the country where a company has its main European establishment, the e-Privacy Directive lets a regulator issue fines against any company that does business in its jurisdiction.

The EU Cookie Law (which is not actually a law, but a directive) came into effect in 2002 and was amended in 2009 (with the amendment effective since 2011).  The directive regulates the processing of personal data in the electronic communications sector and, in particular, conditions the use of cookies on websites upon the prior consent of users.  Unless cookies are strictly necessary for the most basic functions of a website (e.g., cookies that manage shopping cart contents), users must be given clear and comprehensive information about the purposes of the processing, storage, retention, and access, and they must be able both to give their consent and to refuse it.
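To make the compliance point concrete, below is a minimal TypeScript sketch (with hypothetical function names, a hypothetical storage key, and a placeholder script URL) of the pattern the directive and the CNIL decisions point toward: no non-essential cookies or tags load until the visitor affirmatively opts in, and a one-click “Reject all” is just as easy as “Accept all”:

```typescript
// Illustrative sketch of "prior consent" gating for non-essential cookies.
// Nothing analytics- or ad-related runs until the visitor clicks "Accept all";
// "Reject all" is a single click of equal prominence.

type ConsentChoice = "accepted" | "rejected";

const CONSENT_KEY = "cookie-consent"; // hypothetical storage key

function rememberChoice(choice: ConsentChoice): void {
  localStorage.setItem(CONSENT_KEY, choice);
}

function loadNonEssentialScripts(): void {
  // Only now inject analytics/advertising tags (placeholder URL).
  const script = document.createElement("script");
  script.src = "https://example.com/analytics.js";
  document.head.appendChild(script);
}

function showBanner(): void {
  if (localStorage.getItem(CONSENT_KEY)) return; // choice already made

  const banner = document.createElement("div");
  banner.textContent = "We use optional cookies for analytics. ";

  const accept = document.createElement("button");
  accept.textContent = "Accept all";
  accept.onclick = () => {
    rememberChoice("accepted");
    loadNonEssentialScripts();
    banner.remove();
  };

  // One click to refuse, presented with the same prominence as "Accept all".
  const reject = document.createElement("button");
  reject.textContent = "Reject all";
  reject.onclick = () => {
    rememberChoice("rejected");
    banner.remove();
  };

  banner.append(accept, reject);
  document.body.appendChild(banner);
}

showBanner();
```

The design point is simply that refusing consent takes no more effort than giving it, and that nothing optional fires before the choice is made.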

The U.K. released a National AI Strategy with a ten-year plan to make Britain a global AI superpower in our new age of artificial intelligence.  The Strategy intends to “signal to the world [the U.K.’s] intention to build the most pro-innovation regulatory environment in the world; to drive prosperity across the UK and ensure everyone can benefit from AI; and to apply AI to help solve global challenges like climate change.”

As part of its early key actions, the U.K. intends to launch before the end of the year a consultation through the U.K. Intellectual Property Office (IPO) on copyright protection for computer-generated works, on text and data mining, and on patents for AI-devised inventions.  (N.b., the U.K. Court of Appeal recently ruled 2-1 that an AI entity cannot legally be named as an inventor on a patent.)  Additionally, the U.K. is engaged in an ongoing consultation on the U.K.’s data protection regime.  The data consultation highlights that, as a result of Brexit and the U.K.’s departure from the European Union, “the UK can reshape its approach to regulation and seize opportunities with its new regulatory freedoms.”  For example, Article 22 of the EU’s data protection regime, which provides protections against automated decision-making, may be on the chopping block for the U.K.’s data protection regime.

The U.K.’s approach to data in particular underscores the regulatory balance needed to ensure that data is readily accessible for a thriving AI ecosystem but is not used in a manner that causes harm to individuals and society.  For a deeper discussion on all things data with a focus on AI and ML-enabled technology, the American Intellectual Property Law Association is hosting a virtual Data Roadshow on September 30, 2021.  Leading attorneys from industry, private practice, and the public sector will provide insight and practice tips for navigating this evolving and exciting area of law.

On August 20, 2021, China passed its first general data protection law, the Personal Information Protection Law (“PIPL”).  The law takes effect on November 1, 2021 (two months away), and it applies to both (1) in-country processing of the personal information of natural persons; and (2) out-of-country processing of the personal information of natural persons who are in China, if such processing is (a) for the purpose of providing products or services to those people; (b) to analyze or evaluate the behavior of those people; or (c) carried out under other circumstances prescribed by laws and administrative regulations.  Thus, the PIPL will become one more thing that companies have to consider in weighing questions of where to store which user data.

While much of the PIPL is similar to the GDPR – such as the definitions of “personal information” and “processing,” the requirement of a legal basis for processing personal information, and the provision of various individual rights with respect to personal information (portability, correction and deletion, restriction and prohibition of processing, and so on) – there are differences, and companies to whom the law applies should review their policies and practices carefully to ensure compliance.

The PIPL stands out from other general data protection laws in two respects: its data localization requirement and its cross-border transfer requirements.

First, the law provides that critical information infrastructure (“CII”) operators (in sectors such as government services, utilities, finance, and public health) and entities processing a large amount of personal information must store personal information within the territory of mainland China.  Of note, every company operating in China would be well advised to conduct a self-assessment to determine whether it may be deemed a CII operator.  In order for such information to be transferred outside of China, the transfer must pass a government-administered security assessment.

Second, cross-border transfer of personal information is allowed (for companies that are neither CII operators nor large-volume processors) if the processor meets one of the following conditions: (i) it passes a security assessment organized by the Cyberspace Administration of China (CAC); (ii) it is certified for personal information protection by a specialized agency accredited by the CAC; or (iii) it enters into a contract with the overseas recipient under the standard contract formulated by the CAC.  (Of note, it appears that despite the law going into effect in two months, no “standard contract” has been published yet.)

Penalties for violations of PIPL include, inter alia, an administrative fine of up to RMB 50 million or 5% of the processor’s turnover in the last year (it is unclear if this refers to local turnover or global turnover).

At this point you have probably heard about one of the many incidents in which an AI-enabled system discriminated against certain populations in settings including healthcare, law enforcement, and hiring, among others.  In response to this problem, the National Institute of Standards and Technology (NIST) recently proposed a strategy for identifying and managing bias in AI, with an emphasis on biases that can lead to harmful societal outcomes.  The NIST authors summarize:

“[T]here are many reasons for potential public distrust of AI related to bias in systems. These include:

  • The use of datasets and/or practices that are inherently biased and historically contribute to negative impacts
  • Automation based on these biases placed in settings that can affect people’s lives, with little to no testing or gatekeeping
  • Deployment of technology that is either not fully tested, potentially oversold, or based on questionable or non-existent science causing harmful and biased outcomes.”

As a starting place, the NIST authors outline an approach for evaluating the presentation of bias in three stages modeled on the AI lifecycle: pre-design, design & development, and deployment.  In addition, NIST will host a variety of activities in 2021 and 2022 in each area of the core building blocks of trustworthy AI (accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security (resilience), and bias).  NIST is currently accepting public comment on the proposal until September 10, 2021.

Notably, the proposal points out that “most Americans are unaware when they are interacting with AI enabled tech but feel there needs to be a ‘higher ethical standard’ than with other forms of technologies,” which “mainly stems from the perceptions of fear of loss of control and privacy.”  From a regulatory perspective, there is currently no federal data protection law in the US that broadly mirrors Article 22 of Europe’s GDPR with respect to automated decision making – “the right to not be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”  But several U.S. jurisdictions have passed laws that more narrowly regulate AI applications with the potential to cause acute societal harms, such as the use of facial recognition technology in law enforcement or interviewing processes, and more regulation seems likely as (biased) AI-enabled technology continues to proliferate into more settings.

In Van Buren v. United States, the Supreme Court resolved a circuit split as to whether a provision of the Computer Fraud and Abuse Act (CFAA) applies only to those who obtain information to which their computer access does not extend, or more broadly also encompasses those who misuse access that they otherwise have. By way of background, the CFAA subjects to criminal liability anyone who “intentionally accesses a computer without authorization or exceeds authorized access,” and thereby obtains computer information. 18 U.S.C. § 1030(a)(2). The term “exceeds authorized access” is defined to mean “to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter.”  18 U.S.C. § 1030(e)(6).

The case involved a police sergeant who used his patrol-car computer to access a law enforcement database with his valid credentials in order to obtain license plate records in exchange for money. The sergeant’s use of the database violated the department’s policy against using the database for non-law-enforcement purposes, including personal use. At trial, the Government told the jury that the sergeant’s access of the database for non-law-enforcement purposes violated the CFAA because he used the computer system in a way his department’s policy prohibited. The jury convicted the sergeant, and the District Court sentenced him to 18 months in prison. The Eleventh Circuit affirmed, consistent with its precedent adopting the broader view of the CFAA.

The parties agreed that the sergeant accessed a computer with authorization when he used his valid credentials to log in to the law enforcement database, and that he obtained information when he acquired the license-plate records, but the dispute was whether the sergeant was “entitled so to obtain” the record. After analyzing the language of the statute and the policy behind the CFAA, the Court held that an individual “exceeds authorized access” under the CFAA when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him. Because the sergeant could use his credentials to obtain the license plate information, he did not exceed authorized access to the database under the terms of the CFAA.

In reaching its holding, the Court noted that if “the ‘exceeds authorized access’ clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals.” With the CFAA thus narrowed, the decision is a good reminder to ensure that the policies and agreements (including terms of use) that govern access to sensitive electronic resources are enforceable and are drafted to cover insider threats, prohibiting individuals who have access to a resource from using it in a damaging manner.

On April 22, 2021, the Supreme Court issued a unanimous decision finding that the FTC lacks authority to pursue equitable monetary relief in federal court under Section 13(b) of the Federal Trade Commission Act (the “FTCA”). The result means that defendant Scott Tucker does not have to pay $1.27 billion in restitution and disgorgement, notwithstanding that his payday loan business was found to constitute an unfair and deceptive practice.  The result also means that onlookers everywhere are scratching their heads thinking “what now?”  If the FTC is stripped of its authority to pursue equitable monetary relief, will companies engage in unfair and deceptive practices with abandon, knowing that the “worst case” scenario is an injunction (while they keep any ill-gotten profits earned in the interim)?

Not entirely and probably not for long.

First, the FTC was never the only enforcement mechanism for policing unfair competition and deceptive practices.  Other agencies and statutes remain available to pursue wrongdoers, and many of them allow for equitable monetary relief.  For example, other federal agencies have jurisdiction in certain situations (e.g., the Consumer Financial Protection Bureau).  Also, state attorneys general, armed with state unfair and deceptive acts and practices (UDAP) laws, often have broad jurisdiction and authority to pursue monetary relief.

Second, Sections 5 and 19 of the FTCA give district courts the authority to impose monetary penalties and award monetary relief where the FTC has issued cease and desist orders.  So, the FTC is not currently stripped of all authority to pursue equitable monetary relief in federal court – it just needs to issue a cease and desist order first.

Third, the FTC has been pressuring and continues to pressure Congress to amend Section 13(b) of the FTCA to broaden the scope of relief available.

Fourth, the FTC could promulgate more rules and strengthen its existing rules under its rulemaking process (Section 18 of the FTCA).  Last month, acting Chairwoman Slaughter announced the formation of a new, centralized rulemaking group in the General Counsel’s office.

In sum, the Supreme Court’s decision will undoubtedly have some effects on the policing of unfair and deceptive trade practices, but there are numerous processes in place to ensure that the system is not derailed, and it is likely that the FTC will have authority to pursue monetary relief in the future.

As part of its three-part series on the future of human-computer interaction (HCI), Facebook Reality Labs recently published a blog post describing a wrist-based wearable device that uses electromyography (EMG) to translate the electrical motor nerve signals traveling through the wrist to the hand into digital commands that can be used to control the functions of a device.  Initially, the EMG wristband will be used to provide a “click,” the equivalent of tapping a button, and will eventually progress to richer controls that can be used in Augmented Reality (AR) settings.  For example, in an AR application, users will be able to touch and move virtual user interfaces and objects, and control virtual objects at a distance like a superhero.  The wristband may further leverage haptic feedback to approximate certain sensations, such as pulling back the string of a bow in order to shoot an arrow in an AR environment.

One general promise of neural interfaces is to give humans more control over machines.  When coupled with AR glasses and a dynamic artificial intelligence system that learns to interpret input, the EMG wristband has the potential to become part of a solution that brings users to the center of an AR experience and frees them from the confines of more traditional input devices like a mouse, keyboard, and screen.  The research team further identifies privacy, security, and safety as fundamental research questions, arguing that HCI researchers “must ask how we can help people make informed decisions about their AR interaction experience,” i.e., “how do we enable people to create meaningful boundaries between themselves and their devices?”

For those of you wondering, the research team did confirm that the EMG interface is not “akin to mind reading”:

“Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and you choose to act on only some of them. When that happens, your brain sends signals to your hands and fingers telling them to move in specific ways in order to perform actions like typing and swiping. This is about decoding those signals at the wrist — the actions you’ve already decided to perform — and translating them into digital commands for your device. It’s a much faster way to act on the instructions that you already send to your device when you tap to select a song on your phone, click a mouse, or type on a keyboard today.”
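To illustrate the idea of decoding signals at the wrist into discrete commands, here is a toy TypeScript sketch.  It is not Facebook Reality Labs’ algorithm; the smoothing factor, threshold, and function names are invented for illustration.  It simply turns a burst of (simulated) muscle activity into a single “click” event:

```typescript
// Toy illustration of wrist-based "intent decoding": smooth the rectified EMG
// signal into an envelope and emit a discrete "click" when it crosses a threshold.
// This is not Facebook Reality Labs' algorithm; values and names are invented.

type Command = "click";

function decodeClicks(
  samples: number[],   // raw EMG samples (arbitrary units)
  threshold = 0.6,     // envelope level treated as an intentional action
  smoothing = 0.5      // exponential smoothing factor for the envelope
): Command[] {
  const commands: Command[] = [];
  let envelope = 0;
  let above = false;

  for (const sample of samples) {
    // Rectify and low-pass filter: a crude muscle-activation envelope.
    envelope = smoothing * envelope + (1 - smoothing) * Math.abs(sample);

    // A rising edge across the threshold counts as one discrete command.
    if (!above && envelope > threshold) {
      commands.push("click");
      above = true;
    } else if (above && envelope < threshold * 0.8) {
      above = false; // hysteresis so one gesture isn't counted twice
    }
  }
  return commands;
}

// Example: a burst of muscle activity in the middle of the stream yields a single click.
console.log(decodeClicks([0.1, 0.1, 2.0, 2.2, 2.1, 0.1, 0.1]));
```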

Yesterday, Virginia passed the Virginia Consumer Data Protection Act (VCDPA), making it the second state (behind California, with its California Consumer Privacy Act (CCPA)) to enact a general consumer privacy law.  The VCDPA will take effect on January 1, 2023, the same day the California Privacy Rights Act (CPRA), an act that strengthens the CCPA, goes into effect.

The VCDPA applies to “persons” that conduct business in Virginia (or produce products and services targeted to Virginia residents) and that “control or process” the personal data of (1) at least 100,000 Virginia residents, or (2) at least 25,000 Virginia residents, where the entity derives over half of its gross revenue from the sale of personal data.  Nonprofit organizations and institutions of higher education are exempt.

The VCDPA defines “personal data” broadly as any information that is “linked or reasonably linkable to an identified or identifiable natural person.”  “Personal data” does not include publicly available information or de-identified data (which by definition cannot reasonably be linked), although de-identified data remains subject to certain safeguards to limit the risk of re-identification.

The VCDPA stands out in that it is the first law of its kind in the U.S. to require controllers to obtain opt-in “consent” from consumers for the processing of sensitive data (including health information, race, ethnicity, and precise geolocation data) and to mandate formal data protection assessments (similar to those under the European Union’s General Data Protection Regulation (GDPR)).  The assessment obligation requires controllers to weigh the overall benefits of a processing activity against the potential risks to the consumer (as mitigated by applicable safeguards) before engaging in processing activities that involve sensitive data, targeted advertising, the sale of personal data, profiling, or any other activities that “present a heightened risk of harm to consumers.”  Further, the attorney general may compel production of these assessments pursuant to a civil investigative demand, without court approval, and may evaluate them for compliance with the VCDPA.  The assessments are confidential and exempt from Virginia’s Freedom of Information Act (FOIA), and disclosure to the attorney general does not waive any attorney-client privilege or work product protection with respect to an assessment or its contents.

The rest of the VCDPA provisions are more in-line with other consumer privacy laws.

The VCDPA provides consumers with data subject rights, including rights to confirm whether a controller is processing their personal data, to access that personal data, to correct inaccuracies, to delete personal data, to obtain a copy of the personal data in a portable and usable format, and to opt out of the processing of personal data for purposes of (i) targeted advertising, (ii) the sale of personal data, or (iii) profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

The VCDPA requires data controllers, inter alia, to limit the collection of personal data to what is adequate, relevant, and reasonably necessary; to refrain from processing personal data for purposes beyond those disclosed to the consumer; and to establish, implement, and maintain reasonable administrative, technical, and physical data security practices (with reasonableness tied to the volume and nature of the personal data at issue).

The law also requires controllers to provide consumers with a privacy notice that includes:

  • the categories of personal data processed by the controller;
  • the purposes for processing personal data;
  • how consumers may exercise their data subject rights;
  • the categories of personal data the controller shares with third parties (if any);
  • the categories of third parties, if any, with whom the controller shares personal data;
  • if the controller sells personal data or processes personal data for targeted advertising, a disclosure of such processing and of the manner in which a consumer may opt out of it; and
  • how consumers may submit requests to exercise their data subject rights (which must take into account “the ways in which consumers normally interact with the controller, the need for secure and reliable communication of such requests, and the ability of the controller to authenticate the identity of the consumer making the request”).

The VCDPA does not provide for a private consumer right of action (unlike the CCPA, which provides a private right of action for violations of the duty to implement and maintain reasonable security procedures).  Instead, the state’s attorney general has exclusive authority to enforce the law.  Before initiating any action, the attorney general must give the controller or processor 30 days’ written notice; if the controller or processor cures the noticed violation(s) and provides the attorney general with an express written statement that the violations have been cured and that no further violations will occur, no action for statutory damages will be initiated.  The VCDPA provides for statutory damages of up to $7,500 for “each violation,” as well as injunctive relief and the reasonable expenses incurred by the attorney general in investigating and preparing the case, including attorney fees.

All penalties collected for violations of the VCDPA will be paid into the newly-created “Consumer Privacy Fund,” which will be used “to support the work of the Office of the Attorney General to enforce the provisions of this chapter.”

In December 2020, Apple started requiring Apps to display mandatory labels that provide a graphic, easy-to-digest version of their privacy policies.  They are being called “privacy nutrition labels,” presumably a reference to the mandatory FDA-required “Nutrition Facts” labels that have appeared on food since 1990.  Below I offer ten thoughts related to these new labelling requirements.

  1. The idea for Apple’s labels may not have been Apple’s.  As reported by wired.com, in the early 2010s academic researchers developed mobile app privacy label prototypes and tested them in 20 in-lab exercises and an online study of 366 participants.  See here.  They found that by “bringing information to the user when they are making the decision and by presenting it in a clearer fashion, we can assist users in making more privacy-protecting decisions.”  For example, according to this Washington Post article, the “privacy nutrition label” for the Zoom video chat app says it takes six kinds of data linked to your identity, while rival Cisco Webex Meetings says it collects no data beyond what’s required for the app to run.  Just as the “Nutrition Facts” panel on cereal boxes may influence which cereal you buy, will “privacy nutrition labels” influence which Apps you download?
  2. While the aforementioned study suggests that the new Apple “privacy nutrition labels” could be a game-changer for consumer purchasing behavior with respect to Apps, the real effects will likely depend on a number of factors, including how easy the labels are to understand, how consistently they present information across different products/Apps, where they are presented and how far down one has to scroll, how they are presented visually and whether they stand out to customers, the extent to which Apps “market” privacy and try to compete on privacy policies, and, importantly, whether the presented information is accurate.
  3. On the topic of understanding the labels, it seems most consumers have a long way to go before they know what to look for on a “privacy nutrition label.”  But most users would at least understand that the longer the list of data an App collects, the more invasive it is with regard to their privacy.  Here is a great article on how to read and understand “privacy nutrition labels,” including some of the key details that consumers may want to scan for; for a rough sense of how the underlying disclosures might be structured, see the sketch after this list.
  4. On the topic of presentation of the labels, my personal view is that Apple has a long way to go if it wants these labels to stand out.  When I first looked for one, I couldn’t find it – I scrolled right past it.  Now that I know what I’m looking for, it’s easier to spot.  It will be interesting to see the extent to which users are educated on finding and using these labels, whether (and how) Apple updates the look and feel of the labels, and whether an agency ever steps in and mandates similar labelling, as the FDA does with its “Nutrition Facts” labels.
  5. On the topic of “marketing” privacy and competing based on privacy promises, I think we are already there in some industries.  For example, see my prior blog post: “WhatsApp – An Example of How Companies Compete Based on Privacy.”
  6. On the topic of accuracy, the Washington Post published this Consumer Tech Review: “I checked Apple’s new privacy ‘nutrition labels.’  Many were false.”  The reviewer pointed out numerous cases in which the privacy labels allegedly misrepresented the data collection practices of the App (and offered tips on how to spot this).  He concluded that “Apple’s big privacy product is built on a shaky foundation: the honor system,” noting that in tiny print on the detail page of each app label, Apple says, “This information has not been verified by Apple.”
  7. But is it Apple’s obligation to verify the information?  By requiring that the information be disclosed on its “privacy nutrition labels,” Apple is taking a first step.  Consumer protection laws and regulations exist to police misrepresentations.  Of course, the chief federal agency for privacy policy enforcement (the Federal Trade Commission) may soon be ruled by the Supreme Court to lack the ability to demand monetary relief (see our post on this here).  Thus, to the extent an App is profiting from misrepresentations on its “privacy nutrition label” – and a goal of enforcement would be to disgorge those ill-gotten gains – avenues of enforcement outside of the FTC may need to be explored.
  8. Does the fact that Apple’s privacy nutrition labels may be wrong mean they should be disregarded altogether?  While it may be tempting to think so, I did a quick search on whether food nutrition labels are ever wrong.  This article reported that the law allows a margin of error of up to 20 percent between the stated value and the actual value of nutrients.  Wow.  Of course, this is different from a cereal claiming it has no fat when it is in fact loaded with fat (which may be more analogous to what some companies are doing by misrepresenting their privacy practices on their Apple “privacy nutrition labels”), but it is still informative.  Perhaps more informative is the fact that, per the same article, a 2010 study in the Journal of Consumer Affairs purportedly reported that those who read nutrition labels (but did not exercise) were more likely to lose weight than those who did not read labels but did exercise – “[i]n other words, the awareness of a food’s (approximate) nutritional content and portion size does appear to influence eating behaviors in a beneficial way.”  I would imagine that the same may be true of those who read “privacy nutrition labels”: while the labels may not always be accurate, paying attention to them may influence your App downloading behaviors in a beneficial way (if you care about privacy).
  9. Now what about the name “privacy nutrition label”?  Is the word “nutrition” just a nod to the FDA’s “Nutrition Facts,” or is there a hidden implication that certain privacy policies are more or less “healthy”?  Here are some definitions of “nutrition” to help you ponder this:
    • A .gov site: “Nutrition is about eating a healthy and balanced diet.  Food and drink provide the energy and nutrients you need to be healthy.  Understanding these nutrition terms may make it easier for you to make better food choices.”
    • Wikipedia: “Nutrition is the science that interprets the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism.  It includes ingestion, absorption, assimilation, biosynthesis, catabolism, and excretion.”
    • Merriam-webster.com: Nutrition is “the act or process of nourishing or being nourished, specifically: the sum of the processes by which an animal or plant takes in and utilizes food substances.”
    • World Health Organization: “Nutrition is a critical part of health and development.  Better nutrition is related to improved infant, child and maternal health, stronger immune systems, safer pregnancy and childbirth, lower risk of non-communicable diseases (such as diabetes and cardiovascular disease), and longevity.”
  10. Finally, what’s the future of these labels?  Will competitors start requiring similar “privacy nutrition labels”?  Will websites start offering them as a graphic version of their longer privacy policies?  Will they start to appear on small displays at brick-and-mortar establishments?  While there are risks associated with additional statements of one’s privacy practices (one more document to maintain and keep accurate), there may be benefits as well, as consumers start to pay more attention to companies’ privacy policies and companies increasingly compete on privacy promises.
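As referenced in thought #3 above, here is a hypothetical TypeScript sketch of how a label’s disclosures might be represented as structured data.  The groupings loosely track the “Data Used to Track You,” “Data Linked to You,” and “Data Not Linked to You” sections shown on App Store product pages, but the type, category names, example values, and scoring heuristic are invented and are not Apple’s actual schema:

```typescript
// Hypothetical model of an App Store privacy label's disclosures. Loosely based
// on the public groupings ("Data Used to Track You", "Data Linked to You",
// "Data Not Linked to You"); this is not Apple's actual schema.

type DataCategory =
  | "Contact Info"
  | "Location"
  | "Identifiers"
  | "Usage Data"
  | "Diagnostics";

interface PrivacyLabel {
  appName: string;
  dataUsedToTrackYou: DataCategory[];
  dataLinkedToYou: DataCategory[];
  dataNotLinkedToYou: DataCategory[];
}

// Invented example values for a fictional app.
const exampleLabel: PrivacyLabel = {
  appName: "Example Chat",
  dataUsedToTrackYou: ["Identifiers"],
  dataLinkedToYou: ["Contact Info", "Usage Data", "Identifiers"],
  dataNotLinkedToYou: ["Diagnostics"],
};

// The crude heuristic from thought #3: the more categories linked to you or used
// to track you, the more closely the label deserves a second look.
function invasivenessScore(label: PrivacyLabel): number {
  return label.dataUsedToTrackYou.length * 2 + label.dataLinkedToYou.length;
}

console.log(`${exampleLabel.appName}: score ${invasivenessScore(exampleLabel)}`);
```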