The Internet of Things (IoT) is often defined as a network of interconnected devices, such as sensors, smartphones, and wearables, or, more simply, as the exchange of data among everyday objects with computing capabilities. It’s where physical infrastructure meets the digital universe, and where machines can “talk” to one another. The IoT creates a connected world, and it’s growing exponentially. One forecast from The Economist predicts that there will be “a trillion connected computers by 2035, built into everything from food packaging to bridges and clothes.”

But what exactly will our smart cities look like, and how will IoT affect how we live and work? Here’s a glance at some recent technologies that may shape how we relate to technology – and to one another – in the not-so-distant future:

  • Modern Medicine – like the new Stanford Hospital, which opened its doors in November 2019, touting futuristic features including a fleet of robots programmed to deliver linens, take out the trash, and prepare individual-dose prescriptions for medication-dispensing systems throughout the hospital; the ability for patients to adjust the room temperature or order a meal via a bedside tablet; and its MyHealth app, which provides updated electronic health records and allows patients and their families to view test results, pay medical bills, or send a message to a physician.
  • Tender-less Bars – like Sestra Systems’ TapWise platform, which allows for automated self-service beer stations with controlled access (e.g., via an app, RFID, or PIN code); precision pouring to avoid waste and deliver just the right amount of foam; and real-time data on inventory, event schedules, weather, and point-of-sale records.
  • Smart Farming – like Microsoft Azure’s FarmBeats, a platform that uses a system of sensors, drones, TV white spaces, and cloud solutions to provide robust data analytics in the agricultural sector, with features ranging from soil moisture maps to livestock tracking, ultimately increasing crop yield and reducing waste.
  • Next-Generation Fashion – like Levi’s new jean jacket incorporating Google’s Jacquard™ technology, which allows wearers to connect the jacket to their smartphone, perform actions (e.g., play music, take a photo, or send a message) through the interactive jacket cuff, and receive LED-light notifications without looking at their screen.
  • Automated Waste Management – like Songdo, South Korea’s truck-free waste management system, composed of pneumatic pipes and automated waste bins, which automatically sorts, recycles, buries, or burns waste through a network of underground pipes leading to the city’s processing center.

While the world may be setting the stage to embrace 5G and the more efficient energy usage that comes with it, concerns about cybersecurity and the hacking of connected devices loom on the horizon. With greater connectivity comes greater responsibility for privacy and data security, and tech companies should take heed to protect consumers.

The New Jersey attorney general recently made headlines when, on January 24, 2020, he directed prosecutors to immediately stop using a facial recognition app produced by Clearview AI (https://clearview.ai/).  Clearview AI markets its app as a tool for stopping criminals.  The Clearview AI website states: “Clearview helps to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.”  While the app purports to be available only to law enforcement agencies and select security professionals as an investigative tool, and its results purportedly contain only public information, the NJ attorney general cited concerns.  Law360 reported that the AG’s office stated: “[W]e need to have a sound understanding of the practices of any company whose technology we use, as well as any privacy issues associated with their technology.”

Clearview AI’s database appears to contain broader data than those of many of its competitors.  While most facial recognition programs allow law enforcement to compare images of suspects to databases composed of mug shots, driver’s license photographs, and other government-issued or government-owned photos (and are usually confined to the state in which they operate), Clearview’s data appears to be national in scope and to include information from social media sites such as Facebook, Twitter, Venmo, and YouTube, as well as elsewhere on the Internet.  All told, the database contains more than three billion photos.  And it is used by more than 600 law enforcement agencies, ranging from local police departments to the F.B.I. and the Department of Homeland Security.

Clearview’s broader set of data raises a number of questions.  On the privacy front are questions as to whether the data was obtained legally.  For example, we recently discussed the Facebook settlement of the BIPA class action.  If Facebook biometric information was illegally obtained/used (i.e., without proper written consent under BIPA), then are all uses of that same illegally obtained information also violations of BIPA?  Or is the recovery for the BIPA violation exhausted by virtue of the recent $550 million Facebook settlement?  It appears, based on the plain language of the statute, that there would be no exhaustion here, i.e., that each unlawful collection of biometric identifiers/information would need written consent.  Of course, it also appears that Clearview may be exempt under BIPA.  See BIPA, Sec. 25(e) (“Nothing in this Act shall be construed to apply to a contractor, subcontractor, or agent of a State agency or local unit of government when working for that State or local unit of government.”).

Another possible Clearview privacy issue concerns data scraping.  While it’s unknown exactly how Clearview obtained its social media data, it appears that at least some of it was obtained via web scraping.  For example, Twitter sent Clearview a cease-and-desist letter on January 21, 2020, demanding that Clearview stop collecting images from Twitter and delete any data it previously collected.  In the press, Twitter has pointed to its terms of service as support for the cease-and-desist letter; those terms state that “scraping the services without the prior consent of Twitter is expressly prohibited.”

Of course, it is unclear whether Twitter’s terms of service will hold water in providing a legal basis for stopping Clearview’s web scraping.  For example, in a decision issued last fall, the Ninth Circuit in hiQ v. LinkedIn ruled in favor of a data scraper (albeit on different grounds).

On the flip side of the possible privacy violations – and the desire of many (e.g., the NJ attorney general, BIPA plaintiffs, etc.) to know how Clearview is getting the data it has amassed and whether users have consented – are Clearview AI’s intellectual property rights.  Is its data collection proprietary?  Do its software and data collection processes contain trade secrets?  And if so, who has a right to know what Clearview is doing, and how far does that right extend?

Last month, the UK Information Commissioner’s Office released a comprehensive Age Appropriate Design Code that seeks to protect children within the digital world.

The code sets out 15 flexible standards of age-appropriate design for online services. These standards focus on providing “high privacy” default settings, transparency to children with regard to active location tracking and parental controls, and minimized data collection and use, while prohibiting nudge techniques that could encourage children to reveal more personal details, as well as profiling that automatically recommends sexual or violent content to children based on their searches.

While there is some overlap between the UK’s code and our own Children’s Online Privacy Protection Act of 1998 (COPPA), the UK’s proposed regulation is far more extensive in scope. For instance, while COPPA only covers children under the age of 13, the UK code expands its protection to teenagers, covering all children under the age of 18. With regard to application, COPPA is directed toward websites or online services that knowingly collect and use data from children, whereas the Age Appropriate Design Code applies to “information society services likely to be accessed by children” in the UK. Such services include not just search engines, apps, online games, and social media platforms, but also connected toys and devices – a sweeping application likely to affect a number of big tech companies, some of whom think the code conflates the issue of inappropriate content with data protection.

The proposed regulation is issued under Britain’s Data Protection Act 2018 and is set for Parliament’s approval. Once the code takes effect, online services within its scope would have 12 months to become compliant. If the code is implemented, violators could face fines of up to 4% of their global revenue.

Following up on our post of January 22, 2020 (“Big News in Biometrics – Supreme Court Declines to Weigh in on What Plaintiffs Must Show to Bring Biometric Privacy Suit”), Facebook has now agreed to pay $550 million to settle the BIPA class action lawsuit.  This is the largest BIPA settlement ever, and it will likely encourage the filing of additional class action lawsuits.  If you are a business that engages with biometric information in Illinois, beware… and revisit your written consent policies.

The National Institute of Standards and Technology (NIST) released version 1.0 of its Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, which follows the structure of the Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework).  The Privacy Framework acknowledges that failure to manage privacy risks can have direct adverse consequences at both the individual and societal levels, with follow-on effects on organizations’ brands, bottom lines, and future prospects for growth.  As the Framework puts it: “Finding ways to continue to derive benefits from data processing while simultaneously protecting individuals’ privacy is challenging, and not well-suited to one-size-fits-all solutions.”

The Framework includes three parts: Core, Profiles, and Implementation Tiers.

The “Core” is designed to enable a dialogue among the various stakeholders – from the executive level to the implementation/operations level – and sets forth activities and outcomes (see the sketch following the list below), including:

  • Functions: organize foundational privacy activities, including Identify-P, Govern-P, Control-P, Communicate-P, and Protect-P
  • Categories: subdivision of a Function into groups of privacy outcomes
  • Subcategories: subdivision of a Category into specific outcomes of technical and/or management activities
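
To make that hierarchy concrete, here is a minimal sketch, in TypeScript, of how an organization might represent the Core. The type names and the example entry are our own illustration, not an official NIST schema; the exact Category and Subcategory wording comes from the published Framework.

```typescript
// A minimal, illustrative sketch (not an official NIST schema) of the Core hierarchy:
// Functions contain Categories, which contain Subcategories describing specific outcomes.

interface Subcategory {
  id: string;      // e.g., an identifier from the published Core
  outcome: string; // specific outcome of technical and/or management activities
}

interface Category {
  id: string;
  description: string; // a group of related privacy outcomes
  subcategories: Subcategory[];
}

interface CoreFunction {
  name: "Identify-P" | "Govern-P" | "Control-P" | "Communicate-P" | "Protect-P";
  categories: Category[];
}

// Example entry; see the published Framework for the authoritative text.
const identifyP: CoreFunction = {
  name: "Identify-P",
  categories: [
    {
      id: "ID.IM-P", // Inventory and Mapping
      description: "Data processing by systems, products, or services is understood.",
      subcategories: [
        {
          id: "ID.IM-P1",
          outcome: "Systems/products/services that process data are inventoried.",
        },
      ],
    },
  ],
};

console.log(identifyP.categories[0].subcategories.length); // 1
```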

Profiles can be used to describe the current state and the desired target state of specific privacy activities.  They are designed to enable the prioritization of the outcomes and activities that best meet organizational privacy values, mission or business needs, and risk.

Implementation Tiers support organizational decision-making and communication about how to manage privacy risk by taking into account the nature of the privacy risks engendered by an organization and the sufficiency of the organization’s processes and resources to manage such risks.  The Framework specifies four Tiers that recognize a progression in managing privacy risks: (1) Partial, (2) Risk Informed, (3) Repeatable, and (4) Adaptive.
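
Profiles and Tiers lend themselves to a similar sketch. The following is illustrative only, using field names of our own choosing rather than anything NIST prescribes: it records a Current Profile and a Target Profile against Core Subcategories, runs a simple gap analysis, and notes an Implementation Tier.

```typescript
// Illustrative only: one way an organization might record Profiles and its Tier.
enum ImplementationTier {
  Partial = 1,
  RiskInformed = 2,
  Repeatable = 3,
  Adaptive = 4,
}

interface ProfileEntry {
  subcategoryId: string;                 // e.g., "ID.IM-P1"
  priority: "low" | "moderate" | "high"; // importance to the organization's mission and risk posture
  achieved: boolean;                     // Current: achieved today? Target: should it be achieved?
}

interface Profile {
  name: "Current" | "Target";
  entries: ProfileEntry[];
}

const current: Profile = {
  name: "Current",
  entries: [{ subcategoryId: "ID.IM-P1", priority: "high", achieved: false }],
};

const target: Profile = {
  name: "Target",
  entries: [{ subcategoryId: "ID.IM-P1", priority: "high", achieved: true }],
};

// Simple gap analysis: outcomes the Target Profile calls for that the Current Profile lacks.
const gaps = target.entries.filter(
  (t) =>
    t.achieved &&
    !current.entries.some((c) => c.subcategoryId === t.subcategoryId && c.achieved)
);

const tier = ImplementationTier.RiskInformed;
console.log(gaps.map((g) => g.subcategoryId), ImplementationTier[tier]);
```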

The NIST Framework offers flexible and useful practices that can be adopted as appropriate by entities engaging in personal data processing activities.  In its accompanying Roadmap for advancing the Privacy Framework, NIST seeks continued collaboration with its stakeholders from government, academia, and industry on privacy risk management, including in the following priority areas for development, alignment, and collaboration: (1) Privacy Risk Assessment, (2)  Mechanisms to Provide Confidence, (3) Emerging Technologies (IoT and AI), (4) De-Identification Techniques and Re-identification Risks, (5) Inventory and Mapping, (6) Technical Standards, (7) Privacy Workforce, and (8) International and Regulatory Aspects, Impacts and Alignment.

Take note that while privacy standards are still in their infancy, they can be useful tools for showing that an entity is committed to privacy and has engaged in industry best practices.  Additional privacy management standards include ISO/IEC 27701 (Security techniques for privacy information management), ISO/PC 317 (Consumer protection: privacy by design for consumer goods and services), and IEEE P7002 (Data Privacy Practices); the International Association of Privacy Professionals also has a Privacy Engineering Section. Which one(s) will you follow?

Since the GDPR went into effect in May 2018, there’s been a noticeable surge in the display of cookie consent notices (or cookie banners), with large variations in text, the choices presented to users, and even the position of the notice on the webpage.

But how are users actually interacting with these cookie banners, if at all? And more importantly, which cookie notice format ensures that users can make free and informed choices under the law?

A study conducted last fall by the University of Michigan and the Ruhr-University Bochum in Germany (available here) examined how EU users interact with and respond to cookie consent notices. In summary, the study found the following:

  • Position: Users are most likely to interact with a cookie consent notice when it is in the form of a cookie banner displayed on the bottom of the screen (for mobile devices) or bottom left of the screen (for desktop computers).
  • Choice Mechanism: Users are more willing to accept cookie tracking given a binary choice (accept or decline), as opposed to selecting or deselecting the types of cookies in checkboxes (e.g., necessary, personalization & design, analytics, social media, marketing) or the types of vendors who may use the tracked cookies (e.g., FB, YouTube, Google Analytics, Google Fonts, Ionic, Google Ads).
  • Nudging: The use of pre-checked boxes or highlighted “accept” buttons has a large effect on the choices users make and increases acceptance rates.
  • Language: The use of technical language such as “cookies” in the notice (as opposed to “your data” or similar wording) makes users more likely to decline cookie tracking.

To obtain valid consent for the processing of personal data, Recital 32 of the GDPR requires “a clear affirmative act” establishing a “freely given, [purpose-]specific, informed and unambiguous indication of […] agreement to the processing of personal data.” The recital further specifies that “pre-ticked boxes or inactivity should not…constitute consent,” that the request must be “clear, concise and not unnecessarily disruptive to the use of the service for which it is provided,” and that “[w]hen the processing has multiple purposes, consent should be given for all of them.”
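
As a thought experiment, here is a minimal sketch of what purpose-specific, opt-in consent capture might look like in code. The purpose names track the checkbox categories described in the study, and the function names are hypothetical; nothing here should be read as a compliance recipe.

```typescript
// Illustrative sketch of per-purpose, opt-in consent capture. The purpose names
// mirror the checkbox categories described in the study; the API is hypothetical.

type Purpose = "necessary" | "personalization" | "analytics" | "social_media" | "marketing";

interface ConsentRecord {
  timestamp: string;
  choices: Record<Purpose, boolean>;
}

// Default state: nothing is pre-ticked. Strictly necessary processing is listed
// for transparency but does not rely on consent as its legal basis.
function defaultChoices(): Record<Purpose, boolean> {
  return {
    necessary: true,
    personalization: false,
    analytics: false,
    social_media: false,
    marketing: false,
  };
}

// Consent is recorded only on a clear affirmative act (e.g., clicking "Save choices");
// closing the banner or doing nothing records no consent at all.
function recordConsent(selected: Purpose[]): ConsentRecord {
  const choices = defaultChoices();
  for (const purpose of selected) {
    choices[purpose] = true;
  }
  return { timestamp: new Date().toISOString(), choices };
}

// Example: the user ticked only "analytics" and clicked save.
const record = recordConsent(["analytics"]);
console.log(record.choices.marketing); // false: no purpose is accepted by default
```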

According to the study, and pending further regulation or guidance on how to obtain clear, freely-given, and purpose-specific consent, “opt-in” cookie banners presenting users with a number of informed and unambiguous choices (which are not pre-selected) are predicted to be the trend for both multinational and US companies.

Thus, if you give a user a cookie notice, make sure it presents a clear and meaningful choice. If the cookie notice presents a clear and meaningful choice, is the user more likely to accept cookies? Possibly. But that’s another story.

One notable difference between the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) is that only the latter provides individuals the right not to be subjected to automated decision-making, including profiling, which has legal or other significant effects on them.

But the CCPA still creates issues for covered entities operating in the artificial intelligence (AI) and machine learning (ML) space.  For example, how does one comply with an individual’s request to delete their data – the so-called right to be forgotten – with respect to a “black box” ML model that used that individual’s personal information as training data?  When is a consumer’s data sufficiently “aggregated” or “deidentified” such that its use in an ML model escapes the CCPA’s scope?
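
To see why that first question is hard, consider a minimal sketch (with a hypothetical data model and function names): deleting a consumer’s rows from retained training data is mechanical, but it does nothing to the parameters of a model already trained on that data, which is where the “black box” problem lives.

```typescript
// Hypothetical sketch of honoring a deletion request against retained training data.
// Whether this step (or retraining, or further deidentification) satisfies the CCPA
// for an already-trained "black box" model is exactly the open question.

interface TrainingExample {
  consumerId: string; // direct identifier retained alongside the features
  features: number[];
  label: number;
}

// Removing the requester's rows from stored training data is mechanical...
function deleteConsumerData(dataset: TrainingExample[], requesterId: string): TrainingExample[] {
  return dataset.filter((row) => row.consumerId !== requesterId);
}

// ...but a model trained on the full dataset still reflects those rows: its learned
// parameters blend every example it saw, so deleting the source records does not,
// by itself, remove their influence from the model.
const dataset: TrainingExample[] = [
  { consumerId: "c-001", features: [0.2, 0.7], label: 1 },
  { consumerId: "c-002", features: [0.9, 0.1], label: 0 },
];

const remaining = deleteConsumerData(dataset, "c-001");
console.log(remaining.length); // 1; any previously trained model is unchanged by this deletion
```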

If one thing is certain, it is far better to take a proactive approach and address these questions early in the design and development of new products and services.  Be sure to invite the appropriate stakeholders to that conversation, including your attorney!

The California Consumer Privacy Act (CCPA) went into effect on January 1, 2020.  The CCPA is front-page news, and rightfully so.  But while the major focus has been, and continues to be, on the CCPA, another piece of legislation that deserves attention also went into effect on January 1, 2020.

Bill SB-327 – “Information Privacy: Connected Devices” – is California’s new Internet of Things (IoT) security law, and it requires “manufacturers” of “connected devices” to equip those devices with “reasonable security features.”  While the language of SB-327 gives some guidance on what may be deemed “reasonable,” there is plenty of gray area and likely much more in the way of interpretation up ahead.

The applicability of CA’s IoT law is massive.  The definition of “connected device” is “any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”  Use your imagination… just think about all of those connected devices that you have on you right now, in your office, waiting for you at home.  And the law applies to all connected devices sold or offered for sale in California, regardless of where manufactured.

The law is clear that there is no private right of action and that the CA Attorney General, a city attorney, a county counsel, or a district attorney has exclusive authority to enforce the law.  Good.  What is absent from the legislation, and to date unknown, is the penalty for violating the law.  Bad.

Stay tuned.

On January 21, 2020, the Supreme Court denied Facebook’s petition for certiorari, which raised the following issues: (i) whether a court can find Article III standing based on its conclusion that a statute protects a concrete interest, without determining that the plaintiff suffered a personal, real-world injury from the alleged statutory violation; (ii) whether a court can find Article III standing based on a risk that a plaintiff’s personal information could be misused in the future, without concluding that the possibility of misuse is imminent; and (iii) whether a court can certify a class without deciding a question of law that is relevant to determining whether common issues predominate under Federal Rule of Civil Procedure 23.

The underlying case, Patel v. Facebook Inc., hinges on whether Facebook violated the Illinois Biometric Information Privacy Act (BIPA) by implementing a photo-tagging feature that recognized users’ faces and suggested their names without first obtaining adequate consent.

BIPA, which was passed into law in 2008, requires companies to obtain written consent from individuals before collecting biometric identifiers or biometric information.  That means (1) informing the subject that the biometric information is being collected or stored; (2) informing the subject of the purpose and duration for which the biometric information is being collected, stored, and used; and (3) receiving a written release.  “Biometric identifier” is defined by the statute to include a “retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” and expressly excludes a multitude of things, including but not limited to “writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color and eye color”; donated organs, tissues, blood, or serum; biological materials regulated under the Genetic Information Privacy Act; and information captured from a patient in a health care setting.  “Biometric information” is defined as “any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual.”

BIPA provides users with a private right of action, and statutory damages of $1,000 per negligent violation (or actual damages, whichever is greater), or $5,000 per violation (or actual damages, whichever is greater) if the violation is intentional or reckless.  BIPA also provides for the recovery of attorneys’ fees and costs and other relief, including an injunction.  (Note: It is BIPA’s private right of action, and its hefty statutory damages, that distinguish it from other state laws regulating biometric data usage.)

Facebook argued in its cert petition that the plaintiffs in Patel v. Facebook lacked standing to bring the suit because, although the plaintiffs claimed that their privacy interests under the statute had been violated, they never alleged (or showed) “that they would have done anything differently, or that their circumstances would have changed in any way, if they had received the kind of notice and consent they alleged that [the Illinois law] requires, rather than the disclosures that Facebook actually provided to them.”

Of note, Facebook no longer uses the challenged practice.  While Facebook previously disclosed that it was using facial recognition technology and gave users the option to turn it off, as of September 2019, Facebook replaced that practice with an interface that expressly asks users whether they want to turn on the feature.  This change was one of many that resulted from Facebook’s settlement of an FTC investigation (which also resulted in Facebook’s payment of a $5 billion fine).