The European Commission recently presented strategies for data and Artificial Intelligence (AI) focusing on promoting excellence in AI and building trust.  The Commission’s White Paper, “On Artificial Intelligence – A European approach to excellence and trust,” addresses the balance between promoting AI and regulating its risks.  “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.”  In addition to the benefits AI affords to individuals, the White Paper notes the significant roles AI will play at a societal level, including in achieving the Sustainable Development Goals and supporting democratic processes and social rights.  “Over the past three years, EU funding for research and innovation for AI has risen to €1.5 billion, i.e. a 70% increase compared to the previous period” (n.b. as compared to €12.1 billion in North America and €6.5 billion in Asia in 2016).  The White Paper also addresses recent advances in quantum computing and how Europe can be at the forefront of this technology.

“An Ecosystem of Excellence.”  The White Paper outlines several areas for building an ecosystem of excellence, including (a) working with member states, including revising the 2018 Coordinated Plan to foster the development and use of AI in Europe, to be adopted by the end of 2020; (b) focusing the efforts of the research and innovation community through the creation of excellence and testing centers; (c) building skills, including establishing and supporting, through the advanced skills pillar of the Digital Europe Programme, networks of leading universities and higher education institutes to attract the best professors and scientists and offer world-leading master’s programs in AI; (d) focusing on SMEs, including ensuring that at least one digital innovation hub per member state has a high degree of specialization in AI, and a €100 million pilot scheme to provide equity financing for innovative developments in AI; (e) partnering with the private sector; (f) promoting the adoption of AI by the public sector; (g) securing access to data and computing infrastructures; and (h) cooperating with international players.

“An Ecosystem of Trust: Regulatory Framework for AI.”  “The main risks related to the use of AI concern the application of rules designed to protect fundamental rights (including personal data and privacy protection and non-discrimination), as well as safety and liability-related issues.”  The White Paper notes that developers and deployers of AI are already subject to European legislation on fundamental rights (data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules, but that certain features of AI may make the application and enforcement of such legislation more difficult.  With respect to a new regulatory framework, the White Paper proposes a “risk-based approach” and sets forth two cumulative criteria for determining whether an AI application is “high-risk”: (a) the AI application is employed in a sector where significant risks can be expected to occur (e.g., healthcare, transport, energy, and parts of the public sector); and (b) the AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise (e.g., uses that produce legal or similarly significant effects for the rights of an individual or a company, that pose risk of injury, death, or significant damage, or that produce effects that cannot reasonably be avoided by individuals or legal entities).
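Because the two criteria are cumulative, an application is "high-risk" only when both hold.  The logic can be sketched as a simple conjunction; the sector list and function signature below are illustrative simplifications (the White Paper's sector list is non-exhaustive and the use-based criterion calls for a contextual assessment, not a boolean flag):

```python
# Illustrative sketch of the White Paper's two cumulative "high-risk" criteria.
# Sector names follow the examples cited in the paper; the boolean parameter
# is a hypothetical stand-in for a contextual risk assessment of the use.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

def is_high_risk(sector: str, significant_risk_in_use: bool) -> bool:
    """Both criteria must hold (they are cumulative):
    (a) the application is employed in a sector where significant
        risks can be expected to occur, AND
    (b) the application is used in such a manner that significant
        risks are likely to arise."""
    return sector in HIGH_RISK_SECTORS and significant_risk_in_use

# A healthcare application used in a risky manner meets both criteria:
print(is_high_risk("healthcare", True))   # True
# The same sector without significant risk in the manner of use does not:
print(is_high_risk("healthcare", False))  # False
```

The conjunction captures the White Paper's point that sector alone is not determinative: an appointment-scheduling tool in a hospital would satisfy criterion (a) but not (b).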

Does Europe’s risk-based approach adequately balance regulation against AI innovation?  It is a difficult question that jurisdictions around the world have been grappling with.  The concept of a risk-based approach, however, should sound familiar to the private sector, particularly to large enterprises that have already embedded AI and other emerging technologies into their internal risk management frameworks.  For smaller-scale experiments, perhaps there is room for more regulatory flexibility to encourage innovation in AI.  For example, Arizona was early to welcome autonomous vehicle testing on its roads in 2015, and is experimenting with a “regulatory sandbox” for FinTech.