Policy Briefing on the White Paper on AI released by the European Commission on February 19, 2020
Overview
The European Commission (hereinafter, the Commission) released its White Paper on AI, “On Artificial Intelligence - A European Approach to Excellence & Trust” (Ec.europa.eu, 2020), on February 19, 2020. The paper centres on an excellence-driven approach that seeks to foster AI and democratize its benefits while recognizing and addressing the risks its use poses to European infrastructure. It rests on certain basic conundrums that together build a vision of a technocentric AI infrastructure under the von der Leyen Commission (von der Leyen, 2019):
1. Trust as a Constructive Policy Norm;
2. Reinventing the Free Market Economy Approach;
3. Calibrating the Limitations of the Liability Framework Approach;
4. A Risk-Assessment-First Approach;
5. Elevating the Role of Stakeholders’ Maximum Participation in AI Governance and Regularization;
6. Recognizing the Changing Dynamic of AI as a Legal and Technological System and its Relationship with Humans.
In this policy brief, published with the Institute for a Greater Europe, we provide an issue-by-issue analysis of these six conundrums, which we believe are integral to the White Paper as proposed by the European Commission.
Trust as a Constructive Policy Norm
In the White Paper, the Commission seeks to expand the pragmatic role of trust in the deployment of AI. Artificial intelligence is dynamic, and its use will create risks at both the industrial and the individual level, which must be adequately and regularly assessed. The Commission therefore believes that a trust ecosystem (termed an ‘ecosystem of trust’) will help in devising practical solutions to the risk-based modalities of AI. We believe that expanding, rather than elasticizing, the concept and ethics of the ‘ecosystem of trust’[1] can prevent constitutional, existential and systemic disruptions within the EU’s legal, economic and public-order framework, and that it can meaningfully shape the political and legal dynamics of the technocratic governance involved. The European governance model on data protection is, in our view, considerably stronger than those of India, the US, the People’s Republic of China and the UK. Historically, however, trust in matters of governance has served as a currency for the sovereign imperatives of regularized, reasonably transparent and practical governance infrastructures, to which nation-states and key non-state actors in the various EU economies would have to conform. Doing so would help the Commission, and subsequently the European Parliament, to liberalize and expand the ethics of trust and to avoid a formalistic and rudimentary approach, upholding the rule of law through the Grundnorm of trust in EU data protection law.
Reinventing the Free Market Economy Approach
The Commission understands the disparities involved in using and democratizing AI systems in the management of products and services, which are themselves important to gauge. The White Paper, however, focuses more on product liability issues and addresses concerns over services in a more inherent and proceduralist manner, which is welcome. The Commission considers that a public-welfare approach benefiting EU citizens, business operations within the EU (as per jurisdictional modalities) and public services should be given due consideration. The Commission also recognizes that the ICT[2] sector will grow fast and that, in line with the von der Leyen Political Guidelines proposed by the Commission President in September 2019 and with the European Green Deal, the environmental challenges and implications of the ICT sector in Europe must go through a green transformation with an effective and unhampered transition. The agenda of reconciling the European Green Deal with efforts to combat climate change and the other environmental implications of AI systems is appreciated. It is worth noting that the Commission has taken due notice of this issue in its initial agenda documents, which would certainly benefit everyone at large.
Calibrating the Limitations of the Liability Framework Approach
The Commission believes that the impact of artificial intelligence extends beyond individual effects: AI’s impacts are not limited to any single entity and may be collaterally attributable in terms of their outcomes and the opportunities those outcomes create. One of the examples the Commission provides in the White Paper is that developers’ ability to control risks, as evaluated during the initial assessment of an AI system, will diminish over time. The Commission also believes that, for product liability issues, the generic liability framework laid down by the EU is important in deciding the precedential aspects of AI systems and their risk-assessment requirements. Furthermore, the Commission asserts in the White Paper that any harm caused to a person through the involvement of artificial intelligence should be treated on a par with harm caused by other technologies under the EU data protection regime, in order to regularize protection mechanisms for aggrieved parties. We believe it is insightful of the Commission to recognize that the EU’s liability framework is set to be affected by the use of AI, although we accept that it is not imminent that AI will adversely or otherwise explicitly influence the EU legislative system and its capacities and capabilities. We also believe that, in order to determine civil liability for AI, any reasonable legal parameters of risk and impact assessment can work. These parameters (which can be applied case by case) can be used to convert the legal norms of civil liability into an approach in which any technological entity, whether a service or a product, is connected with the social needs of natural persons under EU law (General Data Protection Regulation (GDPR), 2020) and transnational law. However, this synergy between AI and humans can succeed only from an aesthetic and individualistic perspective, which falls under a libertarian approach to the issue. We endorse an aesthetic approach because, under the conception of liberalism in data ethics, the role of data and its internal and external (human or non-human) actors is intimate and delicate enough to affect the lives of ordinary people. For example, activists in several countries, including Hong Kong, India, the US and even the UK, are protesting to ban facial recognition; their key argument is that these facial recognition systems are aesthetically biased owing to their algorithmic failures and may not understand the diversity of user activities, presence and existence through biometrics.
A Risk-Assessment-First Approach
The Commission has affirmed in the White Paper that a risk-centric approach should be adopted when assessing the use of AI. For example, the Commission refers to the role of the IoT and robotics, affirming that both, when brought under the ambit of artificial intelligence, can practically affect the effectiveness of the liability frameworks involved. The Commission analyses risks arising from AI as those that (1) have arisen and (2) may be expected to occur, laying down the cautionary understanding that any prohibited discrimination must be prevented. Notably, the White Paper also affirms the concept of ‘clear information’, which is akin to data quality, through which algorithmic bias[3] and accountability can be effectively understood and estimated. We believe the Commission is moving in the right direction by centring risk as the primary imperative in assessing and regularizing artificial intelligence.
Elevating the Role of Stakeholders’ Maximum Participation in AI Governance and Regularization
The Commission has identified some 70 actions in the White Paper through which it seeks to ensure maximum participation from the relevant stakeholders of the AI ecosystem. In fact, according to the White Paper, research funding for artificial intelligence has increased by 70%. The Commission has further affirmed that, in matters of technological sovereignty aimed at building and democratizing the AI ecosystem in the EU, common ground and consistent cooperation in AI policymaking among EU member states is to some extent inevitable. The Commission encourages the adoption of AI to achieve excellence, which, according to the White Paper, is to be pursued by all major economic stakeholders, especially Small & Medium-Sized Enterprises (SMEs). We believe that the Commission has, at present, endorsed relevant infrastructure to elevate the role of various stakeholders. However, we recommend that the Commission place special focus on the entrepreneurial design[4] of artificial intelligence in matters of technology-centric entrepreneurship and employability, towards efficient and diverse human skill-development measures.
Recognizing the Changing Dynamic of AI as a Legal and Technological System and its Relationship with Humans
The Commission has signalled in the White Paper certain notable conundrums regarding the ontological and topological characteristics of artificial intelligence from a European perspective. In the White Paper, the Commission acknowledges that understanding the validity of AI requires certain prospective intervention. The White Paper further notes that, in risk assessments, the life cycle of an AI system will create many imperatives and implications in the future. To ensure the robustness and accuracy of AI systems, the Commission stresses that observation across the AI life cycle is important. Beyond the usual enumerations on AI regulation found in previous AI strategies and other relevant documents of the EU and the Council of Europe, the White Paper also offers a striking characterization of AI: AI exhibits autonomous behaviour, which affects the market economics of products and may force entities to carry out newer, consecutive risk assessments. As certain institutive changes are endorsed, we recommend that the EU understand the organic and socializing nature of technology in order to endorse a calibrated and innovative way of legalizing, legitimizing, regulating and regularizing artificial intelligence. Pursuing this would better assist the cause.
Our recommendations and criticisms are submitted in this analysis to the Institute for a Greater Europe in due partnership. This policy briefing forms part of the Indian Strategy for AI and Law, 2020.
Website: isail.in/strategy.
References
Ec.europa.eu. (2020). White Paper On Artificial Intelligence - A European Approach to Excellence & Trust. [online] Available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf [Accessed 22 Feb. 2020].
von der Leyen, U. (2019). Political Guidelines for the Next European Commission 2019-24. [online] Ec.europa.eu. Available at: https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf [Accessed 22 Feb. 2020].
General Data Protection Regulation (GDPR). (2020). General Data Protection Regulation (GDPR) – Official Legal Text. [online] Available at: https://gdpr-info.eu/ [Accessed 7 Mar. 2020].
[1] The European Commission encourages the idea of an ecosystem of commercial and financial activities in which trust must be built among the relevant stakeholders. This concept is an integral part of European liberalism and neoliberal economics.
[2] Abbreviation for Information & Communications Technology.
[3] The algorithms that enable AI to work depend on the data they receive and are trained on. When a biased outcome results from the working of those algorithms, this is called algorithmic bias. A minimal illustrative sketch is provided after these notes.
[4] The way entrepreneurship is shaped so that innovative and sustainable start-up opportunities remain available to people and the uniqueness of economic opportunities is not exhausted for future entrepreneurs.
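Illustrative sketch for footnote [3]: the short Python example below is hypothetical and invented for this briefing rather than drawn from the White Paper; the postcodes, approval figures and the naive approval rule are assumptions used purely for illustration. It shows how a rule “learned” from skewed historical decisions reproduces that skew even though no protected attribute is ever given to the rule, with a feature (here, a postcode) acting as a proxy.

# Hypothetical, minimal illustration of algorithmic bias (not from the White Paper).
# Invented historical records: (postcode, approved). In this made-up data,
# applicants from postcode "N1" were rarely approved in the past.
from collections import Counter

history = [
    ("S1", True), ("S1", True), ("S1", True), ("S1", False),
    ("N1", False), ("N1", False), ("N1", False), ("N1", True),
]

# "Training": compute the historical approval rate per postcode.
approvals = Counter(p for p, approved in history if approved)
totals = Counter(p for p, _ in history)
approve_rate = {p: approvals[p] / totals[p] for p in totals}

def decide(postcode):
    # The learned rule simply mirrors the historical approval rate,
    # so past skew becomes future skew.
    return approve_rate.get(postcode, 0.0) >= 0.5

print(decide("S1"))  # True  (75% historically approved)
print(decide("N1"))  # False (25% historically approved)

Two otherwise identical applicants receive different outcomes solely because the historical data encoded different approval rates for their postcodes; this is the kind of data-dependent skew that the White Paper’s emphasis on data quality and ‘clear information’ is intended to surface.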