Comments on the AI White Paper issued by the European Commission

Brief Introduction to the Regulatory Landscape

The White Paper on Artificial Intelligence – a European Approach to Excellence and Trust (the White Paper) issued by the European Commission (EC) follows a series of regulatory initiatives related to AI liability.

One of the first steps, part of the EU’s “Legislative Train Schedule”[1] concerning the regulation of AI systems in Europe, was taken in January 2017 by the European Parliament’s Committee on Legal Affairs (JURI), which issued a report with recommendations on Civil Law Rules on Robotics (the EU Civil Rules).[2] This report includes a series of recommendations for the European Commission concerning future regulations on AI systems. JURI also issued a study in which the EU Civil Rules were analysed and commented upon from a legal and ethical perspective.[3]

In April 2018, the European Commission released a Commission Staff Working Document, “Liability for emerging digital technologies”,[4] which emphasizes the challenges that AI systems raise for the applicability of traditional tort law mechanisms, with a special focus on Directive 85/374/EEC, the Product Liability Directive (the PLD).[5]

The Product Liability Directive was also the subject of an Evaluation issued by the European Commission in May 2018 (the PLD Evaluation).[6] The document reveals that certain concepts under the PLD (for example, the notions of “product” and “producer”, and the burden of proof) will require a new approach in order to be applicable to advanced new technologies.

In parallel, in March 2018, the European Commission decided on the establishment of the Expert Group on Liability and New Technologies. A subdivision of this Expert Group, the New Technologies Formation, released a report in November 2019 on “Liability for Artificial Intelligence and other emerging technologies” (the Expert Group Report).[7] With this report, the EU took an important step in framing future liability regimes for AI.

The recommendations from the Expert Group Report were issued one month after the Opinion of Germany’s Data Ethics Commission, dated October 2019. Both documents show similar views on various matters; for example, they both suggest that future AI regulations should not consider assigning legal personality to AI systems.

A. Recommendations on AI Governance

i. From acknowledging challenges posed by AI to initiating governance

The White Paper explains how AI challenges existing regulations and offers a series of solutions for addressing those challenges. Some of these challenges have been mentioned before in the work of the Commission. For example, the discussion regarding the need for a new definition of “product” under the Product Liability Directive, one that would also include software, appears in “Liability for emerging digital technologies”, in the PLD Evaluation and in the Expert Group Report.

The White Paper builds upon the previous work of the Commission and of the expert groups. For example, according to the Expert Group Report, strict liability is an appropriate response to the risks posed by emerging digital technologies that have the potential to cause significant harm, and it should lie with the person who is in control of the risk connected with the operation of such technologies and who benefits from their operation (i.e. the operator). The Expert Group Report recommends applying the strict liability regime to AI systems that have the potential to cause significant harm and that are operated in non-private environments (e.g. drones, vehicles). Strict liability would therefore apply where: (i) the objects have a certain weight; (ii) they are deployed at a certain speed (i.e. they are not stationary objects); and (iii) a large number of people are exposed to the resulting risks.

For all the other AI applications (i.e. those that are stationary and do not have the potential of harming a large number of people), pursuant to the White Paper, the Commission considers an adaptation of the burden of proof concerning causation and fault.

As such, one can note a tendency of the Commission to reiterate certain legal problems across subsequent AI reports and recommendations. Even though such reiterations serve to emphasize the importance of a topic, at this particular moment a series of challenges have already been properly addressed, multiple times, by the Commission or by the expert groups.

Such challenges now require mitigating actions, i.e. moving forward from analysis and acknowledgement to regulatory action. A close analysis of all of the Commission’s work on AI regulatory topics (as described above, under the Brief Introduction to the Regulatory Landscape) reveals the need for an “action-oriented” framework as the next step in the Legislative Train Schedule.

This new framework would involve setting clear deliverables under a specific timeline, for example: (i) delivering a report with a binding interpretation of the Product Liability Directive; (ii) gathering data and preparing an assessment report for identifying high-risk and low-risk AI applications; and (iii) if a strict liability regime is found applicable to high-risk AI applications, formulating a clear decision on the adaptation of the burden of proof for low-risk AI applications.

ii. Policy synchronization

In the quest to identify high-risk and low-risk AI applications, a sector-by-sector analysis might be required. In such a case, a corresponding alignment between AI policies and sector-specific regulations would prove necessary.

The White Paper identifies a number of key sectors, alongside AI, in which the EU has the potential to become a global leader. These key sectors include industry, health, transport, finance, agri-food value chains, energy/environment, forestry, earth observation and space.

The Commission recommends a synchronization of projects in these key sectors: for example, the future European Defence Fund and Permanent Structured Cooperation (PESCO) will provide opportunities for research and development in AI, and these projects should be synchronized with the wider EU civilian programmes devoted to AI.

In addition to project synchronization, a synchronization of the policies associated with these sectors would also be recommended, as well as an institutional cross-check in the relevant domains.

Space and AI are two key sectors in which the EU has the potential to become a global leader; accordingly, space policies and AI policies should be properly synchronized. Several supporting arguments are set out below:

i. The Proposal for a Regulation of the European Parliament and of the Council establishing the space programme of the Union and the European Union Agency for the Space Programme[8] (the Programme) already provides that the synchronization of policies needs to be assessed.

The section “Consistency with other Union policies” mentions:

The operation of space systems such as EGNOS, Galileo or Copernicus directly complements actions engaged in under many other Union policies, in particular research and innovation policy, security policy and migration, industrial policy, the common agricultural policy, fisheries policy, trans-European networks, environment policy, energy policy and development aid.

ii. The increasing role of autonomous and innovative space technology needs to be taken into account. More and more space infrastructure is equipped with advanced autonomous capabilities.

Article 3 of the proposed Regulation (Components of the Programme), provides:

The Programme shall consist of the following components:

(a) an autonomous civil global navigation satellite system (GNSS) under civil control comprising a constellation of satellites, centres and a global network of stations on the ground, offering positioning, navigation and time measurement services and fully integrating the needs and requirements of security (‘Galileo’); (…)

(c) an autonomous, user-driven, Earth observation system under civil control, offering geoinformation data and services, comprising satellites, ground infrastructure, data and information processing facilities, and distribution infrastructure, and fully integrating the needs and requirements of security (‘Copernicus’);

In addition, Article 6 (Actions in support of an innovative Union space sector) provides:

The Programme shall support:

(a) innovation activities for making best use of space technologies, infrastructure or services;

(b) the establishment of space-related innovation partnerships to develop innovative products or services and for the subsequent purchase of the resulting supply or services;

iii. The increasing number of space actors and activities is challenging existing legal norms. For example, who will be liable for the damage caused by a collision between two space objects equipped with autonomous capabilities?

Such concerns have been raised by European actors since 2016. The Space Strategy for Europe issued in 2016[9] provides:

Section “4. Strengthening Europe’s Role as a Global Actor and Promoting International Cooperation”

Increased human activity in space and the rapid growth of new entrants is testing the UN conventions on outer space to the limit, including on issues of space traffic management and mining. Europe should be among the leaders in navigating global challenges such as climate change or disaster risk reduction, while promoting international cooperation and building the global governance or appropriate legal frameworks for space. (…)

The EU should lead the way in addressing the challenges posed by the multiplication of space actors, space objects and debris in line with the UN conventions related to space.

Given the above, the recommendation of synchronizing draft AI policies with space policies seems essential for achieving regulatory consistency and for ensuring security and economic development within Europe. The same approach should be applied to each sector analysed when identifying high-risk and low-risk AI applications.

B. Recommendations on assigning liability to AI Systems

The Liability Pyramid

One of the characteristics of AI systems is the plurality of actors involved in the supply chain, such as the AI engineer, the producer, the operator, the central backend provider, the owner or the user.[10]

The White Paper identifies a challenge in allocating liability for damage caused by AI systems, due to this plurality of actors. However, the plurality of actors should not be treated as a stand-alone problem; it should be correlated with the degree of autonomy of AI systems, as well as with the various degrees of human control attached to such systems.

The degree of human control is inversely proportional to the degree of autonomy of the AI system. For basic AI systems, human control is essential; without it, they would be unable to operate. Control starts to decrease in the case of advanced unsupervised machine learning techniques, such as deep learning, where the AI system autonomously identifies patterns in the available data sets.

Due to the various degrees of autonomy that an AI system may display, as well as the plurality of parties engaged in developing and operating the system (the AI engineer, producer, operator, central backend provider, owner or user), the formula for allocating liability should correspond to a liability pyramid.[11] The foundation of the pyramid represents the highest degree of control exercised by these actors (e.g. in the case of basic AI) and, therefore, a high degree of responsibility. The top layers of the pyramid represent the lowest level of human control, with a correspondingly high degree of AI autonomy (e.g. deep learning techniques). Nevertheless, a low degree of human control cannot entail entirely removing human responsibility.
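
To make the pyramid concrete, a minimal sketch is offered below. The autonomy tiers, their numeric weights and the responsibility floor are purely illustrative assumptions, not figures taken from the White Paper or the Expert Group Report; the sketch only shows the shape of the relationship, i.e. human responsibility decreasing as AI autonomy increases, but never reaching zero.

```python
# Illustrative sketch of the liability pyramid: human responsibility
# decreases as the AI system's autonomy increases, but is floored so
# that it never disappears entirely. All tiers and weights below are
# hypothetical values chosen only to show the shape of the relationship.

AUTONOMY_TIERS = {
    "basic_ai": 0.1,             # near-total human control (pyramid base)
    "supervised_learning": 0.5,  # partial human oversight (middle layers)
    "deep_learning": 0.9,        # minimal human control (pyramid top)
}

MIN_HUMAN_RESPONSIBILITY = 0.2   # responsibility is never fully removed


def human_responsibility(tier: str) -> float:
    """Share of responsibility retained by the human actors for a tier."""
    autonomy = AUTONOMY_TIERS[tier]
    # Inverse relationship between control and autonomy, with a floor.
    return max(1.0 - autonomy, MIN_HUMAN_RESPONSIBILITY)


for tier in AUTONOMY_TIERS:
    print(f"{tier}: human responsibility = {human_responsibility(tier):.1f}")
```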

Therefore, in the case of advanced autonomous AI systems, it is worth exploring the introduction of a special liability regime. Even if human control in these cases is low, one cannot neglect the importance of the AI “teacher”[12] (i.e. the AI engineer), who is in charge of designing the framework in which the system will function. Given the autonomy these systems will display once fully operational, their initial setting and the way they are “taught” (i.e. designed, structured) to function have a significant influence on how they will engage in decision-making and initiate or refrain from taking actions. For example, if an AI system used in autism therapy is initially fed with data predominantly reflecting a large number of women from low-income countries affected by autism at an early age, there is a high chance that the system will develop a bias and conclude that young women from low-income countries are usually predisposed to developing autism.
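
As a toy illustration of how such a bias arises, consider the sketch below. The records, group labels and rates are entirely fabricated for illustration and do not reflect any real autism statistics; the point is only that a frequency-based model inherits whatever skew its training sample carries.

```python
# Toy illustration of sampling bias: a naive frequency-based model
# trained on a skewed sample inherits the skew of its inputs.
# All records below are fabricated purely for illustration.

training_records = (
    [("group_a", True)] * 80    # over-sampled group, mostly positive labels
    + [("group_a", False)] * 10
    + [("group_b", True)] * 5   # under-sampled group
    + [("group_b", False)] * 5
)


def learned_rate(group: str) -> float:
    """Positive-diagnosis rate the model 'learns' for a group."""
    labels = [label for g, label in training_records if g == group]
    return sum(labels) / len(labels)


# The model concludes group_a is far more likely to be affected: an
# artefact of how the data was collected, not a real-world pattern.
print(learned_rate("group_a"))  # ~0.89
print(learned_rate("group_b"))  # 0.50
```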

Since the “teacher” will lay the foundations for the AI system’s future development, any envisaged regulations on AI liability should consider implementing a certification system for “teachers”. This would mean that AI engineers should undergo specific training in order to acquire additional skills for developing complex AI systems (covering, for example, IT ethics, the socio-economic impact of new technologies and legal responsibility for AI). The certification could be part of a broader endeavour to guarantee high quality standards for autonomous AI, together with specific quality seals (i.e. protective measures that would guarantee that the algorithms are reliable) and/or technical standards adopted by accredited standardisation organisations, which would ensure that AI systems are reliable and secure.[13]

By implementing the liability pyramid formula, both types of AI applications, basic and advanced, can be addressed by the legislator, thus avoiding legal uncertainty and a lack of clarity.

C. Conclusions

The European Commission has taken important steps in the field of AI governance by identifying a series of challenges that AI systems pose to existing legislation. The next step in the Legislative Train Schedule requires an action-oriented approach, including a series of deliverables to be prepared by the expert groups.

The above commentaries were submitted to the European Commission as part of the public consultation process concerning the White Paper (April 2020).

[1] European Parliament, Legislative Train Schedule (2016) http://www.europarl.europa.eu/legislative-train/theme-connected-digital-single-market/file-artificial-intelligence-for-europe accessed 11 March 2019

[2] European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics (2017) http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html accessed 11 March 2019

[3] European Parliament, Directorate-General for Internal Policies, Study for the JURI Committee (2016) http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf accessed 11 March 2019

[4] Commission Staff Working Document, Liability for emerging digital technologies, Accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions Artificial intelligence for Europe (2018) https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A52018SC0137 accessed 11 March 2019

[5] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products

[6] Commission Staff Working Document, Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (2018) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018SC0157 accessed 11 March 2019

[7] Report from the Expert Group on Liability and New Technologies – New Technologies Formation, European Commission, Justice for Consumers (2019) https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=63199 accessed 5 December 2019

[8] The Proposal for a Regulation of the European Parliament and of the Council establishing the space programme of the Union and the European Union Agency for the Space Programme and repealing Regulations (EU) No 912/2010, (EU) No 1285/2013, (EU) No 377/2014 and Decision 541/2014/EU

[9] European Commission, Space Strategy for Europe, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM(2016) 705 final, Brussels, 26.10.2016

[10] Report from the Expert Group on Liability and New Technologies – New Technologies Formation, European Commission, Justice for Consumers (2019) https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=63199 accessed 5 December 2019

[11] Bratu, Linden, Assigning Liability for Damages to Artificial Intelligent Systems Used for Autism Therapy. A European Approach, https://www.researchgate.net/publication/340004024_Assigning_Liability_for_Damages_to_Artificial_Intelligent_Systems_Used_for_Autism_Therapy_A_European_Approach/citation/download

[12] The term is used in the report on Civil Law Rules on Robotics issued by the European Parliament’s Committee on Legal Affairs (JURI)

[13] Opinion of the Data Ethics Commission, Articles 60 and 63
