The transparency of algorithms between the Artificial Intelligence Act and the Italian Courts
12 July 2021
"The Artificial Intelligence Act aims to define the regulatory framework for the design and marketing of AI systems, following a risk prevention approach. If passed, the Artificial Intelligence Act will apply to both EU manufacturers and suppliers, as well as companies exporting their products into the EU. "
- The Artificial Intelligence Act and the obligation of transparency of algorithms
On April 21, 2021, the EU Commission issued a proposal for a European regulation laying down harmonised rules on artificial intelligence (“AI”), the so-called Artificial Intelligence Act.
The Artificial Intelligence Act aims to define the regulatory framework for the design and marketing of AI systems, following a risk prevention approach. If passed, the Artificial Intelligence Act will apply both to EU manufacturers and suppliers and to companies exporting their products into the EU.
The proposal classifies AI systems according to the risk they pose: unacceptable (in which case the system is prohibited), high, low or minimal. High-risk AI systems (including those used in critical infrastructure such as transportation, the safety components of certain products, systems governing access to education and vocational training, systems used in the administration of justice, and migration management and border control systems) will be subject to specific design and implementation obligations and to prior certification of conformity before marketing.
Among the obligations binding developers, manufacturers, and suppliers of high-risk AI systems is the obligation for transparency.
Article 13 of the Artificial Intelligence Act requires that high-risk systems be designed, developed and supplied in such a way as to ensure the transparency of their operation, and that they be accompanied by instructions for appropriate use.
In particular, information must be provided on the following (a schematic example is sketched after this list):
- the identity and contact details of the providers;
- the characteristics, capabilities and performance limits of the system (including its purpose);
- the level of security;
- any known or foreseeable circumstances related to the use of the system that may cause risks to health, safety and fundamental rights;
- the performance of the system as regards the group of people on which the system is intended to be used;
- the input data and, in general, data sets used;
- possible changes to the system and its operation during its life cycle; and
- the maintenance measures to be adopted.
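By way of illustration only, the categories of information listed above could be collected in a structured disclosure record. The Python sketch below is a hypothetical schema: the class and field names are invented for this example, since the proposal prescribes the content of the disclosure, not any particular data format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HighRiskAIDisclosure:
    """Hypothetical record of the information Article 13 would require.

    Field names are illustrative only; they paraphrase the items listed
    in the article above and do not appear in the proposal itself.
    """
    provider_identity: str                # identity of the provider
    provider_contact: str                 # contact details of the provider
    intended_purpose: str                 # purpose of the system
    capabilities_and_limits: str          # characteristics, capabilities, performance limits
    security_level: str                   # level of security
    foreseeable_risks: List[str] = field(default_factory=list)  # risks to health, safety, fundamental rights
    target_group_performance: str = ""    # performance for the group the system is intended for
    input_data_and_datasets: str = ""     # input data and data sets used
    lifecycle_changes: str = ""           # possible changes over the life cycle
    maintenance_measures: str = ""        # maintenance measures to be adopted
```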
"Article 13 of the Artificial Intelligence Act, requires that high-risk systems be designed, developed and supplied in such a way as to ensure the transparency of their operations, as well as providing instructions for appropriate use. "
Prior to the Artificial Intelligence Act, the principles of transparency had already emerged in several European soft law texts on AI.
For example, in the Ethics Guidelines for Trustworthy Artificial Intelligence (2019), drafted by a group of experts appointed by the EU Commission in the context of the 2018 Coordinated Plan on Artificial Intelligence, it was specified that transparency should concern the data, system and business model. Connected to transparency, explainability would be required both in relation to the technical processes of an AI system and the related human decisions (e.g., the application areas of an AI system). Explainability requires that the decisions made by an AI system can be understood and traced by human beings.
- The first decisions of the Courts
Within this framework, in the Italian context, the first court decisions dealing with algorithms have also focused on the issue of transparency. This point was recently addressed by the Court of Cassation in decision no. 14381 of 25 May 2021, in which it stated that “in terms of processing of personal data, consent is validly given only if expressed freely and specifically with reference to clearly identified processing; it follows that in the case of a web platform (with annexed computer archives) aimed at drawing up reputational profiles of individuals or legal entities, based on a calculation system with an algorithm aimed at establishing reliability scores, the requirement of awareness cannot be considered satisfied if the executive scheme of the algorithm and the elements of which it is composed remain unknown or cannot be known by the parties concerned”.
Again recently, with decision no. 881 of 4 February 2020, the Council of State established that:
“In order to allow for the full knowability of the form used and the criteria applied with the algorithm, it is necessary to guarantee broad transparency, which must cover every aspect of the formation and use of the computer medium, so as to ensure the knowability of the identity of its authors, the procedure used for its elaboration, the decision mechanism and the imputability of the responsibilities arising from the adoption of the automatic measure.”
Similarly, with decision no. 8472 of 13 December 2019, the Council of State had specified – in a matter concerning the placement of teachers on a roster and their assignment to certain locations through an automated process – that transparency must be ensured: the “mechanism through which the robotised decision (i.e. the algorithm) is realised must be ‘knowable’, according to a reinforced declination of the principle of transparency, which also implies that of the full knowability of a rule expressed in a language different from the legal one.
This knowability of the algorithm must be guaranteed in all aspects: from its authors to the procedure used for its elaboration, to the decision mechanism, including the priorities assigned in the evaluation and decision-making procedure and the data selected as relevant. This is to be able to verify that the criteria, assumptions and outcomes of the robotised procedure comply with the prescriptions and purposes established by law or by the same administration upstream of that procedure and so that the modalities and rules on the basis of which it has been set are clear – and consequently open to scrutiny”.
"In addition, it should be kept in mind that discussing transparency could be misleading – in the presence of a technology characterised by such complexity that it would be difficult for the recipient of the information to effectively understand all aspects of the operation of the system with which it interacts – (complete transparency would mean making a matrix of numbers available to the recipient). The risk is that this is merely a formal protection."
- Some unresolved issues
The issue of transparency therefore appears central to discussions of AI systems, but some aspects remain unclear: (i) the guidance offered by the legislative texts and by the courts does not seem sufficient to delimit the content of the disclosure obligation; (ii) it will be necessary to verify whether complete disclosure is feasible, given the particular complexity of the technology (consider, for example, so-called black box algorithms, i.e., systems whose mechanisms and internal logic are opaque or inaccessible to human understanding); and (iii) an obligation of transparency obviously risks conflicting with the protection of intellectual property, in particular trade secrets.
As far as the first two aspects are concerned, the approach of the legislator and of the interpreter could appear somewhat naïve in the face of a technology that is not only particularly complex but that also owes its effectiveness to self-learning: so-called machine learning underlies most current AI systems. Under this approach, the system's behaviour is not fully pre-determined by explicit instructions; rather, the machine learns from experience (so-called self-learning) through the data it receives as input. Such systems are therefore not confined to the programmer's instructions and can produce unexpected outputs depending on the data they acquire during operation.
Consequently, much of the information relating to the operation of the system, possible risks and possible changes may not be in the possession of the party required to fulfil the disclosure obligations, since it will be the result of interactions between the system and the data entered.
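A minimal sketch in plain Python (with toy numbers and a hypothetical scenario) may make this concrete: in a system that keeps learning after deployment, the parameters that determine its decisions are shaped by data the provider never saw, so they cannot be exhaustively documented in advance.

```python
def predict(w, b, x):
    """Simple linear decision rule: 1 if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def update(w, b, x, y, lr=0.1):
    """Perceptron-style online update: the weights drift with each runtime example."""
    err = y - predict(w, b, x)
    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b + lr * err

w, b = [0.0, 0.0], 0.0  # parameters as shipped by the provider
# Toy data seen only in production, after any disclosure was drafted:
runtime_stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1)]
for x, y in runtime_stream:
    w, b = update(w, b, x, y)
print(w, b)  # the decision rule now reflects data unknown at disclosure time
```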
Even more opaque are deep learning systems (a subset of machine learning) based on artificial neural networks, whose effectiveness is demonstrated empirically even though their operation largely defies theoretical explanation.
In addition, it should be kept in mind that talk of transparency could itself be misleading: faced with a technology of such complexity, it would be difficult for the recipient of the information to understand effectively every aspect of the operation of the system with which they interact (complete transparency would mean making a matrix of numbers available to the recipient). The risk is that the protection remains merely formal.
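To illustrate, here is a deliberately trivial Python sketch of what "complete transparency" about even a tiny neural network would amount to. The weight values below are arbitrary placeholders, but a dump of a real model would be equally uninformative to the person affected by a decision.

```python
import random

random.seed(42)
# "Full disclosure" of a tiny 3x4 hidden layer plus an output layer:
# nothing but matrices of floating-point numbers, with no individual
# entry corresponding to any human-readable rule.
hidden_weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
output_weights = [random.uniform(-1, 1) for _ in range(3)]

for row in hidden_weights:
    print(["%+.3f" % v for v in row])
print(["%+.3f" % v for v in output_weights])
```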
The other point raised by several commentators (as highlighted above) is the balance between the requirements of intellectual property protection and disclosure obligations.
Some insights may come from Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure. Recital 11 provides that the Directive should not affect the application of EU or national rules providing for the disclosure of information, including trade secrets, to the public or to public authorities. Recital 15 foresees the need to identify the circumstances in which the legal protection of trade secrets is justified, thus highlighting how it cannot be a protection without limits.
"The Ethics Guidelines for Trustworthy Artificial Intelligence provide a number of guidelines that developers should consider for self-assessment, including ensuring that: "the explanation of why a system has made a certain decision that has produced certain results can be made understandable to users who want it”."
It will be a matter of selecting the information that is relevant for the user of the system, omitting, where possible, information that pertains to strictly industrial matters. This is a difficult and uncertain balancing act that will have to be evaluated on a case-by-case basis.
- Concluding remarks
The question of the transparency of algorithms is likely to become increasingly important, especially since, if the approval process is completed, the Artificial Intelligence Act provides for particularly severe sanctions for non-compliance with its obligations.
Uncertainty surrounding these obligations forces operators in the sector to carry out case-by-case evaluations, creating a context in which companies may struggle to foresee the risks connected with their investments and products.
The Ethics Guidelines for Trustworthy Artificial Intelligence set out a number of points that developers should consider for self-assessment, including ensuring that “the explanation of why a system has made a certain decision that has produced certain results can be made understandable to users who want it”; verifying that the system was planned from the start with interpretability in mind; verifying that research was carried out into the simplest and most interpretable model possible; considering whether the training and testing data can be analysed and updated over time; and considering whether, once the model has been developed, its inner workings can be examined.
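Purely as an illustration, such self-assessment items could be tracked in a simple checklist structure. The Python sketch below paraphrases the Guidelines in its question wording; the structure and function names are invented for this example.

```python
# Hypothetical encoding of the explainability self-assessment items;
# the questions paraphrase the Ethics Guidelines, the structure is illustrative.
EXPLAINABILITY_CHECKLIST = [
    "Can the reason for a given decision be explained to users who ask for it?",
    "Was the system planned from the start with interpretability in mind?",
    "Was the simplest, most interpretable model that suffices investigated?",
    "Can the training and testing data be analysed and updated over time?",
    "Can the model's inner workings be examined once it has been developed?",
]

def open_items(answers):
    """Return the checklist questions not yet answered affirmatively."""
    return [q for q, ok in zip(EXPLAINABILITY_CHECKLIST, answers) if not ok]

print(open_items([True, True, False, True, False]))
```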
What seems clear is that companies developing these systems (or using them) must adopt (or verify that they have adopted) an approach of transparency by design. This change of perspective entails giving users of an AI system the ability to understand independently at least the rationale of the system, the main lines of its operation, and its impact and consequences.
Once this approach has been adopted in their business model, companies will then be able, on a case-by-case basis, to assess the limits of disclosure where the need to protect intellectual property prevails or where there are obvious technical impossibilities.
The hope, however, is that the definition of a more certain regulatory framework and the consolidation of a jurisprudential practice will make it possible to achieve greater clarity for companies in the sector. This should be done in the context of an ongoing dialogue between jurists and computer scientists, or in any case AI experts, so as to create rules and jurisprudential guidelines that are appropriate and consistent with the technologies to which they are to be applied.