Article

Trustworthy AI: self-assessment tools for companies

27 May 2022

Whilst the legislative progress of the Proposal (“the Proposal”) for a “European Regulation on Artificial Intelligence” (the so-called Artificial Intelligence Act, “AIA”) continues, joint research by the universities of Bologna and Oxford has produced “capAI”, a conformity assessment procedure for AI systems and the first supportive tool designed to help companies comply with the pending legislation.

"In such an uncertain and evolving regulatory framework as AI, soft law tools and self-regulation provide an essential landmark, both for the protection of AI operators and to safeguard management bodies."

CapAI uses an innovative auditing methodology to provide organisations with a means of measuring the compliance of their implemented AI systems with the AIA and, ultimately, of sharing a scorecard with customers to demonstrate those systems’ reliability.

The tool specifically addresses two target audiences based on different levels of risk foreseen in the Proposal:

(a) providers of high-risk AI systems, to verify, at least through an internal control, compliance with AIA requirements for placing those systems on the market; and

(b) providers of low-risk AI systems, which are not subject to such compliance assessments, to operationalise the commitments set out in their own codes of conduct.

Consistent with the Proposal, the project’s main feature is its ethics-based approach. CapAI’s core goal is to translate ethical principles with legal relevance into concrete criteria, in order to verify whether an AI system is in line with fundamental EU values and rights and can therefore be considered trustworthy.

Trustworthiness is capAI’s all-encompassing principle, embracing three essential requirements that AI systems should fulfil: being “legally compliant, ethically sound, and technically robust”. In investigating compliance with these principles, capAI’s auditing methodology considers the entire life cycle of the relevant products, divided into five phases: design, development, evaluation, operation and retirement.

The assessment conducted across the above phases results in three documents, which organisations can use to demonstrate their systems’ adherence to the AIA:

(i) an Internal Review Protocol (“IRP”), consisting of an in-depth assessment of the tools the organisation has put in place to prevent possible failures and of its processes for correcting them. This document, whilst designed to remain confidential, may be used externally, for example in B2B contracting or litigation;

(ii) a Summary Datasheet (“SDS”), which summarises key information about the system. This document serves specifically to fulfil Article 60 of the Proposal, which requires that certain key data on stand-alone, high-risk AI systems be registered in a special database set up by the European Commission; and

(iii) an External Scorecard (“ESC”). This is an optional tool to provide an overview of the system’s health status with regard to the adoption of good practices and adherence to ethical principles.

The outputs of the capAI procedure demonstrate that it is clearly aimed at promoting best practice in the AI market and at preventing the ethical failures, with their consequent legal risks, to which the market is exposed.

CapAI’s goal is to provide organisations with an assessment procedure that is easy to apply and, above all, available at limited cost, particularly when compared with the financial penalties that potential legal sanctions would entail. Given the Proposal’s laborious progress, capAI also allows for a long experimentation phase, thereby potentially mitigating any disruptive consequences that might follow the Proposal’s entry into force.

In addition, given the current legal uncertainty around the liability of various parties involved in the life cycle of AI systems (including designers, manufacturers, suppliers and users), the adoption of an internal risk assessment and prevention toolbox may be regarded as a demonstration of a company’s prudent and diligent conduct.

AI operators will therefore need to move forward with an eye on existing and incoming rules as well as on soft law and self-regulatory instruments. The latter are currently an important point of reference in an evolving regulatory framework, both for the protection of companies and of their managing bodies.
