AI-generated Evidence and the EU AI Act
- Elena Zambelli - Associate Editor at iGlobal.Lawyer
- Jan 30 (Updated: Apr 10)
Abstract
The question posed by Alan Turing in 1950, ‘Can machines think?’, remains relevant today as AI tools increasingly enter judicial proceedings.
With the European Regulation on AI (hereinafter "EU AI Act" or "AI Act") explicitly admitting the use of AI for fact-finding, this article examines the admissibility and reliability of machine-generated evidence in criminal trials. Such evidence can be generated by both forensic and non-forensic AI, such as face recognition and DNA typing systems or Machine Learning (ML) applications in transportation and medical diagnostics.
However, if an ML tool is used, for instance, to establish the cause of death in a trial, perhaps the question worth asking is: can machines tell the truth? And, more importantly, how can we make sure that they are telling the truth?
AI in Criminal Trials
The EU AI Act classifies as high-risk those AI systems “intended to be used by a judicial authority […] in researching and interpreting facts and the law and in applying the law to a concrete set of facts” (1), thus implicitly allowing their use in judicial trials. The question therefore arises as to how, rather than whether, machine-generated evidence can be used in criminal trials without compromising the defendant’s right of defence and the principles of the adversarial process.
The AI Act establishes a framework for incorporating AI systems into the justice system in which human oversight plays a fundamental role: Articles 8 to 15 of the Regulation impose specific requirements that high-risk systems must meet, including risk assessment and data governance techniques, human oversight and transparency standards.
This raises the question of whether the court’s free evaluation of machine-generated evidence is a sufficient guarantee of a fair adversarial trial and, most importantly, how such an evaluation can be conducted, given the particular nature of algorithmic evidence.
Faced with such evidentiary findings, human oversight can be either post-hoc, in the form of a scientific validation of the algorithmic output that does not itself rely on AI, or ante-hoc, as an evaluation of its judicial acceptability. Problems clearly arise only where a post-hoc human review of the automatically generated output is not feasible, that is, when traditional non-AI techniques fail to provide satisfactory outcomes.
However, the judge remains the peritus peritorum and must therefore be equipped with the technical tools needed to fulfil this role effectively rather than superficially. Naturally, in the adversarial process, control over the evidence is also a fundamental guarantee for the defendant and must therefore be assured.
Unfortunately, the requirements outlined by the European AI Act seem to fall short of their intended purpose of preventing violations of the human rights of the people involved, including the right to a fair trial enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union. As this article will attempt to demonstrate, they neither guarantee the explainability of AI-generated claims nor effectively illuminate the "black box" of algorithmic processes.
Yet explainability of AI-generated evidence is necessary to ensure the defendant’s right of defence, in particular the right to effectively challenge the evidence against them. This right is realized through the adversarial examination of evidence in court; it lies at the constitutional core of the judicial process and lends democratic legitimacy to the judicial system.
The European Regulation's requirements for high-risk systems are fundamentally based on two key principles: data transparency and human oversight.
In fact, for high-risk systems to be placed on the market and lawfully used, they must comply with the standards of Articles 8 et seq. of the AI Act, which include establishing a risk management system, implementing data governance practices, and meeting transparency obligations regarding datasets, dataset validation, testing processes, and data collection and origin. Given the enormous amounts of data involved, it is clearly not possible for developers to provide transparency for every single piece of data used; what is feasible, and presumably useful, is to provide information about the design choices, the quantity and suitability of the datasets, and any potential bias.
Transparency and the Right to an Explanation
The goal of this legislation seems to be enhancing AI systems and preventing them from being affected by bias “that could have a negative impact on fundamental rights or lead to discrimination prohibited under Union law” (Art. 10) (2), yet it makes no effort to address the opacity of the algorithmic “black box”. Indeed, no mention is made of the causal explainability of AI outputs or of the transparency of the training processes of predictive algorithms.
The transparency required by the European Union therefore appears to be merely superficial, stopping at the level of the data. This is probably due to the awareness that, at the current level of development, it is technically impossible to give an explanation of an algorithmic output that would provide a causal basis for its prediction.
Hence, human oversight risks remaining highly superficial: necessary, yet insufficient for verifying the reliability of the output and ensuring its contestability. It may ultimately amount to little more than a confirmation that the computer is performing the calculation correctly.
The AI Act does, in fact, mention the right to an explanation, specifically in Article 86, which states that:
“Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system, […] and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.” (3)
In the context of judicial administration, this rule could imply an obligation of “enhanced” reasoning whenever a court’s decision relies, fully or partially, on AI outputs. While the rule is valuable in that it addresses the issue of explainability of algorithmic assertions, it does not truly resolve the concern raised above. Rather, it merely shifts the problem to the level of the judgment’s reasoning, ultimately placing it under the scrutiny of a higher court, a form of review that is still neither well defined nor specifically regulated.
The merit of this rule, however, lies in the fact that it implicitly provides judges with guidance on how to articulate the reasoning of a judgment when dealing with digital evidence: it stresses the need to account, clearly and meaningfully rather than superficially, for the role the system played in shaping their decision-making process.
Conclusion
It seems that Article 86 seeks to safeguard, as far as judicial administration is concerned, the so-called “overall fairness of the proceedings”, the parameter used by the European Court of Human Rights (ECtHR) to assess violations of the right of defence in criminal proceedings.
Under the ECtHR’s scrutiny, even where the defendant did not have the opportunity to effectively challenge the evidence against them, no violation is found if other elements supported the conviction.
Nonetheless, while such an “enhanced reasoning” obligation may prevent violations of the fundamental right to a fair trial, it fails to address the concern about the admissibility of algorithmic evidence in an (adversarial) trial. Such evidence remains a “black box” and cannot be reliably assessed through scientific methods or effectively challenged by the parties involved.
This raises a real concern about the admissibility of this type of evidence, as the principles of the adversarial process are not only guarantees for the persons involved in a criminal proceeding, but they are also the pillars for the social acceptability of judicial decisions and the democratic legitimacy of the justice system.
References
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (AI Act), Annex III, point 8.
2. AI Act, supra note 1, Art. 10. Here the Regulation likely references the well-known issues surrounding the COMPAS software used by U.S. courts to assess the risk of recidivism.
3. AI Act, supra note 1, Art. 86.