
Must all Algorithmic Discrimination be Prevented and Eliminated - or Need it only Be Minimised?


Author:

Douwe Korff - Comparative and International Lawyer

Emeritus Professor of International Law at London Metropolitan University

Associate of the Oxford Martin School of the University of Oxford


1. Introduction


Under traditional---but still very much valid---international Human Rights law including the EU Charter of Fundamental Rights, discrimination---the making of “distinction[s] of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status” to use the words of the Universal Declaration of Human Rights---is forbidden, i.e., unlawful; it must be “eliminated”: see the titles of the main anti-discrimination treaties (1).


In that regard, it is important to stress that international Human Rights law defines 'discrimination' as: (2)


"Any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin, sex or religion or belief [or other characteristics as set out above] which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field of public life." (emphases added). If anything (as indicated in the square brackets), this now extends to distinctions (etc.) on the basis of health or disability, sexual orientation, age and, to some extent, nationality.


The above definition indisputably also covers discriminatory treatment in the granting of political rights such as freedom of expression or association, or of economic, social or cultural rights such as the rights to housing, education, welfare, financial services, etc., of women or people belonging to ethnic or religious minorities, or people with physical or mental handicaps, or with a specific sexual orientation, when such discrimination results from biases in algorithmic/AI-based processing of personal data (even if this is unintentional).


In section 2, I will show, first, at 2.1, how the EU General Data Protection Regulation confirms the above in its rules on automated decision-making and profiling. At 2.2, I show how more recent EU regulations of digital matters seem to depart from this and could---in my opinion, wrongly---be read as requiring Big Tech companies only to “minimise” or “mitigate” discrimination. I set out my analysis in section 3, and my summary and conclusion in section 4.


2. The GDPR and more recent EU regulation of digital matters


2.1 The rules on automated decision-making and profiling in the GDPR


The first paragraph of Article 22 of the General Data Protection Regulation (GDPR) stipulates that, subject to certain conditional exceptions, set out in the second paragraph: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."


Recital 71 expands on this by making clear, inter alia, that: "in order to ensure fair and transparent processing [in automated decision-making and profiling] [...] the controller should [...] secure personal data in a manner [...] that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect [...]." (5)


Although some lawyers, especially those from common law systems, stress that "recitals are not legally binding", recitals are in fact highly persuasive in any teleological interpretation of not-completely-clear provisions in an EU act. (3)


Article 22 GDPR definitely needs interpretation.


The UK data protection supervisory authority, the ICO, rightly also links the principle of fairness with the general prohibition of discrimination: (4) "Any processing of personal data using AI that leads to unjust discrimination between people, will violate the fairness principle."


2.2 The rules on biases in the processing of personal data in more recent EU instruments regulating digital matters


Article 34 of the EU Digital Services Act (DSA), adopted in 2022, requires providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) to regularly carry out risk assessments of their systems, in order to: "diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services."


This must include, specifically, also any “systemic risk” consisting of “any actual or foreseeable negative effects for the exercise of fundamental rights” including in particular “[the right] to non-discrimination enshrined in Article 21 of the Charter” (Article 34(1)(b)).


Recital 94 expands on this and clarifies that the obligations on assessment and mitigation of risks "should trigger, on a case-by-case basis, the need for providers of very large online platforms and of very large online search engines to assess and, where necessary, adjust the design of their recommender systems, for example by taking measures to prevent or minimise biases that lead to the discrimination of persons in vulnerable situations, in particular where such adjustment is in accordance with data protection law and when the information is personalised on the basis of special categories of personal data referred to in Article 9 of the Regulation (EU) 2016/679." (6)


Article 10(1) of the EU Artificial Intelligence Act (AIA), adopted in 2024, stipulates that: "High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used."


The first such quality criterion, set out in Article 10(2), is that: "Training, validation and testing data sets [that are to be used in high-risk AI systems] shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system."


This paragraph goes on to specify, in clauses (f) and (g), that those practices must concern, inter alia: "examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; [and] [the taking of] appropriate measures to detect, prevent and mitigate possible biases identified according to [the previous point]."
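
To make the idea of an "examination in view of possible biases" in training data a little more concrete, the following minimal sketch (in Python) checks a dataset for two simple warning signs: under-representation of a protected group and divergent positive-label rates between groups. The column names ("group", "label") and the 0.8 disparity threshold are my own illustrative assumptions, not requirements found in the AI Act or in any official guidance.

```python
# Minimal, illustrative sketch of a pre-training bias examination.
# Column names ("group", "label") and the 0.8 disparity threshold are
# assumptions for illustration only, not requirements of the AI Act.
from collections import Counter

def examine_training_set(rows, group_key="group", label_key="label",
                         disparity_threshold=0.8):
    """Report dataset share and positive-label rate per group."""
    counts = Counter(r[group_key] for r in rows)
    positives = Counter(r[group_key] for r in rows if r[label_key] == 1)

    total = len(rows)
    report = {}
    for group, n in counts.items():
        report[group] = {
            "share_of_dataset": n / total,
            "positive_label_rate": positives[group] / n,
        }

    # Flag groups whose positive-label rate falls well below the best group's
    # (a crude "four-fifths"-style screen, used here only as an example).
    best = max(v["positive_label_rate"] for v in report.values())
    for group, v in report.items():
        v["flagged"] = best > 0 and v["positive_label_rate"] / best < disparity_threshold
    return report

if __name__ == "__main__":
    sample = (
        [{"group": "A", "label": 1}] * 60 + [{"group": "A", "label": 0}] * 40
        + [{"group": "B", "label": 1}] * 25 + [{"group": "B", "label": 0}] * 75
    )
    for group, stats in examine_training_set(sample).items():
        print(group, stats)
```

Such a screen can only surface statistical warning signs in the data; whether a flagged disparity amounts to discrimination prohibited under Union law remains a legal, not a technical, question.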


The words “prevent and mitigate” are confusing: which is it?


But the aim is made clear in Recital 67 AI Act: "High-quality data and access to high-quality data plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become a source of discrimination prohibited by Union law. High-quality data sets for training, validation and testing require the implementation of appropriate data governance and management practices. Data sets for training, validation and testing, including the labels, should be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose of the system. In order to facilitate compliance with Union data protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in the case of personal data, transparency about the original purpose of the data collection. The data sets should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the data sets, that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (feedback loops)."


3. Analysis (7)


The data protection-related issues raised by the use of AI generally have been well mapped-out by the UK data protection authority, the Information Commissioner’s Office or ICO, in its March 2023 updated Guidance on AI and Data Protection. (8)

It follows on from an earlier (2017) ICO report on Big data, artificial intelligence, machine learning and data protection (9) and from 'co-badged' guidance by the ICO and the Alan Turing Institute on Explaining decisions made with AI (2022). (10)


In most respects, the ICO guidance reflects the common views of European data protection authorities on the above matters---indeed, in many respects, the ICO guidance is more developed than the guidance offered by other European bodies. I therefore use this guidance as a general frame within which to address the issues in this paper, even if the ICO is of course no longer a data protection supervisory authority of an EU Member State, and the “UK-GDPR” in some respects (but not yet in relation to automated decision-making or profiling) is no longer fully in line with the EU GDPR.


However, in some respects, including in particular the risk-based approach to data protection generally adopted by the ICO, I feel the ICO guidance does not tally with European standards–although it does chime with the risk-based approach to AI taken in the AI Act, (11) and with how Big Tech would like to read the rules, as I will also note.

Specifically, there is a tension–albeit, in my opinion, not an irreconcilable one: see below–between EU law, on the one hand, confirming the general Human Rights obligation to “prevent” and “eliminate” discrimination; and on the other hand, the stipulations in the more recent EU instruments that bias in datasets that can lead to discrimination must be “minimised” or “mitigated”.


On the former point, the ICO guidance rightly stresses that the issue of discrimination is especially problematic in the context of AI and AI-based decision-making (12): If you use an AI system to infer data about people, you need to ensure that the system is sufficiently statistically accurate and avoids discrimination. This is in addition to considering the impact of individuals' reasonable expectations for this processing to be fair.

Any processing of personal data using AI that leads to unjust discrimination (13) between people, will violate the fairness principle. This is because data protection aims to protect individuals' rights and freedoms with regard to the processing of their personal data, not just their information rights.


This includes the right to privacy but also the right to non-discrimination. The principle of fairness appears across data protection law, both explicitly and implicitly.


More specifically, fairness relates to:

• how you go about the processing; and

• the outcome of the processing (i.e., the impact it has on individuals).


The guidance goes on to discuss the practical steps that need to be taken to ensure compliance with this non-discrimination principle, including the need for data protection by design and default (meaning that the system should be designed from the very start to avoid discrimination, and be constantly monitored for discriminatory outputs and outcomes) and for the carrying out of a data protection impact assessment (DPIA) and a risk assessment. This is of course in line with the “risk assessment” and “data governance” requirements under the EU DSA and AIA.


However, controllers who are subject to the EU DSA and/or AIA could be tempted to read the references in the recitals to the DSA and AIA, noted at 2.2 above, to “minimis[ing] biases that lead to the discrimination of persons in vulnerable situations” and to “mitigat[ing] possible biases” that could lead to discrimination, as allowing a certain level of bias to remain in their systems and in the outputs of their systems, even if that would result in some (the least possible/minimal) discriminatory outcomes of the use of their systems.


Put simply, the output of a system is what the system produces, which can take the form of a score, whereas the outcome of the use of the system is what is done by the user of the system (or merely of its output), which can take the form of that user relying totally or heavily on the output when taking certain decisions.(14) In my opinion, such a reading would be wrong.


Specifically, if an AI system that generates a score has bias baked into the scores it creates (its outputs), and such a score is then “decisively” relied on by another entity, the bias in the system and in its outputs inevitably results in discriminatory outcomes.
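
A toy numerical sketch (in Python) illustrates this propagation from biased output to discriminatory outcome. The scoring formula, the group penalty and the cut-off are entirely invented for illustration; the point is only that once a recipient applies a fixed decision threshold to a biased score, the discriminatory outcome is fully determined by the biased output.

```python
# Illustrative only: a fabricated scoring rule with a built-in group penalty,
# and a downstream user who relies "decisively" on the score via a cut-off.

def biased_score(years_experience: float, group: str) -> float:
    """Toy suitability score (the output of the 'AI system')."""
    base = 50 + 5 * years_experience
    # The bias: members of group "B" are systematically scored lower.
    penalty = 10 if group == "B" else 0
    return base - penalty

def downstream_decision(score: float, cutoff: float = 75) -> str:
    """The relying entity's outcome: accept/reject based solely on the score."""
    return "accept" if score >= cutoff else "reject"

if __name__ == "__main__":
    for group in ("A", "B"):
        s = biased_score(years_experience=5, group=group)
        print(group, s, downstream_decision(s))
    # Two equally qualified applicants: A scores 75 -> accept, B scores 65 -> reject.
    # The discriminatory outcome is fully determined by the biased output.
```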


In that regard, the EU Court of Justice has held in its SCHUFA judgment that (15): "Article 22(1) of the GDPR must be interpreted as meaning that the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes ‘automated individual decision-making’ within the meaning of that provision, where a third party, to which that probability value is transmitted, draws strongly* on that probability value to establish, implement or terminate a contractual relationship with that person."

*The other language versions use “maßgeblich abhängt” (DE) and “dépend de manière déterminante” (FR), which can be translated as “decisively depends”.


This of course also applies to other scores (“probability values”) created by one entity that are “decisively” relied on by other entities, e.g., in hiring (or not hiring) someone for a job or admitting them (or not admitting them) to an educational establishment.


It follows from this that, in relation to any decisions that are based “decisively” on scores created by AI, the creation of the score already constitutes an “automated decision” within the meaning of Article 22(1) GDPR–and that in turn means that any discriminatory effects of the score must be “prevented”: see section 2.1 and the quote from recital 71, above.


Merely trying to “minimise” such effects is not good enough.


Moreover, given that such scores are typically created in 'black box' AI systems, it will be well-nigh impossible–and for third parties relying on the score without insight into the algorithms used, effectively completely impossible–to “mitigate” the discriminatory effects (outcomes).(16)


Regrettably, the ICO provides support for the erroneous reading that the GDPR allows for a certain amount of discriminatory outcomes in the context of what it refers to as the “risk-based approach” to AI. The ICO describes this as follows (17):

"Your compliance considerations therefore involve assessing the risks to the rights and freedoms of individuals and judging what is appropriate in [the] circumstances. In all cases, you need to ensure you comply with data protection requirements. This applies to the use of AI just as to other technologies that process personal data. In the context of AI, the specific nature of the risks posed and the circumstances of your processing will require you to strike an appropriate balance between competing interests as you go about ensuring data protection compliance. This may in turn impact the outcome of your processing. It is unrealistic to adopt a 'zero tolerance' approach to risks to rights and freedoms, and indeed the law does not require you to do so. It is about ensuring that these risks are identified, managed and mitigated. We talk about trade-offs and how you should manage them [...] .

To manage the risks to individuals that arise from processing personal data in your AI systems, it is important that you develop a mature understanding of fundamental rights, risks, and how to balance these and other interests. [...]"


The assertion that “[i]t is unrealistic to adopt a 'zero tolerance' approach to risks to [fundamental] rights and freedoms”, and that “indeed the law does not require you to do so”, is dubious at the best of times, as it suggests that fundamental rights and other interests (e.g., the commercial interests of companies, or the budgetary interests of employers or schools) can simply be balanced evenly against each other, which is not how Human Rights law approaches such issues. Rather, measures that affect fundamental rights should only be permitted if they are “necessary” and “proportionate” to a societally important–primarily public–overriding interest, and they should never result in discrimination.


The “risk-based”/“balanced” approach can to some extent be acceptable in relation to other fundamental rights, including privacy and private life: a minimal negative effect on privacy of a measure that has very significant societal benefits can be in accordance with the European Convention on Human Rights and the EU Charter of Fundamental Rights, provided the measure is “necessary” and “proportionate” to the relevant “pressing social need” and subject to procedural safeguards (these being the standards applied under Articles 8 – 11 ECHR).

Thus, the law of all developed countries allows for tapping of telephone calls (and other forms of communications) in relation to the investigation of sufficiently serious criminal offences, subject to judicial orders.


But the prohibition of discrimination is different: it is unqualified. There are no exception clauses in Article 14 ECHR or Article 21 CFR along the lines of those in Articles 8 – 11 ECHR. (18)


Indeed, under the UN International Covenant on Civil and Political Rights (ICCPR), to which all the EU Member States (and the UK) are parties, the prohibition of discrimination is even non-derogable: it must be respected even in times of war or other threats to the life of the nation (see Article 4 ICCPR).


The ICO approach is incompatible with this.


Yet unfortunately this approach is further supported in the ICO/ATI guidance on how AI-based decisions should be explained to persons affected by them. In an example on the use of AI in recruitment, the guidance recommends that the recruiting company should “consider[ ] the impact of the AI system on the applicant”, which, it says, “relates to whether they think the decision was justified, and whether they were treated fairly.” In such a case, the paper advises that (19): "Your explanation should be comprehensive enough for the applicant to understand the risks involved in your use of the AI system, and how you have mitigated these risks."


In my opinion, this is wrong. It suggests that the concerns of job applicants who feel they may have been discriminated against and thus unfairly treated (denied a job) can be allayed by somehow “explaining the risks” involved in the system and “how you have mitigated those risks” to them. That makes no sense, in two ways.


First of all, the reasoning of many AI systems–in particular “self-learning”/“black box” AI systems–is so opaque that it–and the risks involved–cannot be sensibly understood, often not even fully by those who develop the system, and effectively never by those who rely on the outputs of those systems; it can therefore also not be explained (or reviewed or challenged/contested) in any meaningful way.


Moreover, in relation to such systems, it is well-nigh impossible to “mitigate” the risks in relation to any specific individual: if the end-user (in the example, the recruiter) cannot understand how the system reached its conclusion and produced its output (typically in the form of a score), it is also impossible to correct any individual erroneous outputs–which in such cases automatically translate into outcomes. It may be possible to evaluate the outcomes in aggregate, over time, e.g., to note discriminatory outputs and outcomes (for instance, that disproportionately many applicants from all-male schools, or of a certain ethnicity, are offered jobs).

But such a statistical finding does not help specific individuals who feel they were unfairly treated (denied a job).
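
As a hedged illustration of what such aggregate monitoring might look like, the short Python sketch below computes selection rates per group from logged outcomes and flags large disparities. The logged fields and the 0.8 ratio are my own assumptions, loosely modelled on the well-known "four-fifths" screening rule rather than on anything in the ICO guidance; crucially, such a check can flag a group-level pattern but cannot identify which individual applicant was wrongly rejected.

```python
# Illustrative aggregate check over logged hiring outcomes.
# It can flag a group-level disparity, but it cannot say which individual
# applicant was wrongly rejected - which is exactly the limitation noted above.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, hired: bool) pairs."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in outcomes:
        totals[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` x the highest rate."""
    top = max(rates.values())
    return {g: (top > 0 and r / top < threshold) for g, r in rates.items()}

if __name__ == "__main__":
    log = [("A", True)] * 30 + [("A", False)] * 70 + [("B", True)] * 12 + [("B", False)] * 88
    rates = selection_rates(log)
    print(rates)                   # {'A': 0.3, 'B': 0.12}
    print(disparity_flags(rates))  # group B flagged: 0.12 / 0.30 = 0.4 < 0.8
```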


Secondly, in any case, it is never sufficient that the “risks” to the applicant’s rights and interests, and the “mitigation measures” to counter those risks, are somehow “explained” to the applicant. Rather, if a computer-generated output (score or pre-decision) was wrong, it should be rectified: an applicant who is not offered a position but who, on the basis of clear and fair criteria, should have been offered one, should be offered one.


“Explaining” to a job applicant that they are unjustly denied a position because the system is not perfect is not good enough. Under international Human Rights law, the risk of such an injustice should not be “mitigated” but remedied, or better still, “prevented”: the applicant must be offered an appropriate position.


Similar errors in thinking can be found elsewhere.


A 2018 European Commission study on the impact of AI on learning, teaching, and education observed that machine-learning neural AI systems–“black box” systems in the ICO/ATI terminology–“do not understand the data they process” and are “difficult or impossible to interpret”. (20) Yet the report confirms that such systems are being used in educational contexts such as assessing student performance.(21) It continues (22):

"Because of this [inexplainability of “black box” systems] there is now considerable interest in creating 'explainable AI.' The current systems, however, lack all the essential reflective and metacognitive capabilities that would be needed to explain what they do or don’t do. (23) [...] As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behaviour and decisions, it is important to keep humans in the decision-making loop."


This is confused and irrational, in that the passage on the one hand seems to accept that all AI systems currently used in education “lack all the essential reflective and metacognitive capabilities that would be needed to explain what they do or don’t do”, yet on the other hand suggests that “keep[ing] humans in the decision-making loop” can somehow always be an adequate remedial measure. Placing “humans in the decision-making loop” is meaningless if they cannot understand–or are not given access to–the data or the algorithms used in the system they rely on.


4. Summary and Conclusions


The title of this paper put a (not so simple) question: must all algorithmic discrimination be “prevented” and “eliminated”–or need it only be “minimised”?

My analysis shows that the GDPR supports the strong international Human Rights prohibition of discrimination: recital 71 stresses that all discriminatory outcomes of automated decision-making and profiling must be “prevented”.


In its SCHUFA judgment, the EU Court of Justice has further clarified that, in relation to any decisions that are based “decisively” on scores created by AI, the creation of the score already constitutes an "automated decision” within the meaning of Article 22(1) GDPR–and it follows from recital 71 that therefore any discriminatory effects of the score must be “prevented”. Merely trying to “minimise” such effects is not good enough.

However, the complete avoidance of bias in the datasets used in AI systems to generate outputs such as scores (“profiles” within the meaning of the GDPR) is in many cases effectively impossible–which is why the more recent EU rules on digital matters merely require that such biases in outputs be “minimised.”


The ICO extrapolates from this to suggest that the effects of the outputs–in particular, any discriminatory outcomes of the use of the outputs–also only need to be “mitigated” (rather than altogether “prevented”).

It wrongly asserts that: "It is unrealistic to adopt a 'zero tolerance' approach to risks to rights and freedoms [of individuals], and indeed the law does not require you to do so."


This is likely to also be a view taken by Big Tech companies developing AI systems. And indeed, perhaps, also by the European Commission.


In fact, the law–more specifically, a massive corpus of international anti-discrimination law (24)–does precisely that: like the GDPR, it requires a “zero tolerance” approach to discrimination. The European Commission suggests that “keeping humans in the loop” somehow can remedy biased outputs from AI systems. However, given that such scores are typically created in black box AI systems, it will be well-nigh impossible–and for third parties relying on the score without insight into the algorithms used, effectively completely impossible–to “mitigate” the discriminatory effects (outcomes).


There is a serious dilemma here: should we follow the suggestions of the UK ICO and the Alan Turing Institute and accept a certain level of discriminatory outcomes from the use of (in particular, black box) AI systems? Or should we maintain the old consensus that all discrimination on the basis of race, religion, beliefs, health, etc., should be eliminated? Even if this means foregoing the use of AI systems that could bring significant societal benefits (e.g., in identifying possible/probable fraudsters)?


As a Human Rights lawyer, I do not want to see a society in which opaque systems wrongly and disproportionately “identify” more black people, or immigrants, or people with handicaps, as 'probable' criminals. We should uphold the most basic principles of Human Rights law–not sell them out by rejecting them as 'unrealistic.'



Postscript:

Douwe Korff (Prof.), Cambridge (UK), September 2025


Since writing the above, I noted the excellent analysis by leading UK technology lawyer and blogger, Graham Smith, of the proposed guidance to U2U platforms and search engines on “proactive technology: automated systems intended to detect particular kinds of illegal content and content harmful to children, with a view to blocking or swiftly removing them,” under the UK Online Safety Act, issued for consultation by the UK communications services regulator, Ofcom.(25) In this, he focusses on whether the proposed rules meet the European and UK “quality of law” requirements of adequate clarity, precision and foreseeability, and prevention of arbitrariness.


If the rules do not meet these requirements, they are not 'law' for Human Rights purposes.


He concludes that: [A]s Ofcom recognises, the impact [of the law and the proposed rules] on users’ freedom of expression will inevitably vary significantly between services, and Ofcom’s documents do not condescend to what is or is not an acceptable degree of interference with legitimate speech, it is difficult to see how a user could predict, with reasonable certainty, how their posts are liable to be affected by platforms’ use of proactive technology in compliance with Ofcom’s principles-based recommendations.


Nor is it easy to see how a court would be capable of assessing the proportionality of the measures. As Lord Sumption observed (26), the regime should not be couched in terms so vague or so general as, substantially, to confer a discretion so broad that its scope is in practice dependent on the will of those who apply it. Again, Ofcom's acknowledgment that the flexibility of its scheme could lead to significant variation in impact on users’ freedom of expression does not sit easily with that requirement. Ofcom, it should be acknowledged, is to an extent caught between a rock and a hard place. It has to avoid being overly technology-prescriptive, while simultaneously ensuring that the effects of its recommendations are reasonably foreseeable to users and capable of being assessed for proportionality. Like much else in the Act, that may in reality be an impossible circle to square. That does not bode well for the Act’s human rights compatibility.


That is a typical British understatement.


But here, I want to focus on three specific elements in all this. First of all, Smith notes that under the UK scheme, content can be suppressed by fully-automated means, with even the second (legally required) assessment step not necessarily involving human review. In my opinion, this brings the decision to suppress or remove content from platforms or search results squarely within the scope of Article 22 GDPR: it is “a decision based solely on automated processing, including profiling, which produces legal effects concerning [the data subject, which is here: the person posting or releasing the content] or similarly significantly affects him or her”.

There is no doubt that this is a decision with significant effect–it actually amounts to prior constraint on freedom of expression. As Smith observes in that regard, with reference to European case-law (27):

Prior restraint

[P]roactive detection and filtering obligations constitute a species of prior restraint (Yildirim v Turkey (ECtHR), Poland v The European Parliament and Council (CJEU)). Prior restraint is not impermissible. However, it does require the most stringent scrutiny and circumscription, in which risk of removal of legal content will loom large. The ECtHR in Yildirim noted that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”.


Moreover, the fully-automated decision (suppression of content) is not based on–let alone “necessary for [the] performance of”–the contract between the platform or search engine and the user/poster of content, nor of course on the “explicit consent” of the data subject/poster of the content: cf. Article 22(2)(a) and (c) GDPR.

And if Smith is right–as I strongly believe he is–in concluding that the Online Safety Act read with the proposed rules does not meet the “quality of law” requirements of European and UK law, then the automated decisions are also not justified under sub-clause (b) in Article 22(2) GDPR: that the decision is “authorised by law”. And that is the case even without looking at whether the law in question “lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests” (to which I will turn in my third point).


Secondly, Article 22(4) GDPR stipulates that a decision based solely on automated processing which produces legal or otherwise significant effects on the data subject “shall not be based on special categories of personal data referred to in Article 9(1) (28), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject's rights and freedoms and legitimate interests are in place.” Article 9(2)(a) covers the taking of such decisions with the “explicit consent” of the data subject, which of course does not apply. Article 9(2)(g) again refers to processing “authorised by law” – and can again not be relied on for the reasons just mentioned.


In other words, the proposed rules do not only breach Article 10 ECHR (and the corresponding article in the UK Human Rights Act), as Smith rightly notes, but also Article 22 GDPR.


Thirdly, the proposed rules explicitly accept that: "Proactive technology used for detection of harmful content involves making trade-offs between false positives and false negatives. Understanding and managing those trade-offs is essential to ensure the proactive technology performs proportionately, balancing the risk of over-removal of legitimate content with failure to effectively detect harm." (para 5.14)


In other words, Ofcom explicitly acknowledges that under the proposed rules there will be cases in which legitimate content–expressions of views or opinions that are protected by the right to freedom of expression–will be wrongly blocked by platforms and search engines subject to UK law and the OSA in particular; and it equally explicitly holds that this is acceptable as long as false positives of this kind are not “consistently high and cannot be meaningfully reduced or mitigated” (cf. para. 5.19).
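
The trade-off being described can be made concrete with a small sketch (in Python; the detector scores and thresholds are fabricated for illustration and have nothing to do with any real proactive technology): lowering the blocking threshold catches more illegal content (fewer false negatives) but inevitably suppresses more legal content (more false positives), and every additional false positive is a lawful post wrongly suppressed.

```python
# Illustration of the false-positive / false-negative trade-off when a
# detection score is turned into a blocking decision by a threshold.
# All numbers are invented.

# (score assigned by the detector, actually_illegal?)
POSTS = [
    (0.95, True), (0.90, True), (0.82, False), (0.74, True), (0.70, False),
    (0.65, False), (0.55, True), (0.40, False), (0.30, False), (0.10, False),
]

def blocking_stats(threshold: float):
    blocked = [(s, illegal) for s, illegal in POSTS if s >= threshold]
    false_positives = sum(1 for _, illegal in blocked if not illegal)              # legal posts suppressed
    false_negatives = sum(1 for s, illegal in POSTS if illegal and s < threshold)  # illegal posts missed
    return false_positives, false_negatives

if __name__ == "__main__":
    for t in (0.9, 0.7, 0.5, 0.3):
        fp, fn = blocking_stats(t)
        print(f"threshold={t}: false positives={fp}, false negatives={fn}")
    # As the threshold falls, false negatives drop but false positives rise:
    # each extra false positive is a lawful post wrongly blocked.
```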


As Smith rightly asks: How high is high? How significant is significant?

No answer is given, other than that the permissible level of false positives is related to the nature of the subsequent review of detected content. As we shall see, the second stage review does not require all content detected by the proactive technology to be reviewed by human beings. The review could, seemingly, be conducted by a second automated system.

The result is that two service providers in similar circumstances could arrive at completely different conclusions as to what constitutes an acceptable level of legitimate speech being blocked or taken down. Ofcom acknowledges that the flexibility of its scheme: “could lead to significant variation in impact on users’ freedom of expression between services.” (Consultation, para 9.136)


That must raise questions about the predictability and foreseeability of the regime.


This can be related to the main issue in my note, by rephrasing the core question as follows: Should companies that provide U2U platforms and search engines be required to correct all false positives (wrongly suppressed views and opinions that are protected by the right to freedom of expression), or are they only required to “reduce or mitigate” them to some (unspecified) “acceptable” level?


As noted in my note, most fundamental rights other than the prohibition of discrimination can be made subject to limitations, provided those limitations are based on law (i.e., a legal rule that meets the European Court of Human Rights’ “quality of law” requirements, which are also reflected in UK law, as discussed by Smith), serve a “legitimate interest” in a democratic society, and are “proportionate” and “necessary” to the achievement of that legitimate interest. Removing illegal content from U2U platforms and search engine results is of course a legitimate interest and can in most cases be said to be “necessary” and “proportionate” in a democratic society–provided “illegality” is defined in a human rights-compatible way (29) and provided there are appropriate remedies available to those affected. But removing legal content from U2U platforms and search engine results not only fails to serve a legitimate interest; it violates the freedom of expression rights of those issuing the legal content.


Under global, European and UK human rights law it is not good enough for a platform or search engine company to say–or for a regulator such as Ofcom to accept–that they “reduce” the suppression of legal content to a minimum, as a trade-off against the maximum suppression of illegal content. Rather, each wrongly suppressed legal expression of views or opinions constitutes a violation of the right to freedom of expression that must be remedied. It is not justified by the need to suppress illegal content.

In this regard, the rules on “proactive technology: automated systems intended to detect particular kinds of illegal content and content harmful to children, with a view to blocking or swiftly removing them” therefore raise a dilemma similar to the one raised by the rules on the prevention of discrimination.


Bibliography


(1) I.e., the 1969 UN Convention on the Elimination of Racial Discrimination (CERD), the 1979 UN Convention on the Elimination of Discrimination Against Women (CEDAW), and the 1981 UN General Assembly Declaration on the Elimination of All Forms of Intolerance and of Discrimination Based on Religion or Belief.


(2) This is a composite definition, based on the definitions of “racial discrimination”, “discrimination against women” and "intolerance and discrimination based on religion or belief" in the instruments listed in the previous footnote: CERD, Article 1(1), CEDAW, Article 1, and the Declaration on the Elimination of All Forms of Intolerance and of Discrimination Based on Religion or Belief, Article 2(2).


(3) See: Douwe Korff, In praise of recitals (& Explanatory Memoranda), 10 September 2025, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5467126


(4) ICO, Guidance on AI and data protection, updated version of 15 March 2023, p. 64, available at: https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/

The ICO makes this link specifically in its guidance on AI, but as noted in section 1, above, it is of course a general principle of international human rights law and EU law, enshrined in Article 21 of the EU Charter of Fundamental Rights.

Note that there is a semantic issue in relation to the term “discrimination.” Literally, it can cover any making of distinctions: “she was very discriminating in her choices.” That is why the ICO, in the above quote, uses the words “unjust discrimination.” But as noted in section 1, above, in international law the term is used to denote any “distinction, exclusion or restriction made on the basis of [sensitive characteristics] which has the effect or purpose of impairing or nullifying the recognition, enjoyment or exercise by [protected groups] of human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field.” The phrase used by the ICO should not be read–and I am sure was not intended–as suggesting that there are “just” and “unjust” forms of that kind of discrimination.


(5) The French and German texts use the same odd phrase: FR: “sécuriser les données à caractère personnel d'une manière ... qui prévienne [...] ”; DE: “personenbezogene Daten in einer Weise sichern, dass ... verhindert wird. [...] ”.


(6) I.e.: “personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, [...] genetic data, biometric data [when used] for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation [...] ”


(7) This section draws on a paper on AI & the GDPR, produced by me for Privacy International, which used it in preparing its May 2024 response to the call for contributions on artificial intelligence in education and its human rights-based use at the service of the advancement of the right to education, issued by the UN Special Rapporteur on the right to education, available at: https://privacyinternational.org/sites/default/files/2024-08/Privacy%20International%20Contribution%20AI%20in%20education.pdf


(8) ICO, Guidance on AI and data protection, updated version of 15 March 2023, at: https://ico.org.uk/media2/ga4lfb5d/guidance-on-ai-and-data-protection-all-2-0-38.pdf

See also the ICO blog, Generative AI: eight questions that developers and users need to ask, 3 April 2023, at: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/generative-ai-eight-questions-that-developers-and-users-need-to-ask/


(9) ICO, Big data, artificial intelligence, machine learning and data protection, 4 September 2017, at: https://ico.org.uk/media2/migrated/2013559/big-data-ai-ml-and-data-protection.pdf


(10) ICO & Alan Turing Institute, Explaining decisions made with AI, 17 October 2022, at: https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf (“The ICO/ATI paper”)

Note that, as stated on the website: “Due to the Data (Use and Access) Act coming into law on 19 June 2025, this guidance is under review and may be subject to change.” The DUA Act, when in force, will further relax UK data protection safeguards–but this is not further discussed here.


(11) See Johanna Chamberlain, The Risk-Based Approach of the European Union’s Proposed Artificial Intelligence Regulation: Some Comments from a Tort Law Perspective, European Journal of Risk Regulation (2023), 14, 1–13.

She notes that “The risk-based approach in the proposed EU regulation on AI is new at the EU level but has parallels in already existing legal instruments in the AI area [in the USA and Canada]” and believes that “These developments suggest that a risk-based approach may become the global norm for regulating AI.” (pp. 4 – 5).


(12) ICO, Guidance on AI and data protection [footnote 8, above], p. 53, emphases added.


(13) See footnote 4, above.


(14) For more details on the distinction between outputs (of AI systems) and outcomes, see EDRi, Beyond Debiasing: Regulating AI and its inequalities, report by Agathe Balayn and Seda Gürses, Delft University of Technology, the Netherlands, September 2021, p. 24, available at: https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf


(15) CJEU, First Chamber judgment of 7 December 2023 in Case C-634/21, OQ v. Land Hessen & SCHUFA Holding AG (“SCHUFA judgment”), para. 73, repeated verbatim in the conclusions at para. 75.


(16) The ICO/ATI Paper (footnote 10, above) defines a ‘black box’ model as “any AI system whose inner workings and rationale are opaque or inaccessible to human understanding.” (p. 74) The main kinds of such opaque models are described in more detail in Annexe 2 to the paper. Note that although the paper says that “You should only use ‘black box’ models if you have thoroughly considered their potential impacts and risks in advance. The members of your team should also have determined that your use case and your organisational capacities/resources support the responsible design and implementation of these systems”, as noted in the main text, above, it goes on to accept the use of such systems as long as the risks–and any discriminatory outcomes–are “minimised” and “explained” (rather than eliminated).


(17) ICO, Guidance on AI and data protection [footnote 8, above], p. 18, emphases added.


(18) Article 52 CFR, which reflects the second paragraphs (the exception clauses) of Articles 8 – 11 ECHR, appears to also apply to the prohibition of discrimination in Article 21. However, because of the stipulation in Article 53 that “nothing in this Charter shall be interpreted as restricting or adversely affecting” the rights set out in the ECHR, in effect the absolute nature of the prohibition of discrimination set out in the Convention must also be respected under the Charter.


(19) ICO/ATI paper (footnote 10, above), p. 58, emphases added.


(20) European Commission, Joint Research Centre (JRC), The Impact of Artificial Intelligence on Learning, Teaching, and Education: Policies for the future (Author: Ilkka Tuomi), EUR 29442 EN, November 2018, p. 35, available at: https://www.researchgate.net/publication/329544152_The_Impact_of_Artificial_Intelligence_on_Learning_Teaching_and_Education_Policies_for_the_Future


(21) Idem.


(22) Idem, original emphasis in bold.


(23) Luckin, Rosemary (2018), Machine Learning and Human Intelligence: The Future of Education for the 21st Century, London: UCL Institute of Education Press. [original reference]


(24) Listed in section I in the Annex to this paper.


(25) Graham Smith, Ofcom’s proactive technology measures: principles-based or vague?, 4 August 2025, available at: https://www.cyberleagle.com/2025/08/ofcoms-proactive-technology-measures.html


(26) In R (on the application of Catt) (Respondent) v Commissioner of Police of the Metropolis and another (Appellants), UKSC/2013/0114, 10 May 2013–reflecting long-established European Human Rights law.


(27) Respectively:

- European Court of Human Rights, Second Section judgment in the case of Ahmet Yildirim v. Turkey, 18 December 2012; and

- Court of Justice of the European Union, Grand Chamber judgment of 26 April 2022 in Case C-401/19, Republic of Poland v European Parliament and Council of the European Union.


(28) Listed in footnote 6, above.


(29) Which UK law in some respects does not. Specifically, under the Terrorism Act 2000, the UK authorities have proscribed the direct action group “Palestine Action” and made expressions of support for the organisation a serious criminal offence. More than 1700 non-violent demonstrators have been arrested on this charge–in my opinion (and the opinion of many international human rights lawyers) this is wrong and in breach of European and global–and UK–human rights law. But I will not discuss this further here.


Biography of the Guest Expert


Douwe Korff is a Dutch Comparative and International Lawyer, specialised in Human Rights and data protection, living in the UK.


He is Emeritus Professor of International Law at London Metropolitan University, Visiting Professor at the universities of Zagreb and Rijeka in Croatia, an Associate of the Oxford Martin School of the University of Oxford, a Visiting Fellow at Yale University (Information Society Project), and a Fellow at the Centre for Internet and Human Rights of the European University of Viadrina, Berlin.


He has carried out many projects, studies and reports relating to data protection for the UN, the Council of Europe, the EU and the UK Information Commissioner. He is a member of the Advisory Council of the Foundation for Information Policy Research (FIPR), a leading UK technology policy think-tank.


His recent work includes: a study carried out with Prof. Ian Brown at the request of the European Parliament’s Civil Liberties (LIBE) Committee into the future of EU – US flows of personal data after the Schrems II judgment of the Court of Justice of the EU (2021); several submissions to the EU and a briefing note written at the request of the EU EP LIBE Committee on UK data protection law in terms of the EU GDPR (2021-23); an opinion on the core issues raised in the PNR case before the CJEU (2021) and an analysis of whether the Court addressed these issues (2024); and several analyses of the pending EU Artificial Intelligence Act and the proposed Council of Europe Convention on AI, including a paper on AI & the GDPR (2022-24).


 
 
 
