Author: Gaia Patarini, Visiting PhD Researcher at the Assisting Living and Learning (ALL) Institute, Maynooth University and University of Milan (Università degli Studi di Milano Statale)
Artificial Intelligence (AI) is a pressing and challenging issue: the implications of using AI systems, especially in relation to fundamental rights, have been at the centre of constitutional law debates for some time, prompting a regulatory race among institutions that has also involved the European legislator.
Within this context, the issue of the relationship between AI and inclusion of persons with disabilities is inevitably central, and disability rights represent the ideal field for discussing the dual nature of AI as a potential facilitator on one hand or as a potential barrier on the other.
This blog post, after briefly recalling the advantages and the risks that AI could entail for persons with disabilities, focuses on the European regulatory experience of the AI Act (Regulation EU 2024/1689), which, with its pros and cons, poses a significant problem in terms of accessibility.
In doing so, the blog post discusses such a framework by highlighting how it could end up creating new digital barriers for persons with disabilities instead of generating measures designed to ensure their autonomy, social and professional integration, and participation in the life of the community.
While on the one hand AI could potentially be a facilitator and promoter of inclusion and equality for persons with disabilities, on the other, it involves some ethical challenges that could instead contribute to spreading and even creating new forms of discrimination which are likely to intensify, particularly with regard to persons with disabilities.
In terms of opportunities, AI systems can support the development of assistive technologies that enhance independence, autonomy, participation, and access to goods and services. Examples include advances in autonomous driving, which may significantly improve mobility for persons with disabilities, as well as image-translation tools and voice assistants that facilitate communication for persons with sensory or intellectual disabilities.
In terms of risks, the main concern relates to the danger that AI may perpetuate existing biases and algorithmic distortions that reinforce longstanding social discrimination. These distortions can stem from the inadequate representation of disability, often shaped by widespread stereotypes, within AI training data.
In other words, the discrimination experienced by persons with disabilities – historically and in the present – becomes embedded in the datasets used to train AI systems, imprinting itself onto the system’s internal logic.
In this scenario, the AI Act, one of the first attempts to regulate AI, promotes the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, while supporting innovation. Specifically, Article 5(1)(b) prohibits:
“the placing on the market, the putting into service or the use of AI systems that exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”
The broader debate on bias also raises additional concerns in relation to sensitive issues, such as personal data processing, privacy, and consent. Alongside the need to address such concerns, however, lies another crucial point: even an AI system that is properly designed and trained on high-quality data to avoid bias will still fail to be inclusive for persons with disabilities unless it is fully accessible and usable. This brings us to the question of digital accessibility.
However, the AI Act fails to adequately ensure accessibility for persons with disabilities.
Recital 80 recognises the fundamental importance of digital accessibility, highlighting that the EU and its Member States, as parties to the UN Convention on the Rights of Persons with Disabilities (UNCRPD), are “legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to technologies and information and communication systems, and to ensure respect for their private life”. The recital also calls for compliance with the principles of universal design in new technologies and services, and with accessibility requirements, including those of Directive (EU) 2016/2102 (the Web Accessibility Directive) and Directive (EU) 2019/882 (the European Accessibility Act).
However, on one hand, these directives apply only to specific categories of products and services; on the other, the accessibility obligations in the AI Act vary depending on the type of AI system involved: providers of high-risk AI systems must ensure compliance with accessibility requirements (Article 16(l)), whereas for non-high-risk systems Article 95(2) merely encourages and facilitates the drawing up of voluntary codes of conduct to promote the adoption of certain high-risk requirements, including environmental sustainability and accessibility.
This gap was highlighted and strongly criticised by the European Disability Forum (EDF) during the negotiation of the AI Act. The EDF urged the European Parliament to introduce a universal accessibility requirement for all AI systems – a requirement absent from the Commission’s proposal – and warned that its omission would create additional barriers for persons with disabilities.
Nevertheless, the final text of the AI Act did not incorporate such an obligation, leaving no general accessibility requirement applicable to all AI systems covered by the Act. This omission raises a relevant issue: without a universal accessibility obligation, the risk of discrimination against persons with disabilities in their interactions with AI-based technologies is significantly heightened.
Despite representing a major step forward in regulating the risks associated with AI and regardless of its declaratory recognition of the need to uphold the rights of persons with disabilities in line with the UNCRPD, the AI Act proves notably inadequate with respect to ensuring digital accessibility.
By adopting a risk-based framework, the EU legislature limited the accessibility obligation to only certain AI systems. This approach therefore fails to provide effective protection for persons with disabilities. For them, the issue extends far beyond preventing manipulation or algorithmic bias: it involves the broader and more fundamental risk of exclusion and marginalisation stemming from inaccessible technology itself.
The result is a fragmented and incomplete regulatory framework that risks creating new digital barriers and undermines the EU’s broader commitment to preventing discrimination and safeguarding the rights of persons with disabilities in the age of AI.

