The Technology and Disability Intersection: Reflections on Technical and Policy Challenges

Jason J.G. White

Purpose

  • To introduce issues rather than to offer solutions.
  • To consider difficult problems that have consequences for policy as well as technological development.
  • Centrality of machine learning technology to the issues discussed.
  • Early work in progress - comments appreciated.

Expansion of the Technology and Disability Field

  • Historically, the field has almost entirely been devoted to user interface accessibility and assistive technology.
  • Algorithmic discrimination expands its scope.
  • Implications for the knowledge and skills needed by researchers and practitioners.

The Application of Machine Learning to Decision-Making

  • Use of machine learning to make or to inform decisions affecting people’s rights and interests, including those of people with disabilities.
  • Example: predicting recidivism risk.
  • Example: first-level evaluation of applicants for employment.
  • Example: evaluation of loan risk.
  • Example: determining eligibility for welfare benefits.

Note on the Examples

  • Several of the examples are controversial.
  • Prison abolitionists deny that imprisoning offenders is morally justified.
  • Others may argue that the purpose of imprisonment is solely punishment - risk of future offending should not be considered.
  • Proponents of universal basic income may argue that at least some welfare benefits should be provided unconditionally.
  • These and other concerns about the examples are important - the moral questions should not be dismissed.
  • I’m setting them aside here on purely pragmatic grounds: my topic is the role of machine learning.

Algorithmic Decision-Making and Disability

  • The general approach of 4-5 years ago, as assumed in the literature: a machine learning model would be trained on a large collection of past cases of the assessment to be made.
  • Future cases would then be evaluated by the trained algorithm.
  • Argument by Jutta Treviranus and others: people with disabilities are outliers, hence underrepresented in training corpora, hence unlikely to be treated appropriately in algorithmic decisions or recommendations about decisions (illustrated in the toy sketch following this list).
  • The diversity of people with disabilities (biological, social, experiential) explains their outlier status.
  • Departure from species-typical functioning characterizes people with disabilities, but within this category there is great diversity of capabilities, opportunities, and experiences.
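
To make the outlier argument concrete, here is a toy sketch (my illustration, not drawn from the cited literature): a classifier fitted to data dominated by “typical” cases scores a small subgroup, whose feature-outcome relationship differs, far less accurately.

```python
# Toy illustration only: a model fitted mostly to majority cases
# systematically mis-scores an underrepresented subgroup whose
# feature-outcome relationship differs. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group (98% of training data): outcome rises with the feature.
X_maj = rng.normal(size=(1000, 1))
y_maj = (X_maj[:, 0] + rng.normal(0, 0.5, 1000) > 0).astype(int)

# Outlier subgroup (2%): the relationship is reversed, a crude stand-in
# for cases where disability changes what a feature signifies.
X_out = rng.normal(size=(20, 1))
y_out = (-X_out[:, 0] + rng.normal(0, 0.5, 20) > 0).astype(int)

model = LogisticRegression().fit(np.vstack([X_maj, X_out]),
                                 np.concatenate([y_maj, y_out]))

# Fresh test samples from each group.
X_mt = rng.normal(size=(1000, 1))
y_mt = (X_mt[:, 0] + rng.normal(0, 0.5, 1000) > 0).astype(int)
X_ot = rng.normal(size=(1000, 1))
y_ot = (-X_ot[:, 0] + rng.normal(0, 0.5, 1000) > 0).astype(int)

print("majority accuracy:", model.score(X_mt, y_mt))  # high (~0.87)
print("outlier accuracy: ", model.score(X_ot, y_ot))  # far below chance
```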

The Superiority of Human Decision-Making

  • Positive argument for a human decision: at their best, human decision-makers can interpret rules and policies with contextual understanding.
  • Human decision-makers can engage in moral deliberation.
  • The machine learning algorithms described earlier cannot do so.
  • Granted, human decision-makers may be biased, even subtly, e.g., Sunstein’s argument that human judges in recidivism evaluations tend to have a “current offence bias”.
  • Conclusion: an informed and skilled human decision-maker is better than an algorithm alone.
  • Algorithms may be biased against people with disabilities as outliers.
  • Human evaluators may place undue trust in a machine learning algorithm.
  • These are arguments against the role of machine learning in deciding matters involving substantive rights and opportunities.
  • Careful design of machine learning systems may mitigate or perhaps even eliminate biases in specific applications.
  • The possibility of developing systems that can identify outliers as cases requiring human appraisal.
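
One possible shape for such a system - sketched below as an assumption of mine, not a design from the talk - is to flag statistical outliers and route them to a human decision-maker rather than scoring them automatically.

```python
# Hypothetical triage step: defer atypical cases to human appraisal.
# The feature matrix and contamination rate are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
X_train = rng.normal(size=(5000, 4))  # stands in for historical cases

detector = IsolationForest(contamination=0.02, random_state=0).fit(X_train)

def triage(case: np.ndarray) -> str:
    """Return 'human_review' for cases the detector flags as outliers."""
    if detector.predict(case.reshape(1, -1))[0] == -1:
        return "human_review"      # atypical case: defer to a person
    return "automated_score"       # typical case: the model may apply

print(triage(np.zeros(4)))         # typical -> automated_score
print(triage(np.full(4, 8.0)))     # far from training data -> human_review
```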

Will Large Language Models Exhibit Moral Deliberation?

  • Growing sophistication of large language models (e.g., the capacity for “chain of thought” reflection).
  • Unclear how far the capacity for reasoning-like processes will progress as the technology develops further.
  • Potential to “close the gap” (to some extent at least) between algorithmic evaluation and human legal/moral reasoning.
  • Does this undermine the arguments against algorithmic decision-making given earlier?
  • Analogical reasoning, the ability to interpret rules/policies, and moral analysis may help to overcome the unfair treatment of outliers - including people with disabilities.
  • It may become less obvious that human legal/moral reasoning is superior - more difficult to decide in what contexts to rely on machine learning.
  • Potential for improved transparency: the giving of reasons for automated decisions/recommendations.

Interesting Argument for Automated Moral Reasoning

  • Giubilini and Savulescu: using an “artificial moral advisor” to overcome evolutionary limitations of human decision-making (e.g., tendencies toward intergroup conflict).
  • Their related argument for human “moral enhancement”, in which AI is one possible enhancement alongside biological interventions.
  • These arguments radically challenge presumptions in favour of unaided human moral reasoning and decision-making.

Questions

  • What should be the role of machine learning in making decisions about rights, opportunities and the application of laws/policies?
  • As LLM technology evolves: how strong does the case remain for the superiority of human decision-making?
  • How, if at all, do LLMs change our appraisal of the risk that outliers such as people with disabilities will be discriminated against in algorithmic decisions or recommendations?
  • What are the implications for system design and bias mitigation strategies?
  • Are people entitled to a human decision? If so, under what conditions?

Machine Learning and User Interface Accessibility

  • Returning to the traditional topic of user interface accessibility and assistive technology.
  • What are the potential contributions and challenges posed by machine learning-based artificial intelligence here?
  • For simplicity, I consider first the role of AI/ML in the creation of user interfaces and digital content, then its role as an assistive technology invoked by the user.

Machine Learning in User Interface and Digital Content Development

  • Potential to apply LLMs to programming tasks involving the accessibility of user interfaces (i.e., automated source code generation and manipulation).
  • Their potential to generate image descriptions during the development and maintenance of documents and Web sites (see the sketch following this list).
  • Their potential, via improved automatic speech recognition, to add captions to video.
  • Sign language translation, by contrast, is still considered a difficult problem.
  • Potential for automatic summarization of text, or conversion of documents to better meet the linguistic needs of people with learning/language/cognitive disabilities.
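
A minimal sketch of authoring-time image description (my illustration, not from the talk), assuming the OpenAI Python SDK; the model name and prompt are assumptions, and the output still requires human review before publication.

```python
# Draft alt text for an image with a multimodal LLM; an author reviews it.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_alt_text(image_path: str) -> str:
    """Return a candidate image description for an author to review."""
    with open(image_path, "rb") as f:
        data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute any multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write one-sentence alt text for this image, "
                         "suitable for a screen-reader user."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# The draft is a starting point only: the author must verify it against
# the image, since plausible descriptions may be inaccurate.
print(draft_alt_text("figure1.png"))
```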

Consequences for Anti-Discrimination Policy

  • Code or content generated by AI/ML in the authorial process can be subject to human review and correction.
  • The basic scheme of disability discrimination law: limiting the duty not to discriminate by the cost imposed upon the provider of goods or services.
  • Notions of “undue burden”, “unjustifiable hardship”, etc.
  • This legal arrangement is rightly controversial in that it constrains disability rights.
  • Insofar as AI/ML improves productivity, it may expand the scope of an individual’s or organization’s anti-discrimination obligations by bringing more accessibility-related tasks within the range of costs that the law treats as justifiable.

Potential Challenges

  • Inadequate human review and correction of automatically generated or revised code/digital content.
  • The risk that uncorrected or inadequately corrected material is imposed on users with disabilities.
  • This is discriminatory, but complaint-based legal procedures establish barriers to enforcement of the law.
  • Possible outcome: widely varied quality of implementation, perhaps even more so than prior to the introduction of AI/ML.
  • Negative effects on users - discussed further below.

Application of AI/ML to Assistive Technology

  • I propose to discuss an example before considering the general case.
  • Illustration: using LLMs to describe images (e.g., on the Web).
  • Generated descriptions are often more detailed than those typically written by human authors.
  • Automated descriptions are more flexible - the user can ask follow-up questions of the multimodal LLM (see the sketch following this list).
  • Generated descriptions are often plausible, even in the context in which the image appears.
  • They may also be highly inaccurate.
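
The user-side interaction might look as follows - a sketch of mine under the same SDK assumptions as above, with the dialogue history retained so the user can probe an initial description further.

```python
# Ask follow-up questions about an image via a multimodal LLM.
from openai import OpenAI

client = OpenAI()

def ask_about_image(image_url: str, question: str,
                    history: list | None = None) -> tuple[str, list]:
    """Pose a question about an image, preserving conversation history."""
    history = history or []
    history.append({
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    })
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer, history

url = "https://example.com/chart.png"  # placeholder image
first, history = ask_about_image(url, "Describe this image briefly.")
# The user can probe the description - but cannot independently verify
# the answers, which is the core difficulty discussed next.
detail, history = ask_about_image(url, "What values does the chart show?",
                                  history)
```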

Challenges of Automated Image Descriptions

  • If a description is plausible in context, a user who is blind cannot determine its accuracy without human assistance.
  • The user has to decide whether to obtain human assistance - possibly at social or financial cost.
  • The creator of the image contributes nothing to its accessibility.
  • In fact, there may be few, if any, reliable means of creating an image so that it is described accurately by the LLM.

Inaccessibility Necessitates “Invisible Work”

  • Grue’s application to disability of the concept of “invisible work” developed in feminist theory.
  • Invisible work: originally conceived as the unpaid/unacknowledged work performed by women in a patriarchal society.
  • Extension of the concept to unpaid/unacknowledged work done by people with disabilities in virtue of living in a discriminatory, ableist society.

LLMs Demand Invisible Work

  • The need for the user to monitor the output of the LLM and to obtain assistance in the event of failure.
  • Unrecognized LLM inaccuracy may cause users to fail to complete tasks involving user interfaces or digital content.
  • Users may unduly trust the output of LLMs.

The General Case: Artificial Agents as Assistive Technologies

  • Proposal by Gregg Vanderheiden et al. for the long-term development of an artificial agent that can
    • Interact with user interfaces much as an average human being would do.
    • Make the task accessible to the user, satisfying the user’s access-related needs and preferences.

Recent Progress

  • Artificial agents that interact with Web-based interfaces have been announced/demonstrated.
  • Some applications (e.g., by Google and Microsoft) support interventions by LLM-based tools directly.
  • The second component of Vanderheiden’s proposal - the flexible capacity to communicate with the user to make the interaction accessible - is not yet well developed.
  • Independently of Vanderheiden’s proposal, it seems likely that artificial agents (interacting with applications via APIs or graphical interfaces) will continue to be the subject of research and software development efforts.
  • The following discussion applies to Vanderheiden’s proposal and to artificial agents used to enhance accessibility more generally.

Principal Advantages of Vanderheiden’s Proposal

  • Addressing the problem of widespread non-compliance with technical accessibility standards by reducing the interface implementer’s responsibility for accessibility.
  • If a UI can be used by a typical person, Vanderheiden suggests, it can be operated by his proposed agent.
  • If a single agent is freely available to the public, UI developers can meet accessibility requirements by making their interfaces interpretable by this agent.
  • Meeting access needs that are poorly served by current assistive technologies (consider, e.g., language/learning/cognitive disabilities especially).
  • Vanderheiden argues that existing accessibility policies and efforts should continue unchanged, at least while such an agent is under development.

Do Artificial Agents Serving as an Assistive Technology Impose Invisible Work on Users?

  • It is important to consider how interactions conducted via artificial agents may fail.
  • Assumption: if UI developers start relying on the presence of artificial agents, conventional accessibility standards may not be followed as a cost reduction measure (even if regulations are not formally revised).
  • Conclusion: the underlying UI is likely to be inadequately accessible to the user, hence the need for the agent.
  • The user can’t reliably examine the underlying UI of the application to judge the correctness of the agent’s conduct or the information it provides.
  • Risk: erroneous information or actions by the agent may be indistinguishable from correct functioning (from the user’s perspective).
  • Recall the earlier example of image descriptions.
  • The user may need to monitor the operation of the agent and engage human intervention in the event of suspected failures/shortcomings.

Can User Interfaces Be Designed or Enhanced to Support Interaction by Agents?

  • As in the case of image descriptions, if the agent’s interactions with a UI fail to meet users’ expectations, is there anything the UI implementer can do to correct it?
  • Risk: there may not be UI design guidance that can be followed to ensure reliable interaction via artificial agents (including Vanderheiden’s proposed agent).

Can Artificial Agents Used as Assistive Technologies Be Incrementally Improved?

  • Risk: enhancing an artificial agent to perform better on some tasks and for some users may result in a decline in performance in other circumstances.
  • It may not be possible to improve agents incrementally.
  • The result would be inconsistent accessibility as experienced by users with disabilities, including improvements and regressions over time.

Addressing the Risks

  • Proposal: treating the risks as design requirements for agents.
  • Agent failures must occur in informative and predictable ways, allowing users to detect them and take action (see the sketch following this list).
  • There should be design guidance UI implementers can follow to make their interfaces reliably agent-interpretable (applies to GUI interactions, possibly also to APIs).
  • It must be possible to improve the performance of agents incrementally (without causing declines elsewhere).
  • Recall that these requirements need to be met across a range of applications and users’ access needs.
  • Question: are the design requirements jointly satisfiable?
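
The first requirement could be expressed as an agent API whose results report failure explicitly rather than silently. The sketch below is my own and entirely hypothetical; every name in it is an assumption, not an existing interface.

```python
# Hypothetical agent-result type: failures are explicit and predictable.
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    COMPLETED = "completed"   # action verified against the UI
    UNCERTAIN = "uncertain"   # action taken, result unverified
    FAILED = "failed"         # action could not be completed

@dataclass
class AgentResult:
    outcome: Outcome
    summary: str              # user-facing account of what was done
    evidence: list[str] = field(default_factory=list)  # UI states observed
    fallback: str = "ask_human"  # suggested next step on failure

def report(result: AgentResult) -> str:
    """Render a result so that failure is detectable by the user."""
    if result.outcome is Outcome.COMPLETED:
        return f"Done: {result.summary}"
    # Never present uncertain or failed actions as successes.
    return (f"Attention: {result.summary} "
            f"(status: {result.outcome.value}; suggested: {result.fallback})")

print(report(AgentResult(Outcome.UNCERTAIN,
                         "Submitted the form, but no confirmation was found.")))
```

A structure of this kind would let assistive technologies surface the status field rather than presenting every agent action as a success, addressing the detection problem raised above.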

Conclusions

  • How will machine learning (including large language models) affect people with disabilities, especially after the media attention and speculative investing surrounding AI have evaporated?
  • The need for an interdisciplinary approach to algorithmic discrimination that includes people with disabilities and people with disability-related expertise.
  • Using AI/ML to solve problems of accessibility and to create powerful assistive technologies raises complex issues for ML researchers, implementers, and policy specialists.

Acknowledgments

  • Clayton Lewis, University of Colorado Boulder.
  • Participants in the Research Questions Task Force of the W3C’s Accessible Platform Architectures Working Group.

References

  • See generally White, Jason J.G. “Artificial Intelligence and People with Disabilities: A Reflection on Human–AI Partnerships.” Humanity Driven AI: Productivity, Well-being, Sustainability and Partnership. Cham: Springer International Publishing, 2021. 279-310, and the works cited therein.
  • Giubilini, Alberto, and Julian Savulescu. “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence.” Philosophy & Technology 31.2 (2018): 169-188.
  • Grue, Jan. “The CRPD and the Economic Model of Disability: Undue Burdens and Invisible Work.” Disability & Society 39.12 (2024): 3119-3135.
  • Vanderheiden, Gregg, and Crystal Yvette Marte. “Will AI Allow Us to Dispense with All or Most Accessibility Regulations?” Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 2024.