British Sign Language (BSL) users face daily exclusion, and AI-driven BSL translation could transform accessibility. But without community input, it risks ethical pitfalls.
We are leading a ground-breaking project to ensure that AI (artificial intelligence, or ‘machine learning’) in BSL is developed ethically, inclusively, and with the right oversight. This first-of-its-kind project addresses an immediate need and will be delivered under the Sciencewise programme.
The challenges associated with machine learning and AI
Four key challenges arise from the way AI functions, each raising specific concerns about how these systems might be effectively applied to signed languages:
- Accuracy requires large volumes of data, which do not exist for signed languages.
- These systems require a written form of the language, which is not a natural part of BSL.
- Current systems struggle with fine detail, such as the hands, which is essential for communicating meaning in signed languages.
- AI systems are concerned with statistical likelihoods rather than accuracy, and they are structurally difficult to oversee.
BSL and social exclusion
There are about 151,000 BSL users in the UK; of these, 87,000 are deaf and 21,000 use it as their main language.
BSL users experience significant social exclusion and face challenges in accessing education, employment, and mainstream media. They also encounter barriers to a wide range of services, including essentials such as public transport and healthcare.
AI as a potential solution to barriers?
One development being promoted as a potential solution to these barriers is AI-enabled BSL translation.
AI-enabled BSL translation and recognition could, in theory, provide automated translation of websites and media content, or eventually even real-time translation of conversations. However, while academia and industry are increasingly active in this area, tackling the engineering challenges and developing prototypes, the wider BSL community has not been meaningfully involved.
This is important. As with all AI language models, there are issues around diversity and bias, quality assurance, and accuracy. AI in BSL poses additional issues related to cultural relevance, the protection of the language (a cultural and linguistic asset), and the use of human interpreters’ likenesses.
Most critically, there are questions about when it would or would not be appropriate to use such AI tools. For example, while providing public transport information via an AI avatar might be acceptable to most people, receiving a cancer diagnosis in this way might not be.
RNID and Sciencewise: Meaningful deliberative engagement with the BSL community
While there has been academic engagement on the topic, to date there has been no deliberative dialogue with grassroots signing communities about the development and use of AI-enabled sign language translation, or its ethical implications, in the UK or globally.
At RNID we’ll be leading a project that will change this.
It will be a first-of-its-kind, meaningful deliberative engagement with the BSL community to determine which uses of AI technologies for BSL are appropriate, and what oversight is necessary to ensure high-quality and ethical use.
This project will be delivered under the UKRI Sciencewise programme.