Deaf/Hard of Hearing Technology Rehabilitation Engineering Research Center
Improving the Accessibility, Usability, and Performance of Technology for Individuals who are Deaf or Hard of Hearing
Current RERC Projects
The current Deaf/Hard of Hearing Technology Rehabilitation Engineering Research Center (DHH-RERC) supports four projects: two research projects and two development projects.
Research Projects
Education and Workplace Accessibility in Immersive Environments (Project R1). This project investigates how deaf and hard of hearing (DHH) learners best access information on extended reality (XR) platforms. The team is examining which modalities of information (e.g., captions and signing avatars) and what amount of information are most effective for adult learning in XR environments, with specific emphasis on the utility of multi-modal information displays in XR (immersive video, pass-through video, English captions, sign language avatars, and haptic/tactile feedback) and the conditions under which each can efficiently convey information.
Assistive Listening Technologies in Museums & Performing Arts Venues (Project R2). Accessibility practitioners from The John F. Kennedy Center for the Performing Arts and the Smithsonian Institution will collaborate to investigate and evaluate the viability of assistive listening technologies in museum and performing arts venues. The project will explore the functionality and usability of assistive listening technologies (such as induction loops, infrared, radio frequency, Wi-Fi, and Auracast) across cultural genres and in a range of environments: exhibition areas, classrooms and meeting rooms, information desks, large- and small-scale theaters and auditoriums, and portable systems for guided tours and programs. The project will be conducted with substantial input from user-experts and members of the DHH community at all stages, along with technical expertise from industry, ensuring fidelity and responsiveness to the user experience as a critical success factor.
Development Projects
AI-Based Voice Generation and Customization for DHH Individuals (Project D1). AI-based voice generation via text-to-speech (TTS) technology is becoming commercially available. It offers significant quality improvements over traditional TTS approaches, to the point where it is possible to clone a specific person's voice. However, these developments have not considered what DHH people need in order to benefit from such technology. Project outcomes include TTS models with customization options that let DHH individuals choose the manner of delivery through an intuitive, visually oriented user interface, as well as TTS models that can be adapted to an individual's own voice while reducing deaf accent to improve intelligibility.
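One common way existing TTS systems expose "manner of delivery" settings is through W3C SSML prosody markup. The sketch below is purely illustrative of that general pattern, not the project's implementation; the DeliveryStyle fields and their values are assumptions standing in for whatever a visually oriented interface would actually expose.

```python
from dataclasses import dataclass
from xml.sax.saxutils import escape

@dataclass
class DeliveryStyle:
    # Hypothetical "manner of delivery" settings a visual UI might expose.
    rate: str = "medium"    # e.g., "slow", "medium", "fast"
    pitch: str = "medium"   # e.g., "low", "medium", "high"
    volume: str = "medium"  # e.g., "soft", "medium", "loud"

def to_ssml(text: str, style: DeliveryStyle) -> str:
    """Wrap text in SSML prosody markup, which many TTS engines accept."""
    return (
        f'<speak><prosody rate="{style.rate}" '
        f'pitch="{style.pitch}" volume="{style.volume}">'
        f'{escape(text)}</prosody></speak>'
    )

# A slider or button in the UI would set these values before synthesis.
print(to_ssml("Hello, my name is Alex.", DeliveryStyle(rate="slow", pitch="high")))
```

Mapping visual controls onto a declarative markup layer like this keeps the customization independent of any particular TTS engine.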
Technology Readiness Indicators for Deployments of AI-Based Sign Language Recognition, Translation and Generation (Project D2). With the proliferation of artificial intelligence, there are renewed efforts to develop technologies for recognizing, translating, and generating sign language. However, compared to AI-based spoken language technologies, sign language technologies remain in their infancy. Deaf communities have widespread concerns about the harms that could result from deploying such technologies prematurely, without regard for whether they are robust enough to fulfill their stated purpose. These potential harms are especially significant in contexts that aim to reduce or replace human sign language interpretation, as well as those that aim to provide access to information in sign language. This project will develop readiness indicators to guide the implementation, procurement, and deployment of sign language technologies, with the goal of maximizing benefits to deaf stakeholders while minimizing potential harms.
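To make the idea of a readiness indicator concrete, the sketch below shows one generic way such an indicator could be computed: a weighted average over assessed criteria. The criterion names, weights, and scores here are invented for illustration only and are not the project's actual indicators.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    # Hypothetical readiness criterion; names and weights are illustrative only.
    name: str
    weight: float  # relative importance of this criterion
    score: float   # assessor rating on a 0-1 scale

def readiness_score(criteria: list[Criterion]) -> float:
    """Weighted average of criterion scores, normalized to the 0-1 range."""
    total_weight = sum(c.weight for c in criteria)
    return sum(c.weight * c.score for c in criteria) / total_weight

criteria = [
    Criterion("Accuracy for the target sign language", 0.4, 0.5),
    Criterion("Evaluation with deaf stakeholders", 0.3, 0.8),
    Criterion("Graceful fallback to human interpreters", 0.3, 0.6),
]
print(f"Readiness: {readiness_score(criteria):.2f}")  # → Readiness: 0.62
```

In practice the project's indicators would presumably weigh context heavily (e.g., an interpretation-replacement deployment demanding far higher readiness than an information-access one); a single scalar like this is only a starting point.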