Although cochlear implants (CIs) have helped many people improve their hearing, outcomes vary greatly across individuals. Some CI users can carry on a conversation over the telephone; others gain more limited benefit and rely on visual information, such as lip-reading, or on linguistic context to understand speech. A major factor contributing to these outcomes is the programming of the speech processor. Past research has shown that each CI user requires an individualized set of programming parameters to achieve optimal performance: there is no one-size-fits-all solution, no single set of parameters that is optimal for all CI users.
Processor programming follows a procedure recommended by the manufacturer and involves, for example, adjusting electric current ranges, frequency ranges, gain factors, and the activation or deactivation of channels. These parameters are typically adjusted in the clinic on a case-by-case basis, guided by user feedback. Their interplay makes programming a very complicated task, one on which clinicians receive insufficient guidance. Clinics also have limited resources and are often inadequately reimbursed for the time they spend providing audiology services. Together, these factors lead clinicians to make a limited number of common parameter adjustments rather than explore a wider range of options that might better fit the individual user. As a result, CI users often end up with programming that is suboptimal for their own speech understanding.
This project aims to give CI users control over their device programming. The main goal is to investigate options for a consumer-driven user interface that assists the programming process. Users will explore the complex parameter space themselves, under clinical supervision, gaining opportunities for fine-tuning the CI fit that they do not have in today's typical clinics. In addition, this frees up clinicians' time for more counseling and for further individual optimization of the programming.
The interface is being developed in collaboration with Washington University in St. Louis. It will be tested and validated both there and at Gallaudet University. Team members include:
- Boojum Kwon, PhD (Investigator), Associate Professor, Department of Hearing and Language Sciences, Gallaudet University
- Jill Firszt, PhD (Consultant), Professor, Department of Otolaryngology, and Director, Cochlear Implant Program, Washington University School of Medicine
- Laura Holden, AuD, CCC-A (Consultant), Research Scientist, Cochlear Implant Program, Washington University School of Medicine
- Jeffrey Cooper (Student Research Assistant), PhD Student, Department of Hearing, Speech, and Language Sciences, Gallaudet University