Developing an accessible automatic speech recognition (ASR) system that accurately processes speech from individuals who stutter, by fine-tuning on a curated dataset and applying a novel data augmentation technique that enriches the training data with diverse disfluency patterns.
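The summary above does not specify how the augmentation works; as a hypothetical illustration only, a disfluency augmentation could insert stutter-like events (part-word repetitions and filled pauses) into clean transcripts so the ASR model sees more varied disfluency patterns during fine-tuning. The function name and probability parameters below are invented for this sketch, not taken from the cited work.

```python
import random

def augment_with_disfluencies(words, p_rep=0.3, p_fill=0.15, seed=None):
    """Toy augmentation sketch: randomly insert part-word repetitions
    (e.g. 'pl- please') and filled pauses ('uh', 'um') into a transcript.
    Parameters and behavior are illustrative assumptions, not the
    method from the summarized paper."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    out = []
    for w in words:
        # part-word repetition: prepend the first two letters plus a dash
        if len(w) > 2 and rng.random() < p_rep:
            out.append(w[:2] + "-")
        # filled pause inserted before the word
        if rng.random() < p_fill:
            out.append(rng.choice(["uh", "um"]))
        out.append(w)  # the original word is always kept
    return out

# Example: augment a short utterance
print(" ".join(augment_with_disfluencies("please call stella".split(), seed=42)))
```

In practice such augmented transcripts would be paired with matching audio (synthesized or spliced), since ASR fine-tuning needs aligned speech and text; this sketch only shows the text side of the idea.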
Introducing "Signmaku" to enhance inclusivity in video-based learning for Deaf and Hard-of-Hearing learners through American Sign Language (ASL) comments.
AI-powered scene description applications offer valuable everyday tools for blind and low vision individuals, but improvements are needed to increase user satisfaction and trust.
Blind and low vision individuals have varied video accessibility preferences across different viewing scenarios, emphasizing the need for scenario-specific approaches.
The author introduces the MAIDR system to make statistical visualizations accessible to blind users through multimodal data representation, emphasizing user autonomy and control.
Customization is crucial for making visualizations accessible to blind and low vision individuals with varying needs. The author presents a model of customization using content tokens to meet design goals effectively.