
Assessing Sign Language-Based and Touch-Based Interaction Methods for Deaf Users of Intelligent Personal Assistants


Core Concept
Deaf users prefer sign language-based interaction with intelligent personal assistants over touch-based methods, but usability remains a challenge across all input modalities.
Summary
This study evaluated the usability and preferences of deaf users when interacting with an intelligent personal assistant (IPA), in this case Alexa, using three input methods: American Sign Language (ASL), touch-based "Tap to Alexa", and smart home apps. The key findings are:

- Usability, as measured by the System Usability Scale (SUS), was somewhat higher for ASL input (SUS 71.6) than for the touch-based methods (Tap to Alexa SUS 61.4, apps SUS 56.3), but the differences were not statistically significant (a sketch of how SUS is computed follows below).
- In post-experiment surveys, deaf participants consistently preferred ASL input over the touch-based alternatives.
- Linguistic analysis of the participants' signing revealed a diverse range of expressions and vocabulary, with a median of 41 unique signs and 9 fingerspelled words per participant; the essential vocabulary across all participants totaled 117 signs.
- Participants exhibited cultural preferences, such as using hand-waving gestures to get the device's attention, which should be considered in the design of sign language-based IPAs.
- Challenges remain in achieving good usability across input modalities, suggesting the need for further research and development to improve the accessibility of IPAs for deaf users.
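For reference, SUS scores like those reported above are computed from ten Likert items on a 1-5 scale, with odd-numbered items worded positively and even-numbered items negatively. A minimal sketch of the standard formula (the response values below are illustrative, not the study's data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten Likert
    responses on a 1-5 scale. Odd items contribute (score - 1), even
    items contribute (5 - score); the sum is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Illustrative only: one hypothetical participant's responses.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```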
Statistics
The median vocabulary size was 41 signs plus 9 fingerspelled words per participant. Across all participants, the total vocabulary size was 246 distinct signs, 93 distinct fingerspelled words, and 18 distinct gestures. Only 117 out of the 246 total signs were considered essential for interacting with the IPA in the given smart home domain.
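A minimal sketch of how such vocabulary statistics can be derived from session annotations (the participant IDs and sign glosses below are hypothetical placeholders, not the study's data):

```python
from statistics import median

# Hypothetical annotations: participant -> list of sign glosses observed.
annotations = {
    "P01": ["ALEXA", "LIGHT", "ON", "LIGHT", "OFF", "W-I-F-I"],
    "P02": ["ALEXA", "TEMPERATURE", "UP", "MUSIC", "PLAY"],
    "P03": ["LIGHT", "ON", "WEATHER", "TODAY"],
}

# Unique signs per participant, then the median across participants.
per_participant = [len(set(glosses)) for glosses in annotations.values()]
print("median unique signs per participant:", median(per_participant))

# Total distinct signs pooled across all participants.
all_signs = set().union(*annotations.values())
print("total distinct signs:", len(all_signs))
```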
Quotes
"Fingerspelling is often, but not exclusively, used to identify titles and names." "Many participants gestured via an attention-wave to the camera, which occurred a total of 60 times. This is a culturally appropriate attention-getting technique within the deaf community."

Deeper Inquiries

How can the design of intelligent personal assistants better accommodate the diverse linguistic preferences and cultural norms of deaf users?

Intelligent personal assistants (IPAs) can better accommodate deaf users by building in first-class support for sign language communication. The most direct approach is robust sign language recognition that can accurately interpret a wide range of signs and gestures, so that deaf users can interact with an IPA in their preferred language, such as ASL.

IPAs should also allow wake words and activation methods to be customized to the cultural norms of the deaf community; for example, eye gaze, waving gestures, and name signs can serve as alternative wake methods. Beyond activation, IPAs should support multimodal communication, letting users switch seamlessly between sign language, touch-based input, and other modalities based on their preferences, so interaction remains natural and intuitive. Finally, visual feedback, captions, and tactile responses can further improve the overall accessibility and usability of IPAs for deaf individuals.
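As an illustration of the customization point, here is a minimal sketch of a wake-method registry that lets a user enable attention-wave, name-sign, or eye-gaze triggers. All class names, modality labels, and thresholds are assumptions for exposition, not any real IPA's API:

```python
from dataclasses import dataclass

@dataclass
class WakeEvent:
    modality: str      # e.g. "wave", "name_sign", "eye_gaze", "touch"
    confidence: float  # detector confidence in [0, 1]

class WakeMethodRegistry:
    """Hypothetical per-user registry of enabled wake methods."""

    def __init__(self):
        self._thresholds = {}  # modality -> minimum confidence to wake

    def enable(self, modality, min_confidence=0.8):
        self._thresholds[modality] = min_confidence

    def should_wake(self, event):
        threshold = self._thresholds.get(event.modality)
        return threshold is not None and event.confidence >= threshold

registry = WakeMethodRegistry()
registry.enable("wave")             # culturally familiar attention-wave
registry.enable("name_sign", 0.9)   # the device's name sign

print(registry.should_wake(WakeEvent("wave", 0.95)))      # True
print(registry.should_wake(WakeEvent("eye_gaze", 0.99)))  # False (not enabled)
```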

How might intelligent personal assistants leverage multimodal interaction, combining sign language, touch, and other modalities, to provide an optimal user experience for deaf individuals in smart home environments?

By combining sign language, touch, and other modalities, IPAs can cater to the diverse communication preferences of deaf users in smart home settings. Integrating sign language recognition lets users issue commands in ASL, while touch-based input remains available for tasks that are performed more efficiently by touch; together they form a versatile platform that accommodates varied needs and preferences.

Output matters as much as input: on-screen captions and visual indicators confirm what the assistant understood, and tactile mechanisms such as vibration alerts or haptic responses provide additional cues and notifications. Leveraging all of these channels together creates a more inclusive, user-friendly experience in which deaf individuals can effectively communicate with smart home devices.
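To make the multimodal idea concrete, here is a minimal sketch in which the same intent is accepted from any input modality and every response is mirrored as a caption plus an optional haptic pulse. The types and handlers are illustrative assumptions, not an existing smart home API:

```python
from dataclasses import dataclass

@dataclass
class Command:
    intent: str  # e.g. "lights_on"
    source: str  # "asl", "touch", or "app"

class FeedbackRenderer:
    """Hypothetical renderer that mirrors responses across channels."""

    def render(self, text, haptic=True):
        print(f"[caption] {text}")   # on-screen caption for every response
        if haptic:
            print("[haptic] pulse")  # tactile cue, e.g. on a paired device

def handle(command, feedback):
    # The intent is modality-agnostic, so users can switch freely
    # between ASL, touch, and app input mid-session.
    if command.intent == "lights_on":
        feedback.render("Turning the lights on.")
    else:
        feedback.render("Sorry, I didn't catch that.", haptic=False)

handle(Command("lights_on", "asl"), FeedbackRenderer())
```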

What are the key technical challenges in developing sign language recognition capabilities that can match the flexibility and expressiveness observed in human-to-human sign language interactions?

Matching the flexibility and expressiveness of human-to-human signing poses several technical challenges. First is the complexity and variability of sign language itself: meaning is carried by gestures, facial expressions, and body movements, with regional variations, dialects, and cultural nuances that recognition systems must account for.

Second, recognition requires robust machine learning models trained on diverse datasets that capture the full range of signs, gestures, and expressions used in sign language communication. Assembling such datasets demands extensive annotation and labeling, which is time-consuming and labor-intensive. Third, real-time recognition is difficult because signing is dynamic: its speed, fluidity, and simultaneous use of multiple linguistic features mean systems must interpret signs in context, handling both sequential and simultaneous structure.

Finally, accessibility and usability for deaf users is itself a design challenge. Systems must be developed with input from the deaf community, with attention to interface design, feedback mechanisms, and customization options that suit deaf individuals interacting with IPAs. A simplified sketch of one recognition stage follows.
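The sketch below normalizes pose keypoints to be signer-position-invariant and classifies a fixed window of frames. The shapes, the toy nearest-prototype classifier, and the MediaPipe-style landmark count are assumptions for exposition; a real system would use a trained temporal model over large annotated corpora:

```python
import numpy as np

NUM_KEYPOINTS = 42  # e.g. two hands x 21 landmarks (MediaPipe-style)
WINDOW = 30         # frames per classification window

def normalize(frames):
    """Center each frame on its first keypoint (e.g. the wrist) and
    scale by the largest landmark distance, so coordinates do not
    depend on where the signer stands. frames: (T, K, 2)."""
    centered = frames - frames[:, :1, :]
    span = np.linalg.norm(centered, axis=-1).max(axis=1) + 1e-8
    return centered / span[:, None, None]

def classify(window, prototypes):
    """Toy nearest-prototype classifier over a flattened window; a
    stand-in for a temporal neural network trained on diverse,
    annotated signing data. prototypes: gloss -> flat vector."""
    flat = normalize(window).reshape(-1)
    return min(prototypes, key=lambda g: np.linalg.norm(flat - prototypes[g]))

# Illustrative usage with random stand-in data.
rng = np.random.default_rng(0)
window = rng.normal(size=(WINDOW, NUM_KEYPOINTS, 2))
prototypes = {g: rng.normal(size=WINDOW * NUM_KEYPOINTS * 2)
              for g in ("LIGHT", "WEATHER", "MUSIC")}
print(classify(window, prototypes))
```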