Exploring Bias and Psychological Impact of AI at Howard University Symposium
Howard University researchers Dr. Lucretia Williams and Dr. Denae Ford Robinson discussed AI’s inherent biases against Black users and its psychological implications during the April 15 AI and Machine Learning Symposium. Williams focused on collecting African American speech data to improve accessibility, while Robinson explored AI as a companion and its potential emotional impact.
Artificial intelligence has woven itself into the fabric of everyday life, with innovations like ChatGPT and Siri becoming staples. However, as AI grows more sophisticated, concerns arise about its ability to serve diverse users without perpetuating biases, particularly against Black individuals. During the AI and Machine Learning Symposium at Howard University, researchers addressed these issues and proposed actionable solutions.
A pressing concern is the inherent bias that automated speech recognition systems (ASRs) exhibit, particularly against Black users. The absence of sufficient African American English data leads to alarmingly high error rates for Black users. Dr. Lucretia Williams, a senior research scientist at Howard, stated, “You shouldn’t need to code-switch to use technology.” In collaboration with Google, she and her team set out to bridge this gap by collecting 600 hours of African American speech data through community engagement events across the country.
Participants in these events shared their experiences with AI, focusing on how Black culture influences technology. To foster genuine expression, the questions posed were informal, inviting participants to speak in their natural dialects. Dr. Williams emphasized that collecting datasets requires a human-centered approach, stating, “We wanted to provide a human element while using it to improve this technology.” The dataset, entirely owned by Howard University, is expected to be publicly accessible soon.
In another realm of AI’s reach, Dr. Denae Ford Robinson of Microsoft Research explored the psychological relationship users form with AI systems. Many now perceive AI as a companion rather than a mere tool, with implications for mental well-being. Robinson noted that while AI agents such as therapy bots and “love bots” are proliferating, research on their influence remains scarce: “There’s been limited research to really understand how these AI social bots and chat bots can provide more meaningful social and emotional support…”
Through studies involving over 200 users facing psychological challenges, Robinson and her team developed a framework for discerning AI’s psychological impacts. They identified 19 specific agent behaviors linked to more than 21 psychological impacts, equipping their AI red teams with insights to improve user safety and address risks. This research underscores the critical need for qualitative, human-centered methods in AI design to ensure these technologies enhance, rather than hinder, human relationships.
The integration of artificial intelligence into daily life presents both opportunities and challenges, particularly concerning bias and psychological impact. Howard University’s continued commitment to community engagement and ethical practices in AI data collection, as demonstrated by Dr. Lucretia Williams’ research, exemplifies a proactive approach to mitigating bias against Black users. Similarly, Dr. Denae Ford Robinson’s exploration of AI’s companionship role reveals significant psychological considerations that merit attention. Overall, it is vital for advancements in AI to prioritize human-centered methodologies that foster inclusive, supportive technologies.
Original Source: thedig.howard.edu