Department of Ethnic Studies Celebration of Black History Month Takes on 21st Century Technology
February 22, 2024

While we often reflect on the history of the Civil Rights Movement, Black History Month is about far more than Martin Luther King Jr.’s “I Have a Dream” speech. This is evidenced by Stanislaus State’s Sixth Annual Black Power Matters program, sponsored by the Ethnic Studies Department. 

“Navigating Power and Inequality in the Age of Artificial Intelligence,” scheduled for 6-8 p.m. Wednesday, Feb. 28, in Room 202 of the University Student Center, will feature keynote speaker Nicole Turner-Lee of the Brookings Institution, where she is a senior fellow in governance studies and director of the Center for Technology Innovation. 

The focus of the program, which will include brief remarks from Stan State Vice President and Chief Diversity, Equity and Inclusion Officer Sacha Joseph-Mathews and the Black Student Union’s Chioma Chibuko, is how racial bias impacts artificial intelligence (AI), just as it impacts other sectors of life. 

“Recently I was reading ‘The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now’ by Hilke Schellmann,” said Goshu Tefera, an assistant professor in the Ethnic Studies Department and the event’s organizer. “Injustices and inequalities find their way into newer and emerging technologies, perpetuating the already existing imbalances and inequities.” 

Turner-Lee travels extensively to speak about bias in AI. Her most recent book is “Digitally Invisible: How the Internet is Creating the New Underclass.”  

At the Brookings Institution, a Washington, D.C.-based public policy organization, Turner-Lee researches public policy designed to bring equitable access to technology across the U.S. and to use its power to change communities around the world. She studies internet governance issues and how global and domestic broadband is deployed. 

More recently, her focus has turned to biases in machine learning algorithms, the digital divide, 5G mobile infrastructure and telehealth.  

“As we are increasingly having our lives dictated by AI, a whole community is being left out and the technologies are only working for people with opportunity and access.” 

- Goshu Tefera, Assistant Professor of Ethnic Studies

Tefera assisted Black Power Matters founder Mary Roaf in planning last year’s program and organized this year’s while Roaf began serving as department chair. Tefera was intrigued by the topic when reading about the work of fellow Ethiopian Abeba Birhane. 

Birhane, a professor at Trinity College Dublin in Ireland, and her colleague Vinay Uday Prabhu found that the 80 Million Tiny Images database, created at the Massachusetts Institute of Technology (MIT) using material found through internet search engines, may have contaminated AI systems trained on it with racist, misogynistic and other slurs. 

The researchers found that linking images to slurs and offensive language infuses prejudice and bias into AI and machine learning models. In response to the research, MIT apologized and withdrew the database. 

Birhane continues to be a leader in the field. Last year, she was named to Time magazine’s inaugural list of the 100 Most Influential People in Artificial Intelligence. 

Tefera sought a scholar to address the subject at the Black Power Matters program and felt fortunate Turner-Lee was available. 

“She has extensive experience, but she’s very busy,” Tefera said, noting Turner-Lee’s schedule has taken her to Vancouver, British Columbia; Barcelona; Atlanta; Washington, D.C.; and California in recent weeks. 

He is particularly eager to hear from an authority in a field he is learning more about. 

“In my preliminary research for this event I’ve learned the problem is not necessarily due to the fact it’s run by humans, but it’s based on data sets and the availability of data we have,” he said. “If you have bad and biased data in, you have bad and biased data out.  

“Research has shown, for example, that facial analysis algorithms often demonstrate higher error rates for darker-skinned individuals, particularly Black women, compared to lighter-skinned men. This discrepancy suggests a lack of diversity in the faces that populate these databases, which leads to biased outcomes in facial recognition technologies.” 

That bias is evident in policing, education and health care in Black communities because of AI-driven predictive analysis, Tefera said. 

“As we are increasingly having our lives dictated by AI, a whole community is being left out, and the technologies are only working for people with opportunity and access. 

“People, especially Black women, are saying, ‘If technology is going to go this way, it must consider our experiences, who we are and how it affects us as well. You cannot leave us behind. We need to have access to it. We need to be represented by it,’” Tefera said. 

For example, Tefera said if he asks ChatGPT for a picture of a person in a beautiful city eating a delicious meal in a fancy restaurant, AI’s default is to show a middle-aged white man. In 2020, when he used a search engine for images of professional men’s hairstyles, all that came up were images of white men. Non-professional styles were of Black men. Though there has been some change four years later, the bias remains. 

“These kinds of injustices are prevalent and indicate deeper systemic issues within the rapidly evolving tech world,” Tefera said. “It is these inequalities that people are questioning.”