Jul. 9th, 2021

kestrell: (Default)
Posted to Slate
BY AMBER M. HAMILTON
JULY 07, 2021, 1:55 PM

Algorithmic bias is a function of who has a seat at the table.
In late June, the MIT Technology Review reported on the ways that some of the world’s largest job search sites—including LinkedIn, Monster, and ZipRecruiter—have attempted to eliminate bias in their artificial intelligence job-interview software.
https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/
These remedies came after incidents in which A.I. video-interviewing software was found to discriminate against people with disabilities that affect facial expression
https://benetech.org/about/resources/expanding-employment-success-for-people-with-disabilities-2/
and to exhibit bias against candidates identified as women.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
When artificial intelligence software produces differential and unequal results for marginalized groups along lines such as race,
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
gender,
https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=16920ec0534f
and socioeconomic status,
https://www.nytimes.com/2018/05/04/books/review/automating-inequality-virginia-eubanks.html
Silicon Valley rushes to acknowledge the errors, apply technical fixes, and apologize for the differential outcomes. We saw this when
Twitter apologized after its image-cropping algorithm was shown to automatically focus on white faces over Black ones
https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm
and when TikTok expressed contrition for a technical glitch that suppressed the Black Lives Matter hashtag.
https://www.cnbc.com/2020/06/02/tiktok-blacklivesmatter-censorship.html
They claim that these incidents are unintentional moments of unconscious bias or bad training data spilling over into an algorithm—that the bias is a bug, not a feature.

But the fact that these incidents continue to occur across products and companies suggests that discrimination against marginalized groups is actually central to the functioning of technology. It’s time that we see the development of discriminatory technological products as an intentional act done by the largely white, male executives of Silicon Valley
https://revealnews.org/article/heres-the-clearest-picture-of-silicon-valleys-diversity-yet/
to uphold the systems of racism, misogyny, ableism, classism, and other axes of oppression that privilege their interests and create extraordinary profits for their companies. And though these technologies are made to appear benevolent and harmless, they are instead emblematic of what Ruha Benjamin, professor of African American Studies at Princeton University and the author of Race After Technology, terms “the New Jim Code”: new technologies that reproduce existing inequities while appearing more progressive than the discriminatory systems of a previous era.

It’s time for us to reject the narrative that Big Tech sells—that incidents of algorithmic bias are a result of using unintentionally biased training data or unconscious bias. Instead, we should view these companies in the same way that we view education and the criminal justice system: as institutions that uphold and reinforce structural inequities regardless of good intentions or behaviors of the individuals within those organizations. Moving away from viewing algorithmic bias as accidental allows us to implicate the coders, the engineers, the executives, and CEOs in producing technological systems that are
less likely to refer Black patients for care,
https://www.nature.com/articles/d41586-019-03228-6
that may cause disproportionate harm to disabled people,
https://slate.com/technology/2020/02/algorithmic-bias-people-with-disabilities.html
and that discriminate against women in the workforce.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
When we see algorithmic bias as a part of a larger structure, we get to imagine new solutions to the harms caused by algorithms created by tech companies, apply social pressure to force the individuals within these institutions to behave differently, and create a new future in which technology isn’t inevitable, but is instead equitable and responsive to our social realities.

Read the rest of the article at
https://slate.com/technology/2021/07/silicon-valley-algorithmic-bias-structural-racism.html#main
kestrell: (Default)
I just deleted my Facebook account. Facebook has never been as accessible as I thought it should be, and it's always been a pain in the ass to use. In addition, this recent move to sell advertising on Oculus so it appears in people's XR headsets is really creepy: seriously, sighted people, advertisers are not going to be happy until they can feed advertising to you in your sleep.

I signed up for many of these social media accounts when I was in the media studies program at MIT, when social media seemed all shiny and new and full of promise, but the accessibility hasn't improved all that much since then, while the privacy and security risks have increased exponentially, especially over the past year. It's past time to sign off.

My next step is to delete my Twitter account.

So don't panic: I'm not dead, I'm just not on Facebook or Twitter anymore.

