kestrell: (Default)
I attended the first Sight Tech Global Virtual Conference last December, and I learned so much! Topics included not only technology but also disability rights and how AI bias affects visually impaired people. I encourage anyone who wants to learn about the newest technologies for visually impaired people to register for this conference, especially since it's free!

Posted to TechCrunch
https://techcrunch.com/2021/07/15/announcing-sight-tech-global-2021/

Shortly after the first Sight Tech Global event, in December last year, Apple and Microsoft announced remarkable new features for mobile phones. Anyone could point the phone camera at a scene and request a "scene description." In a flash, a cloud-based, computer vision AI determined what was in the scene and a machine-voice read the information.

Learning that "a room contains three chairs and a table" might not seem like a big advance for the sighted, but for blind or visually impaired people, the new feature was a notable milestone for accessibility technology: An affordable, portable and nearly universal device could now "see" on behalf of just about anyone.
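
Under the hood, that flow is essentially image captioning followed by text-to-speech. As a rough, hypothetical sketch (not Apple's or Microsoft's actual implementation), an off-the-shelf captioning model and a local speech engine can be chained the same way; the model name and image file below are placeholder choices.

# A minimal sketch of a "scene description" pipeline: caption an image, then
# read the caption aloud. Illustration only; Apple and Microsoft run their own
# cloud-based vision models, not the open-source libraries used here.
from transformers import pipeline  # Hugging Face image-captioning pipeline
import pyttsx3                     # offline text-to-speech engine

# Any image-to-text model could be substituted here.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def describe_scene(image_path):
    """Return a short text description of the image at image_path."""
    result = captioner(image_path)  # e.g. [{"generated_text": "a table and chairs in a room"}]
    return result[0]["generated_text"]

def speak(text):
    """Read the description aloud with the system's default voice."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    description = describe_scene("living_room.jpg")  # hypothetical local photo
    print(description)
    speak(description)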

Technologies like scene description will be on the agenda at the second annual Sight Tech Global event, December 1-2, 2021. The free, sponsor-supported, virtual and global event will convene many of the world's top technologists, researchers, advocates and founders to discuss how rapid advances in technology, many centered on AI, are altering — both improving and complicating — accessibility for people with sight loss.

Register today — it's free.
https://docs.google.com/forms/d/e/1FAIpQLSfberR7NW3F74cBNleiOVauGQ8wrSV0FcZqf1HH5X60mUrS6Q/viewform?fbzx=4093129549110261409
kestrell: (Default)
Posted to Slate
BY AMBER M. HAMILTON
JULY 07, 2021, 1:55 PM

[Image caption: Algorithmic bias is a function of who has a seat at the table. Photo: Benjamin Child]
In late June, the MIT Technology Review reported on the ways that some of the world’s largest job search sites—including LinkedIn, Monster, and ZipRecruiter—have attempted to eliminate bias in their artificial intelligence job-interview software.
https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/
These remedies came after incidents in which A.I. video-interviewing software was found to discriminate against people with disabilities that affect facial expression (https://benetech.org/about/resources/expanding-employment-success-for-people-with-disabilities-2/) and exhibit bias against candidates identified as women (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G).
When artificial intelligence software produces differential and unequal results for marginalized groups along lines such as race (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing), gender (https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=16920ec0534f), and socioeconomic status (https://www.nytimes.com/2018/05/04/books/review/automating-inequality-virginia-eubanks.html), Silicon Valley rushes to acknowledge the errors, apply technical fixes, and apologize for the differential outcomes. We saw this when Twitter apologized after its image-cropping algorithm was shown to automatically focus on white faces over Black ones (https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm) and when TikTok expressed contrition for a technical glitch that suppressed the Black Lives Matter hashtag (https://www.cnbc.com/2020/06/02/tiktok-blacklivesmatter-censorship.html).
They claim that these incidents are unintentional moments of unconscious bias or bad training data spilling over into an algorithm—that the bias is a bug, not a feature.

But the fact that these incidents continue to occur across products and companies suggests that discrimination against marginalized groups is actually central to the functioning of technology. It’s time that we see the development of discriminatory technological products as an intentional act done by the largely white, male executives of Silicon Valley (https://revealnews.org/article/heres-the-clearest-picture-of-silicon-valleys-diversity-yet/) to uphold the systems of racism, misogyny, ableism, class, and other axes of oppression that privilege their interests and create extraordinary profits for their companies. And though these technologies are made to appear benevolent and harmless, they are instead emblematic of what Ruha Benjamin, professor of African American Studies at Princeton University and the author of Race After Technology, terms "the New Jim Code": new technologies that reproduce existing inequities while appearing more progressive than the discriminatory systems of a previous era.

...It’s time for us to reject the narrative that Big Tech sells—that incidents of algorithmic bias are a result of using unintentionally biased training data or unconscious bias. Instead, we should view these companies in the same way that we view education and the criminal justice system: as institutions that uphold and reinforce structural inequities regardless of good intentions or behaviors of the individuals within those organizations. Moving away from viewing algorithmic bias as accidental allows us to implicate the coders, the engineers, the executives, and CEOs in producing technological systems that are less likely to refer Black patients for care (https://www.nature.com/articles/d41586-019-03228-6), that may cause disproportionate harm to disabled people (https://slate.com/technology/2020/02/algorithmic-bias-people-with-disabilities.html), and that discriminate against women in the workforce (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G). When we see algorithmic bias as a part of a larger structure, we get to imagine new solutions to the harms caused by algorithms created by tech companies, apply social pressure to force the individuals within these institutions to behave differently, and create a new future in which technology isn’t inevitable, but is instead equitable and responsive to our social realities.

Read the rest of the article at
https://slate.com/technology/2021/07/silicon-valley-algorithmic-bias-structural-racism.html#main
kestrell: (Default)
Kes: Purrhaps this explains why 90% of all images get identified as cats.

From MIT Technology Review

April 1, 2021
The 10 most cited AI data sets are riddled with label errors, according to a new study out of MIT (https://arxiv.org/pdf/2103.14749.pdf), and it’s distorting our understanding of the field’s progress.

Data sets are the backbone of AI research, but some are more critical than others. There are a core set of them that researchers use to evaluate machine-learning models as a way to track how AI capabilities are advancing over time. One of the best-known is the canonical image-recognition data set ImageNet, which kicked off the modern AI revolution. There’s also MNIST, which compiles images of handwritten numbers between 0 and 9. Other data sets test models trained to recognize audio, text, and hand drawings.

In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains racist and sexist labels (https://excavating.ai/) as well as photos of people’s faces obtained without consent.
The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.
How was it measured? Each of the 10 data sets used for evaluating models has a corresponding data set used for training them. The researchers, MIT graduate students Curtis G. Northcutt and Anish Athalye and alum Jonas Mueller, used the training data sets to develop a machine-learning model and then used it to predict the labels in the testing data. If the model disagreed with the original label, the data point was flagged up for manual review. Five human reviewers on Amazon Mechanical Turk were asked to vote on which label—the model’s or the original—they thought was correct. If the majority of the human reviewers agreed with the model, the original label was tallied as an error and then corrected.
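
A minimal sketch of that flagging-and-voting step, assuming a scikit-learn style classifier and hypothetical data arrays (this simplifies the confident-learning method the researchers actually used):

# Sketch of the procedure described above: train on the training split, predict
# labels for the test split, flag disagreements, and let a majority of human
# reviewers decide whether the original label was an error. Data is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def flag_suspect_labels(X_train, y_train, X_test, y_test):
    """Return indices of test examples whose given label disagrees with the model."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    predictions = model.predict(X_test)
    return np.where(predictions != np.asarray(y_test))[0]

def original_label_is_error(reviewer_votes):
    """reviewer_votes: five strings, each 'model' or 'original'.
    The original label counts as an error only if most reviewers side with the model."""
    return reviewer_votes.count("model") > len(reviewer_votes) / 2

# Example: three of five Mechanical Turk reviewers agree with the model, so the
# original label would be tallied as an error and then corrected.
print(original_label_is_error(["model", "model", "original", "model", "original"]))  # True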

Does this matter? Yes. The researchers looked at 34 models whose performance had previously been measured against the ImageNet test set. Then they remeasured each model against the roughly 1,500 examples where the data labels were found to be wrong. They found that the models that didn’t perform so well on the original incorrect labels were some of the best performers after the labels were corrected. In particular, the simpler models seemed to fare better on the corrected data than the more complicated models that are used by tech giants like Google for image recognition and assumed to be the best in the field. In other words, we may have an inflated sense of how great these complicated models are because of flawed testing data.
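
A toy version of that re-measurement shows how a model's score can shift once flagged labels are corrected; all of the data below is made up for illustration, not taken from the study.

# Toy re-benchmarking on corrected labels: score the same predictions against the
# original (erroneous) labels and against the reviewer-corrected ones. Hypothetical data.
import numpy as np

def accuracy(predictions, labels):
    return float(np.mean(np.asarray(predictions) == np.asarray(labels)))

model_predictions = ["cat", "spoon", "frog", "whistle", "dog"]      # what the model said
original_labels   = ["cat", "mushroom", "cat", "high note", "dog"]  # what the benchmark said
corrected_labels  = ["cat", "spoon", "frog", "whistle", "cat"]      # what reviewers decided

print("accuracy vs. original labels: ", accuracy(model_predictions, original_labels))   # 0.4
print("accuracy vs. corrected labels:", accuracy(model_predictions, corrected_labels))  # 0.8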

Now what? Northcutt encourages the AI field to create cleaner data sets for evaluating models and tracking the field’s progress. He also recommends that researchers improve their data hygiene when working with their own data. Otherwise, he says, “if you have a noisy data set and a bunch of models you’re trying out, and you’re going to deploy them in the real world,” you could end up selecting the wrong model. To this end, he open-sourced the code he used in his study for correcting label errors (https://github.com/cgnorthcutt/cleanlab), which he says is already in use at a few major tech companies.
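
For anyone who wants to audit their own labels, here is a minimal sketch of how the cleanlab package linked above is typically used. It assumes the newer find_label_issues interface and hypothetical input files, so check the repository for the exact API of the version you install.

# Minimal sketch of auditing a dataset with cleanlab. `labels` are the given
# integer class labels and `pred_probs` are out-of-sample predicted probabilities
# from any classifier; both files here are hypothetical placeholders.
import numpy as np
from cleanlab.filter import find_label_issues  # cleanlab 2.x interface

labels = np.load("labels.npy")          # shape (n_examples,)
pred_probs = np.load("pred_probs.npy")  # shape (n_examples, n_classes)

# Indices of examples whose given label is likely wrong, worst offenders first.
issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"{len(issue_indices)} likely label errors; review these examples first.")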
