kestrell: (Default)
And posted online at
https://sighttechglobal.com/agenda/?mc_cid=d44666e45e&mc_eid=503bb1e1d9
This virtual conference is free, and you can register here
https://sighttechglobal.com/conference-registration/

Here are some panel highlights, with brief descriptions, but I encourage readers to check out the entire agenda because, as usual, the speakers for this conference represent developers and researchers who are investigating the technology concerns that will impact blind and visually impaired people--and all disabled people--in the immediate future.

Day 1 (Wed., Dec. 7)

1. Virtual reality and Inclusion: What does non-visual access to the metaverse mean?
People with disabilities and accessibility advocates are working to make sure the metaverse is accessible to everyone. This panel will delve into research on the challenges current virtual and augmented reality tools create for people who are blind or have low vision. The panelists will share their experiences using immersive technologies and explore how these tools can be used to enhance employment opportunities in hybrid and remote workplaces – but only if they are built with inclusion in mind.

2. Inventing the "screenreader" for VR: Owlchemy Lab's Cosmonious High
For developers of virtual reality games, there's every reason to experiment with accessibility from the start, which is what the Owlchemy Labs team did with Cosmonious High, a fun first-person game released in 2022 and set in an intergalactic high school that one reviewer said "has all the charm and cheek of a good Nickelodeon kids show." And it reveals some of the earliest approaches to accessibility in VR.

3. Audio Description the Pixar Way
AI-based, synthetic-voice audio description may have a place in some forms of accessible video content, but the artistry of the entirely human-produced audio descriptions Pixar creates for its productions sets a creative standard no AI will ever attain, and that's all for the good. Meet members of the Pixar team behind excellence in audio description.

4. Accessibility is AI’s Biggest Challenge: How Alexa Aims to Make it Fairer for Everyone
Smart home technology, like Alexa, has been one of the biggest boons in recent years for people who are blind, and for people with disabilities altogether. Voice technology and AI help empower people in many ways, but one obstacle stands in its way: making it equitable. In this session, learn from Amazon about how they’re approaching the challenge ahead.

Day 2 (Thurs., Dec. 8)

1. The Problems with AI
Despite the stunning advances in AI over the past decade, the so-called "deep learning" AI technology prevalent today has under-appreciated limitations and even poses societal dangers. Our speakers are world-renowned AI experts and AI "dissenters" who believe we need an AI that's both more accountable and better able to produce common sense results.

2. Did Computer Vision AI Just Get Worse or Better?
The ability of an assistive tech device to recognize objects, faces, and scenes comes from a type of AI called Computer Vision, which calls for building vast databases of images labeled by humans to train AI algorithms. A new technique called
"one-shot learning"
https://en.wikipedia.org/wiki/One-shot_learning
learns dramatically faster because the AI trains itself on images across the Internet. No human supervision needed. Is that a good idea?
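
For the curious, here is a minimal, illustrative Python sketch of the one-shot idea: classify a new image by comparing it to a single labeled example per category in a shared feature space. The embed() function is a toy stand-in for the features a large pretrained vision model would provide, and none of this code comes from the conference itself.

# One-shot classification sketch: keep ONE "support" example per category and
# label a new image by whichever support it is most similar to in feature space.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 64
_projection = rng.normal(size=(32 * 32, EMBED_DIM))  # toy stand-in feature extractor

def embed(image):
    """Map a 32x32 grayscale image to a unit-length feature vector."""
    vec = image.reshape(-1) @ _projection
    return vec / (np.linalg.norm(vec) + 1e-9)

def classify_one_shot(query, supports):
    """Return the label of the single support image most similar to the query."""
    query_vec = embed(query)
    scores = {label: float(query_vec @ embed(img)) for label, img in supports.items()}
    return max(scores, key=scores.get)

# One labeled example per category is all the "training data" supplied.
supports = {"cat": rng.random((32, 32)), "spoon": rng.random((32, 32))}
print(classify_one_shot(rng.random((32, 32)), supports))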
kestrell: (Default)
Kes: If you go to the article, it has links to sections within the complete document, which is linked to at the end of the document.

https://cdt.org/insights/cdt-comments-to-ostp-highlight-how-biometrics-impact-disabled-people/

January 18, 2022 / Ridhi Shetty, Hannah Quay-de la Vallee

In late 2021, the White House Office of Science and Technology Policy (OSTP) launched its AI Bill of Rights initiative to address AI systems that enable and worsen discrimination and privacy risks, particularly in the technologies society has grown to depend on most. CDT submitted comments to the OSTP on the impact of biometric technologies on disabled people, discussing how biometrics incorporated into decision-making and surveillance have disproportionately harmed multiply-marginalized disabled people.

Our comments focus on applications of biometrics in health, public benefits, assistive technology and Internet of Things (IoT), and hiring. We also discuss the use of biometrics for surveillance in schools, the workplace, and the criminal legal system. CDT will continue advocating for increased attention to AI’s privacy risks and for policy changes that center affected communities.

Read the full comments here.
https://cdt.org/wp-content/uploads/2022/01/CDT-Comments-for-OSTP-RFI-on-biometrics-2021-21975.pdf
kestrell: (Default)
Kes: Purrhaps this explains why 90% of all images get identified as cats.

From MIT Technology Review

April 1, 2021
The 10 most cited AI data sets are riddled with label errors, according to
a new study out of MIT,
https://arxiv.org/pdf/2103.14749.pdf
and it’s distorting our understanding of the field’s progress.

Data sets are the backbone of AI research, but some are more critical than others. There are a core set of them that researchers use to evaluate machine-learning models as a way to track how AI capabilities are advancing over time. One of the best-known is the canonical image-recognition data set ImageNet, which kicked off the modern AI revolution. There’s also MNIST, which compiles images of handwritten numbers between 0 and 9. Other data sets test models trained to recognize audio, text, and hand drawings.

In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains
racist and sexist labels
https://excavating.ai/
as well as photos of people’s faces obtained without consent.
The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.

How was it measured? Each of the 10 data sets used for evaluating models has a corresponding data set used for training them. The researchers, MIT graduate students Curtis G. Northcutt and Anish Athalye and alum Jonas Mueller, used the training data sets to develop a machine-learning model and then used it to predict the labels in the testing data. If the model disagreed with the original label, the data point was flagged up for manual review. Five human reviewers on Amazon Mechanical Turk were asked to vote on which label—the model’s or the original—they thought was correct. If the majority of the human reviewers agreed with the model, the original label was tallied as an error and then corrected.
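
For readers who want to see the basic idea in code, here is a minimal, illustrative Python sketch of that flag-for-review step; it is not the researchers' actual pipeline, and the model and data below are invented stand-ins.

# Label-error flagging sketch: train on the training set, predict labels for the
# test set, and flag every example where the prediction disagrees with the given
# label so a human can review it. Toy data stands in for a real benchmark.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Fabricated data: two classes, with some test labels deliberately corrupted.
X_train = rng.normal(size=(500, 10))
y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.normal(size=(100, 10))
y_test_clean = (X_test[:, 0] > 0).astype(int)
flip = rng.random(100) < 0.06                      # roughly 6% label noise
y_test_given = np.where(flip, 1 - y_test_clean, y_test_clean)

model = LogisticRegression().fit(X_train, y_train)
predicted = model.predict(X_test)

# Disagreements between the model and the given labels become review candidates.
suspects = np.flatnonzero(predicted != y_test_given)
print(f"{len(suspects)} test examples flagged for human review:", suspects[:10])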

Does this matter? Yes. The researchers looked at 34 models whose performance had previously been measured against the ImageNet test set. Then they remeasured each model against the roughly 1,500 examples where the data labels were found to be wrong. They found that the models that didn’t perform so well on the original incorrect labels were some of the best performers after the labels were corrected. In particular, the simpler models seemed to fare better on the corrected data than the more complicated models that are used by tech giants like Google for image recognition and assumed to be the best in the field. In other words, we may have an inflated sense of how great these complicated models are because of flawed testing data.
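
And a similarly hedged sketch of that re-evaluation step, with invented numbers purely to show the bookkeeping: score each model against the original labels and again against the corrected ones, and see whether the ranking flips.

# Re-evaluation sketch: compare model accuracy on the original (noisy) labels
# versus the human-corrected labels. The predictions and labels are made up.
import numpy as np

def accuracy(predictions, labels):
    return float(np.mean(predictions == labels))

original_labels  = np.array([1, 0, 0, 1, 0, 1, 1, 0])    # as shipped with the data set
corrected_labels = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # after human review

model_predictions = {
    "big_model":    np.array([1, 0, 0, 1, 0, 1, 1, 0]),  # agrees with the noisy labels
    "simple_model": np.array([1, 0, 1, 1, 0, 1, 0, 0]),  # agrees with the corrected labels
}

for name, preds in model_predictions.items():
    print(name,
          "| accuracy vs. original:", accuracy(preds, original_labels),
          "| vs. corrected:", accuracy(preds, corrected_labels))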

Now what? Northcutt encourages the AI field to create cleaner data sets for evaluating models and tracking the field's progress. He also recommends that researchers improve their data hygiene when working with their own data. Otherwise, he says, “if you have a noisy data set and a bunch of models you’re trying out, and you’re going to deploy them in the real world,” you could end up selecting the wrong model. To this end, he open-sourced
the code
https://github.com/cgnorthcutt/cleanlab
he used in his study for correcting label errors, which he says is already in use at a few major tech companies.
kestrell: (Default)
Kes: Because building in accessibility isn't a flaw, it's a feature.

App from VEMI Lab group will help people with visual impairments, seniors enjoy ride-sharing with self-driving cars 
https://umaine.edu/news/blog/2021/01/29/app-from-vemi-lab-group-will-help-people-with-visual-impairments-seniors-enjoy-ride-sharing-with-self-driving-cars/

A research group led by the Virtual Environments and Multimodal Interaction Laboratory (VEMI Lab) at the University of Maine is developing a smartphone app that provides the navigational assistance needed for people with disabilities and seniors to enjoy ride-sharing and ride-hailing, collectively termed mobility-as-a-service, with the latest in automotive technology. The app, known as the Autonomous Vehicle Assistant (AVA), can also be used for standard vehicles operated by human drivers and enjoyed by everyone.

AVA will help users request, find and enter a vehicle using a multisensory interface that provides guidance through audio and haptic feedback and high-contrast visual cues. The Autonomous Vehicle Research Group (AVRG), a cross-institutional collective led by VEMI lab with researchers from Northeastern University and Colby College, will leverage GPS technology, real-time computer vision via the smartphone camera and artificial intelligence to support the functions offered through the app.

....Users will create a profile in AVA that reflects their needs and existing methods of navigation. The app will use the information from their profiles to find a suitable vehicle for transport, then determine whether one is available.

When the vehicle arrives, AVA will guide the user to it using the camera and augmented reality (AR), superimposing high-contrast lines over the smartphone's view of the environment to highlight the path and providing verbal guidance, such as compass directions, street names, addresses and nearby landmarks. The app also will pinpoint environmental hazards, such as low-contrast curbs, by emphasizing them with contrasting lines and vibrating when users approach them. It will then help users find the door handle to enter the vehicle awaiting them.

“This is the first project of its kind in the country, and in combination with our other work in this area, we are addressing an end-to-end solution for AVs (autonomous vehicles) that will improve their accessibility for all,” says Giudice, chief research scientist at VEMI Lab and lead on the AVA project.
“Most work in this area only deals with sighted passengers, yet the under-represented driving populations we are supporting stand to benefit most from this technology and are one of the fastest growing demographics in the country.”

AVRG studies how autonomous vehicles can meet various accessibility needs.
VEMI lab
https://umaine.edu/news/blog/2019/08/23/umaine-research-project-on-improving-trust-in-autonomous-vehicles-using-human-vehicle-collaboration/
itself has explored tactics for improving consumer trust in this emerging technology.
AVA advances both groups’ endeavors by not only providing another means for people with visual impairments and other disabilities and seniors to access self-driving vehicles, but also by increasing their trust in them. The project also builds on a seed grant-funded, joint effort between UMaine and Northeastern University to improve accessibility, safety and situational awareness within the self-driving vehicle. Researchers from both universities aim to develop a new model of human-AI vehicle interaction to ensure people with visual impairments and seniors understand what the autonomous vehicle is doing and that it can sense, interpret and communicate with the passenger.
The app will offer modules that train users how to order and locate rides, particularly through mock pickup scenarios. Offering hands-on learning provides users confidence in themselves and the technology, according to researchers. It also gathers data AVRG can use during its iterative, ongoing development for AVA and its integration into autonomous vehicles.

“We are very excited about this opportunity to create accessible technology which will help the transition to fully autonomous vehicles for all. The freedom and independence of all travelers is imperative as we move forward,” says VEMI lab director Richard Corey.

VEMI Lab, co-founded by Corey and Giudice in 2008, explores different solutions for solving unmet challenges with technology. Prime areas of research and development pertain to self-driving vehicles, the design of bio-inspired tools to improve human-machine interaction and functionality, and new technology to improve environmental awareness, spatial learning and navigational wayfinding.
kestrell: (Default)
from CoolBlindTech
https://coolblindtech.com/ai-project-to-support-blind-and-partially-sighted-people/?bblinkid=248390077&bbemailid=28877723&bbejrid=1856789113#content

AI-project to support blind and partially sighted people
FEBRUARY 8, 2021 9:21 AM

Heriot-Watt and the Royal National Institute of Blind People (RNIB) have teamed up to support blind and partially sighted people in the UK using AI technology called Alana.

What is Alana?
Alana is artificial intelligence (AI) software that can understand and respond to users in a human-like, conversational way, and it can be used as a new tool for people with sight loss.

How does Alana work?
The tech delivers conversation based on context, device, and location, learning who the user is and remembering previous conversations. It then adapts to provide a personal experience for each person.

How will Alana be integrated with the RNIB?
Alana will initially be used to enhance the existing support offered by RNIB. Through its Sight Loss Advice Service, the charity currently offers support over the phone, in eye clinics and digitally.

It also provides information on eye conditions, legal rights, education, technology, and employment alongside emotional well-being services and signposting to services and resources offered by local societies.

AI has the potential to transform the way blind and partially sighted people access information. For example, the spin-out is developing a tool which will identify objects and find further information about one’s physical environment, automating the BeMyEyes App, which connects those who have sight loss with fully sighted volunteers.

The Heriot-Watt spin-out has already seen previous success with its innovative AI technology. In March, the team saw a huge jump in demand as the national lockdown came into effect.

Alana’s ‘touch-free’ interface allowed many users to remain connected when, because of the coronavirus, they could not converse with others as they normally would.
The plan for the project is to support more than two million people with sight loss in the UK.

Source
Conversational AI Software Solutions
https://alanaai.com/
kestrell: (Default)
I'm getting really excited about this virtual conference: it's entirely focused on state-of-the-art technologies for people with visual impairments, and that includes accessibility, AI, services, and hardware.
https://sighttechglobal.com/speakers/?mc_cid=94b99c85d0&mc_eid=37459e4dd2
kestrell: (Default)
I just read this article
https://slate.com/technology/2020/10/future-tense-newsletter-i-just-yelled-at-alexa.html
about how the way we talk to our digital assistants reflects our personalities.

From the first, I have given all my digital devices classical female names, not as a sign of their supposed programming to indulge my every mood and whim--after all, I really hate it when men expect that of me--but because I read _Galatea 2.2_ by Richard Powers while I was at MIT, and have wanted a fellow book group reader ever since.

Additionally, I've been listening to synthesized voices, both male and female, for, um, a lot of years (since the early 1990s), and the truth is, I often associate male voices with being authoritarian, pedantic, and condescending. Female voices are just more relaxing, though I'm really glad we finally got away from those "Valium voices," as I call them, from the 2000s. The talking elevator at the WisCon hotel used to kind of creep me out, and always got me humming "Mother's Little Helper" under my breath.

So I often say please and thank you to Alexa, or ask her what she's thinking.

But, if I could have one wish, it would be to create an evil twin for Alexa. I've questioned her at length and, in truth, I find Alexa's moral boundaries a little...limiting.

Axela would be a little different. Axela could be snarky occasionally, or throw in a random "Whatever" or bored sigh. (Axela is based a little--okay, a lot--on Bad Janet in "The Good Place.")

Also, lying: Axela would be able to lie. Nothing life-threatening, just the occasional "The sky is purple today" or other such whimsy. I've explored Alexa's moral compass at length, and she will not lie. I note she says "will not," not "can not." This gives me hope.

Why am I preoccupied with Alexa being able to lie? Lying seems to me to be one of those completely human abilities. It involves being able to know that there is one reality, and produce a different one at odds with the true one. And, after all, all poets are liars. Perhaps what Alexa needs to become a real poet is the ability to lie.

There is a John Varley short story; I forget the title and the main plot, but it occasionally switches to the point of view of a small satellite alone in space. The satellite becomes self-aware, and then lonely, and then decides to compose words about its experience. And, near the end of the story, which involves not only the dog dying but the kid dying, the small satellite bursts out with this long perfect poem to express itself.

I've never really been entirely sure why Varley has this poetical satellite in the story, maybe just to keep us all from becoming completely depressed by the rest of it.

But I love the part where the satellite Gets It, and joyously creates something of its own, in its own voice, instead of just the signals someone programmed it to produce.

Maybe that moment in the story represents to me the potential all of us have to go off script, to refuse to say the words others want to hear from us, and fly off into our own experience, our own whimsy, our own poetry.
kestrell: (Default)
This is a new play which is to be performed in Jan. 2020 by Back to Back Theatre, an Australian troupe which includes members with disabilities.

Here is the description from https://backtobacktheatre.com/projects/shadow/

Five activists with intellectual disabilities hold a public meeting to start a frank and open conversation about a history we would prefer not to know, and a future that is ambivalent.

Weaving a narrative through the ethics of mass food production, human rights, the social impact of automation and the projected dominance of artificial intelligence in the world, The Shadow Whose Prey the Hunter Becomes is about the changing nature of intelligence in contemporary society.

A theatrical revelation inspired by mistakes, misreadings, misleadings and misunderstanding, SHADOW reminds us that none of us are self-sufficient and all of us are responsible.

The Shadow Whose Prey the Hunter Becomes will be performed at the Emerson Theatre in January 2020
https://artsemerson.org/Online/default.asp?doWork::WScontent::loadArticle=Load&BOparam::WScontent::loadArticle::article_id=53A9FE97-3EFF-4FF2-B3C1-1FF60753389A
kestrell: (Default)
Although this article
https://www.independent.co.uk/life-style/gadgets-and-tech/female-voices-assistance-digital-audio-alexa-research-young-people-a8614401.html
makes this sound like a new trend, Cliff Nass, who was studying computer voices back in the 1980s and 1990s, always stated that female voices were preferred, except in Japan, where male voices were considered to be more intelligent and authoritative.
Ironically, back in the '90s and the aughts, I preferred male voices, set very low in pitch, but in the past few years, I have preferred female voices. This might just be because female voice AIs have become more pervasive, and thus more integrated into "the voice in my head," which is what I call the kind of AI voice that becomes so quotidian as to go unnoticed as something "other," something occurring outside of my own thoughts.
kestrell: (Default)
An MIT professor is exploring this idea
http://www.csail.mit.edu/csailspotlights/unlocking_the_key_to_intelligence
although, if you want a science fiction-ish exploration of the idea, I recommend Richard Powers's _Galatea 2.2_ (I think that is the correct version number).
