kestrell: (Default)
2022-02-11 01:00 pm

What accessible comics mean to me as a blind fan

I recently read about Vizling, an app being developed by Professor Darren Defrain, who was awarded $100,000 from the National Endowment for the Humanities to help visually impaired people read comics. (The app is currently undergoing testing and is expected to launch in June.)

Some sighted people ask, "Why would a blind person care about comics?" and, since I have a personal history with comics, I thought I would write about some of my experiences.

I grew up with low vision in my left eye and total blindness in my right, but I was both a bookworm and an art student, so I was a very visual person. I was the kid in class whom other kids would ask to draw horses, unicorns, or monsters. As a teenager, I used to visit the local comics store. I even pre-ordered and waited in line to get my copy of Frank Miller's The Dark Knight Returns.

Years after I went blind, I met a man who asked me out a couple of times, but I turned him down, until one day he offered to read me a comic he had told me about. A little over a year later, we got married. The graphic novel he read to me was From Hell, and that's why I refer to Alan Moore as our Cupid. Alexx, my husband, is a serious comics geek. No, I mean, geekier than that. I mean, thanks to Alexx, I have the fact that Clancy Brown voiced Lex Luthor in Superman: The Animated Series embedded in my brain. When, in the original broadcast of Buffy the Vampire Slayer, Xander made a joke about red Kryptonite that fell flat with his female friends, I turned to my Alexander and said, "Tell me about the red Kryptonite," and he explained the reference. Now, "Tell me about the red Kryptonite" is our cue for geeking out about comics.

Thus, when I was taking a comics course with Henry Jenkins in the media studies program at MIT, Alexx was my reader and describer of comics. My personal favorite was David Mack's Echo. Echo is a deaf Native American, and she is often dismissed as merely being one of Daredevil's ex-girlfriends. But she is so much more than that. You know she is seriously kickass when Wolverine shows up as her spirit guide. HER SPIRIT GUIDE. I love you, Echo!

As for other superheroes with disabilities, I also love Oracle. Formerly Batgirl, she took the name Oracle after she became disabled, and transformed herself into a superhacker.

It's these self-transformations in response to disability and trauma, the intentional creation of personae (from the Greek word for masks) and alter egos, that fascinate me. When I lost the last of my functional vision in my early twenties, I originally thought of myself in terms of relearning how to do all the things I already did, staying the same person, rising like a phoenix from the ashes of my old life.

Then it struck me how boring that would be.

That phoenix remains the same phoenix forever, never changing.

So I decided I would be a shapeshifter, a trickster, someone who, rather than feeling compelled to stay within the lines and do everything just like everyone else (i.e., sighted/"normal" people), would instead invest all that time and energy in trying new things.

Which is how I ended up going back to college to complete my undergraduate degree, becoming a disability advocate, and then attending MIT as one of Henry Jenkins's grad students.

The heavily drawn lines of borders or frames may seem to act as restrictive boundaries but, in the comics I love, they are more like thresholds: liminal markers which the character might step, fall, fly, or explode out of at any moment. A character's persona might be "killed" figuratively or literally, through trauma, tragedy, or murder (often prompted by the hiring of a new writer and/or artist), but there is always the opportunity for some shapeshifting.

If you do a web search on the topic of comics and disability, you will find hundreds of posts by fans with disabilities, and also academic papers by scholars, some with disabilities, some not, writing about comics and people with disabilities. In recent years, however, creators of comics and movie studios have been compelled to listen to people with disabilities and frame these characters with more respect and realism. In the past few months, we've had the release of The Eternals with a deaf character, and the Disney+ series Hawkeye, which features both a main protagonist with hearing aids and Echo herself (it's rumored that Echo will also be getting her own series).

During the pandemic, Alexx and I have been reading one of my comfort comics, Squirrel Girl, and, although the comic has ended, there were a couple of novels I hadn't read. It turns out that the novels feature a junior high Doreen Green before she becomes Squirrel Girl, and she meets a friend, Ana Sophia, who is deaf. At one point, Doreen says, "Someone once said with great power comes great accessibility -- no wait, that doesn't sound right." Trust me, it was a good phrase and, like, really inspiring. (The Unbeatable Squirrel Girl: 2 Fuzzy, 2 Furious, by Shannon Hale and Dean Hale [2018].)

In trying to locate the earliest occurrence of this phrase, I came across a Rooted in Rights post by Patrick Cokley about Stan Lee's death, "With Great Power Comes Great Accessibility – How the Death of Stan Lee Affects the Disability Community," which discusses Stan Lee's creation of characters with disabilities, including his co-creation of the X-Men, who have become a major source of identification for many people with disabilities, and for many LGBT people as well.

Finally--and this is a connection with comics which I always carry with me, but which I often forget about--I have a pair of prosthetic eyes. About a decade ago, I needed to get a new pair and I decided to ask my ocularist (the technician who creates the prosthetic eyes) to make mine look like Delirium's eyes in Neil Gaiman's Sandman graphic novel. Delirium has one bright blue eye and one bright green eye and, to follow up on my idea of shapeshifters, most of Delirium's appearance--her hair color, her hair length, her style of clothing--is always changing, and she is the epitome of whimsy. What I really liked about this idea was that people with prosthetic eyes are always portrayed in media as having these absolutely obvious, ugly eye prosthetics but, in truth, prosthetic eyes are designed to match each individual person's original eyes (unless you're like me), and the technicians who create them take hours, over a period of days, to make them. I loved the idea of having prosthetic eyes based on art. Also, I met Neil Gaiman at a convention some time later, and he pronounced them "Perfect."

In closing, I want to point out that comics are a major part of our culture, whether we experience them in graphic novels, movies, novels, toys, video games, T-shirts, tattoos, or a hundred other forms of media. Media is a shared source for how we communicate with one another, how we spend time with one another, how we form our ideas of heroes, and friends, and virtues, and a dozen other concepts.

Most of all, comics are built on the foundation of being able-bodied versus disabled. All we have to do is look at one of the ultimate comics heroes, Captain America, who started out as a disabled young man who used crutches. He used to get beat up by bullies, but he always got back up again, saying, "I can do this all day." And then he participated in a secret Army experiment and became the superhero Captain America, who is not only physically strong but morally the most virtuous of the Avengers (he can even wield Thor's hammer, Mjölnir). More than any other superhero, Captain America literally embodies the synthesis of physical and moral integrity, and even social integrity in his role of "Cap," the older, fatherly leader of the Avengers.

So, considering all the ways that comics present and represent images of people with disabilities, it's past time that we find ways to make comics more accessible to people with disabilities themselves. In the disability movement we have a saying, "Nothing about us without us," and, in creating a more inclusive conversation regarding the many intersections of disability and comics, projects such as Vizling need to be supported and encouraged by both academics and fans.
kestrell: (Default)
2022-02-09 02:42 pm
Entry tags:

Online event: Seeing AI: Describing the world for people who are blind/low vision

Kes: This is one of the most useful apps for visually impaired people--I use it for everything from reading package labels to identifying my beer in the drinks cabinet. The developer is a great speaker, and he has lots of fascinating stories about how visually impaired people have found different uses for the app than he ever imagined when he initially developed it.

Seeing AI: Describing the world for people who are blind/low vision

Date: Tuesday, February 15, 2022

Description: Seeing AI is a talking camera app for people who are blind/low vision. It describes the text, people, and things around you.
Come hear about our latest developments, leveraging AI+AR to provide an immersive audio AR experience.

Speaker: Saqib Shaikh

At Microsoft, Saqib Shaikh leads teams of engineers to blend emerging technologies with natural user experiences to empower people with disabilities to achieve more - and thus to create a more inclusive world for all.
The Seeing AI project enables someone who is visually impaired to hold up their phone, and hear more about the text, people, and objects in their surroundings. It has won multiple awards, and been called "life changing" by users. Shaikh has demonstrated his work to the UK Prime Minister, and to the House of Lords. The video of the original prototype (http://youtu.be/R2mC-NUAmMk) has been viewed over three million times.

Sign up and find more info at
https://www.meetup.com/hololens-mr/events/282678622/

Hosted by: The Microsoft HoloLens and Mixed Reality Meetup
kestrell: (Default)
2022-01-13 09:33 am

Libreoffice 7.3 Will Ship With Support For Two Made-Up Languages; Klingon And Interslavic

From this week's Top Tech Tidbits newsletter:

The popular open-source office suite, LibreOffice, will support two constructed (made-up) languages from early February with the launch of LibreOffice 7.3. The two languages are Star Trek's Klingon — the language of the Klingons — and Interslavic, a language that's meant to bridge the gap between Slavic languages such as Russian and Polish.
https://www.neowin.net/news/libreoffice-73-will-ship-with-support-for-two-made-up-languages-klingon-and-interslavic/

To read the rest of this week's newsletter, or to subscribe, go to
https://toptechtidbits.com/
kestrell: (Default)
2021-12-09 10:18 am
Entry tags:

Live Text on iOS 15 and the history of OCR

This is a great little article on the history of optical character recognition (OCR) for the blind, including how it took forty years for it to become anything close to affordable, which many contend it still isn't for many visually impaired people.
https://accessibility-insights.com/2021/11/06/live-text-new-in-ios-15-is-amazing-but-it-took-us-45-years-of-technical-advancements-to-get-there/
kestrell: (Default)
2021-07-03 10:37 am

How to download an entire website for offline reading

This article provides a how-to for using an app called WebCopy:
How to Download an Entire Website for Offline Reading
https://www.makeuseof.com/tag/how-do-i-download-an-entire-website-for-offline-reading/
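
WebCopy is a GUI tool, but the core idea (crawl a site breadth-first, save each page to disk, and follow only same-domain links) can be sketched with Python's standard library. This is a minimal illustrative sketch of my own, not WebCopy's actual implementation; it ignores robots.txt, images, stylesheets, and link rewriting.

```python
import os
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html, base_url):
    """Return absolute URLs for every <a href> in the page."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urllib.parse.urljoin(base_url, link) for link in parser.links]

def mirror(start_url, out_dir, max_pages=50):
    """Breadth-first crawl of one site, saving each HTML page to out_dir."""
    domain = urllib.parse.urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    os.makedirs(out_dir, exist_ok=True)
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        # Stay on the starting site and skip pages already saved.
        if url in seen or urllib.parse.urlparse(url).netloc != domain:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page; move on
        fname = urllib.parse.quote(url, safe="") + ".html"
        with open(os.path.join(out_dir, fname), "w", encoding="utf-8") as f:
            f.write(html)
        queue.extend(extract_links(html, url))
```

A real offline-reading tool also downloads assets and rewrites links to point at the local copies, which is most of what WebCopy adds on top of this basic crawl.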
kestrell: (Default)
2021-04-05 11:03 am

When the poorly-coded are leading the blind

Kes: Purrhaps this explains why 90% of all images get identified as cats.

From MIT Technology Review

April 1, 2021
The 10 most cited AI data sets are riddled with label errors, according to a new study out of MIT (https://arxiv.org/pdf/2103.14749.pdf), and it's distorting our understanding of the field's progress.

Data sets are the backbone of AI research, but some are more critical than others. There are a core set of them that researchers use to evaluate machine-learning models as a way to track how AI capabilities are advancing over time. One of the best-known is the canonical image-recognition data set ImageNet, which kicked off the modern AI revolution. There’s also MNIST, which compiles images of handwritten numbers between 0 and 9. Other data sets test models trained to recognize audio, text, and hand drawings.

In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains racist and sexist labels (https://excavating.ai/) as well as photos of people's faces obtained without consent.

The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.

How was it measured? Each of the 10 data sets used for evaluating models has a corresponding data set used for training them. The researchers, MIT graduate students Curtis G. Northcutt and Anish Athalye and alum Jonas Mueller, used the training data sets to develop a machine-learning model and then used it to predict the labels in the testing data. If the model disagreed with the original label, the data point was flagged for manual review. Five human reviewers on Amazon Mechanical Turk were asked to vote on which label—the model's or the original—they thought was correct. If the majority of the human reviewers agreed with the model, the original label was tallied as an error and then corrected.
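
The flag-and-vote procedure described above can be sketched in a few lines of Python. This is my own illustrative sketch; the function names are invented and not taken from the study's code.

```python
def flag_suspect_labels(pred_probs, labels):
    """Return indices of examples where the model's predicted class
    disagrees with the given label -- candidates for human review."""
    flagged = []
    for i, (probs, given) in enumerate(zip(pred_probs, labels)):
        predicted = max(range(len(probs)), key=lambda c: probs[c])
        if predicted != given:
            flagged.append(i)
    return flagged

def resolve_by_vote(votes, model_label, original_label):
    """Majority vote among human reviewers decides which label stands.
    Each vote is the label a reviewer thought was correct."""
    model_votes = sum(1 for v in votes if v == model_label)
    if model_votes > len(votes) / 2:
        return model_label      # original label tallied as an error
    return original_label       # original label stands
```

For example, with predicted probabilities `[[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]` and given labels `[1, 1, 0]`, the second and third examples are flagged because the model's most likely class differs from the label. The study's released code (the cleanlab library linked below) does something more statistically careful, estimating which disagreements are likely true errors rather than model mistakes.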

Does this matter? Yes. The researchers looked at 34 models whose performance had previously been measured against the ImageNet test set. Then they remeasured each model against the roughly 1,500 examples where the data labels were found to be wrong. They found that the models that didn’t perform so well on the original incorrect labels were some of the best performers after the labels were corrected. In particular, the simpler models seemed to fare better on the corrected data than the more complicated models that are used by tech giants like Google for image recognition and assumed to be the best in the field. In other words, we may have an inflated sense of how great these complicated models are because of flawed testing data.

Now what? Northcutt encourages the AI field to create cleaner data sets for evaluating models and tracking the field's progress. He also recommends that researchers improve their data hygiene when working with their own data. Otherwise, he says, "if you have a noisy data set and a bunch of models you're trying out, and you're going to deploy them in the real world," you could end up selecting the wrong model. To this end, he open-sourced the code he used in his study for correcting label errors (https://github.com/cgnorthcutt/cleanlab), which he says is already in use at a few major tech companies.
kestrell: (Default)
2021-03-13 08:10 am

NaviLens is a new accessible QR code app

Kes: I was telling a housemate about this app and pondering possible applications, such as: you could post secret signs around a building just for friends who had the app and could access your tags or, if you had a large old house like ours, you could throw a themed party, such as an Alice in Wonderland party, and have the tags give different quotes/themes for different rooms.

NaviLens – The new smart digital signage for everyone! Or, as we say, QR Codes on Steroids! Free App on iOS and Android
by Blind Abilities Team
http://blindabilities.com/?p=6584#genesis-content

Show Summary:
Jeff and Pete are in the studio again, this time to chat with Javier Pita, founder and CEO of NaviLens Corp., and their remarkable product, NaviLens.
NaviLens is a new and enhanced kind of QR code, but unlike existing QR codes that can only be read from a short distance, the NaviLens tag can be detected by your smartphone camera up to 60 feet away with a 160-degree wide-angle range. This means that you will now be able to detect NaviLens tags that are almost at your 9 o’clock or 3 o’clock direction, thus being simple to find if you are blind or visually impaired. For example, you will now be able to find an indoor room if you are walking down a hallway, or another destination as you are approaching a bus stop or store front. And with NaviLens, you don’t even have to aim or point your phone as precisely as before. Jeff and Pete sometimes refer to them as "QR Codes on Steroids.”
In addition, there are home-use NaviLens tags that are free: you can print them out on your own personal printer, use them at home, and customize them for whatever purpose you want: labeling your record or CD collection, items in your fridge or pantry, clothing, medications, or anything. Then you can find things as easily as opening the NaviLens app on your phone. Again, totally free.
NaviLens is also available with pre-labeled tag kits for schools and other packages for businesses. It would be great to see this product adopted by more schools and businesses around the country. So get the word out!
The company is also rolling out a new and more dynamic feature called NaviLens 360 Vision which gives you detailed step by step, foot by foot guidance to your destination with all kinds of really cool navigational assistance, such as audible tones to guide you left or right or straight to your destination. While this new feature will be available later this month, Javier gives us a sneak preview on today’s podcast, so be sure to give it a listen, begin using the tags yourself, and think about reaching out to a local business, school or governmental agency to implement the use of NaviLens in their location - it will benefit them as well as you!
You can find out much more on their website, NaviLens.com
You can also follow them on Twitter at @NaviLens
kestrell: (Default)
2021-02-11 07:14 am

App will improve accessibility while also making autonomous vehicles more "intelligent"

Kes: Because building in accessibility isn't a flaw, it's a feature.

App from VEMI Lab group will help people with visual impairments, seniors enjoy ride-sharing with self-driving cars 
https://umaine.edu/news/blog/2021/01/29/app-from-vemi-lab-group-will-help-people-with-visual-impairments-seniors-enjoy-ride-sharing-with-self-driving-cars/

A research group led by the Virtual Environments and Multimodal Interaction Laboratory (VEMI Lab) at the University of Maine is developing a smartphone app that provides the navigational assistance needed for people with disabilities and seniors to enjoy ride-sharing and ride-hailing, collectively termed mobility-as-a-service, with the latest in automotive technology. The app, known as the Autonomous Vehicle Assistant (AVA), can also be used for standard vehicles operated by human drivers and enjoyed by everyone.

AVA will help users request, find and enter a vehicle using a multisensory interface that provides guidance through audio and haptic feedback and high-contrast visual cues. The Autonomous Vehicle Research Group (AVRG), a cross institutional collective led by VEMI lab with researchers from Northeastern University and Colby College, will leverage GPS technology, real-time computer vision via the smartphone camera and artificial intelligence to support the functions offered through the app.

....Users will create a profile in AVA that reflects their needs and existing methods of navigation. The app will use the information from their profiles to find a suitable vehicle for transport, then determine whether one is available.

When the vehicle arrives, AVA will guide the user to it using the camera and augmented reality (AR): the app superimposes high-contrast lines over the smartphone's camera image to highlight the path, and provides verbal guidance such as compass directions, street names, addresses and nearby landmarks. The app also will pinpoint environmental hazards, such as low-contrast curbs, by emphasizing them with contrasting lines and vibrating when users approach them. It will then help users find the door handle to enter the vehicle awaiting them.

“This is the first project of its kind in the country, and in combination with our other work in this area, we are addressing an end-to-end solution for AVs (autonomous vehicles) that will improve their accessibility for all,” says Giudice, chief research scientist at VEMI Lab and lead on the AVA project.
“Most work in this area only deals with sighted passengers, yet the under-represented driving populations we are supporting stand to benefit most from this technology and are one of the fastest growing demographics in the country.”

AVRG studies how autonomous vehicles can meet various accessibility needs. VEMI lab itself has explored tactics for improving consumer trust in this emerging technology (https://umaine.edu/news/blog/2019/08/23/umaine-research-project-on-improving-trust-in-autonomous-vehicles-using-human-vehicle-collaboration/).

AVA advances both groups' endeavors by not only providing another means for people with visual impairments, other disabilities, and seniors to access self-driving vehicles, but also increasing their trust in them. The project also builds on a seed grant-funded, joint effort between UMaine and Northeastern University to improve accessibility, safety and situational awareness within the self-driving vehicle. Researchers from both universities aim to develop a new model of human-AI vehicle interaction to ensure people with visual impairments and seniors understand what the autonomous vehicle is doing and that it can sense, interpret and communicate with the passenger.
The app will offer modules that train users how to order and locate rides, particularly through mock pickup scenarios. Offering hands-on learning provides users confidence in themselves and the technology, according to researchers. It also gathers data AVRG can use during its iterative, ongoing development for AVA and its integration into autonomous vehicles.

“We are very excited about this opportunity to create accessible technology which will help the transition to fully autonomous vehicles for all. The freedom and independence of all travelers is imperative as we move forward,” says VEMI lab director Richard Corey.

VEMI Lab, co-founded by Corey and Giudice in 2008, explores different solutions for solving unmet challenges with technology. Prime areas of research and development pertain to self-driving vehicles, the design of bio-inspired tools to improve human-machine interaction and functionality, and new technology to improve environmental awareness, spatial learning and navigational wayfinding.
kestrell: (Default)
2021-01-23 09:57 am

Accessible tech at CES, voice commands on Youtube, new apps and more

Accessibility Devices at CES 2021 Reflect Growing Focus on Inclusive Tech
https://www.cnet.com/health/accessibility-devices-at-ces-2021-reflect-growing-focus-on-inclusive-tech/#ftag=CAD590a51e

Podcast | Blind Tech Guys | CES 2021, BBC Apps And How To Utilise Gestures
January 19th 2021
On episode 69 of the Blind Tech Guys, they went through all that happened at CES 2021, Marco demonstrated two BBC apps and Nimer showed us how to go about setting up and utilising gestures on Android and iOS.
https://www.blindtechguys.com/69

The Intersectionality of Identities with Disability
This resource comes to us from the National Center for College Students with Disabilities (NCCSD), a federally-funded project under the U.S. Department of Education, and provides a range of disability resources for people of different races, ethnicities, cultures, LGBTQI identities, and religions (listed in no particular order). Also includes a collection of self-care and identity resources:
https://www.nccsdclearinghouse.org/intersectionality-of-identities.html

kestrell: (Default)
2020-12-30 03:00 pm

Getting augmented

I'm taking an online course on XR technology (a term that covers augmented, virtual, and mixed reality), and I decided to use a gift certificate toward purchasing a pair of Bose bluetooth audio sunglasses, which add more features to Microsoft Soundscape and other augmented reality apps for visually impaired people.

I'm setting up the Bose glasses now and, as soon as I got them paired with my iPhone, I heard "nothing is more important than your trust."

Which somehow has the opposite of the desired effect, and makes me feel that message is actually kind of creepy.

Also, the speakers on these glasses are located so close to my ears that the voice almost sounds as if it is in my head.

I found myself saying "Creeeepy," and giving the happy giggle I usually give during horror movies.

These are so *cool*.
kestrell: (Default)
2019-05-14 03:29 pm

Windows 10 Your Phone app lets Android users text from a PC

One reason I switched from Android to the iPhone was to have more dependable access to texting, so I thought this might be useful to some blind Android users.
Windows 10’s Your Phone app links your phone and PC. It works best for Android users, letting you text from your PC and wirelessly transfer photos back and forth. Notification sync and screen mirroring features are on the way, too.

The Your Phone app is a powerful and often overlooked part of Windows 10. If you’re an Android user, you can use it to text right from your PC, see all your phone’s notifications, and quickly transfer photos. If you have the right phone and PC, you can even use the Your Phone app to mirror your phone’s screen and see it on your PC.

Texting from your PC and transferring photos work right now on current stable builds of Windows 10. The notification and screen mirror features are only available for some Windows Insiders right now, but they should arrive for everyone soon.

Info and how to at
https://www.howtogeek.com/413566/why-android-users-need-windows-10s-your-phone-app/

Unfortunately, iPhone users won’t get any of that. Apple’s restrictions prevent that level of integration.

Also see All the ways Windows 10 works with Android and iPhone
https://www.howtogeek.com/361418/all-the-ways-windows-10-works-with-your-android-or-iphone/
kestrell: (Default)
2013-06-26 10:15 am
Entry tags:

Visual impairment simulator app

This is pretty damn awesome, although I'm a little disappointed that it doesn't throw in the occasional random hallucination like I used to experience, because having black dogs, little brown twisty gnomy people, and floating trees at the edge of your vision will really keep you alert.
https://itunes.apple.com/us/app/visionsim-by-braille-institute/id525114829?mt=8