<?xml version='1.0' encoding='utf-8' ?>

<rss version='2.0' xmlns:lj='http://www.livejournal.org/rss/lj/1.0/' xmlns:atom10='http://www.w3.org/2005/Atom'>
<channel>
  <title>Kestrell</title>
  <link>https://kestrell.dreamwidth.org/</link>
  <description>Kestrell - Dreamwidth Studios</description>
  <lastBuildDate>Mon, 05 Apr 2021 15:03:39 GMT</lastBuildDate>
  <generator>LiveJournal / Dreamwidth Studios</generator>
  <lj:journal>kestrell</lj:journal>
  <lj:journaltype>personal</lj:journaltype>
  <image>
    <url>https://v2.dreamwidth.org/12439661/307003</url>
    <title>Kestrell</title>
    <link>https://kestrell.dreamwidth.org/</link>
    <width>69</width>
    <height>100</height>
  </image>

<item>
  <guid isPermaLink='true'>https://kestrell.dreamwidth.org/407359.html</guid>
  <pubDate>Mon, 05 Apr 2021 15:03:39 GMT</pubDate>
  <title>When the poorly-coded are leading the blind</title>
  <link>https://kestrell.dreamwidth.org/407359.html</link>
  <description>Kes: Purrhaps this explains why 90% of all images get identified as cats.&lt;br /&gt; &lt;br /&gt;From MIT Technology Review&lt;br /&gt;&lt;br /&gt;April 1, 2021&lt;br /&gt;The 10 most cited AI data sets are riddled with label errors, according to &lt;br /&gt;a new study out of MIT,&lt;br /&gt;&lt;a href=&quot;https://arxiv.org/pdf/2103.14749.pdf&quot;&gt;https://arxiv.org/pdf/2103.14749.pdf&lt;/a&gt;&lt;br /&gt;and it’s distorting our understanding of the field’s progress.&lt;br /&gt;&lt;br /&gt;Data sets are the backbone of AI research, but some are more critical than others. There are a core set of them that researchers use to evaluate machine-learning models as a way to track how AI capabilities are advancing over time. One of the best-known is the canonical image-recognition data set ImageNet, which kicked off the modern AI revolution. There’s also MNIST, which compiles images of handwritten numbers between 0 and 9. Other data sets test models trained to recognize audio, text, and hand drawings.&lt;br /&gt;&lt;br /&gt;In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains &lt;br /&gt;racist and sexist labels&lt;br /&gt;&lt;a href=&quot;https://excavating.ai/&quot;&gt;https://excavating.ai/&lt;/a&gt;&lt;br /&gt; as well as photos of people’s faces obtained without consent.&lt;br /&gt;The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.&lt;br /&gt;How was it measured? Each of the 10 data sets used for evaluating models has a corresponding data set used for training them. The researchers, MIT graduate students Curtis G. 
Northcutt and Anish Athalye and alum Jonas Mueller, used the training data sets to develop a machine-learning model and then used it to predict the labels in the testing data. If the model disagreed with the original label, the data point was flagged up for manual review. Five human reviewers on Amazon Mechanical Turk were asked to vote on which label—the model’s or the original—they thought was correct. If the majority of the human reviewers agreed with the model, the original label was tallied as an error and then corrected.&lt;br /&gt;&lt;br /&gt;Does this matter? Yes. The researchers looked at 34 models whose performance had previously been measured against the ImageNet test set. Then they remeasured each model against the roughly 1,500 examples where the data labels were found to be wrong. They found that the models that didn’t perform so well on the original incorrect labels were some of the best performers after the labels were corrected. In particular, the simpler models seemed to fare better on the corrected data than the more complicated models that are used by tech giants like Google for image recognition and assumed to be the best in the field. In other words, we may have an inflated sense of how great these complicated models are because of flawed testing data.&lt;br /&gt;&lt;br /&gt;Now what? Northcutt encourages the AI field to create cleaner data sets for evaluating models and tracking the field’s progress. He also recommends that&lt;br /&gt;researchers improve their data hygiene when working with their own data. Otherwise, he says, “if you have a noisy data set and a bunch of models you’re&lt;br /&gt;trying out, and you’re going to deploy them in the real world,” you could end up selecting the wrong model. 
To this end, he open-sourced &lt;br /&gt;&lt;br /&gt;the code&lt;br /&gt;&lt;a href=&quot;https://github.com/cgnorthcutt/cleanlab&quot;&gt;https://github.com/cgnorthcutt/cleanlab&lt;/a&gt;&lt;br /&gt; he used in his study for correcting label errors, which he says is already in use at a few major tech companies.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.dreamwidth.org/tools/commentcount?user=kestrell&amp;ditemid=407359&quot; width=&quot;30&quot; height=&quot;12&quot; alt=&quot;comment count unavailable&quot; style=&quot;vertical-align: middle;&quot;/&gt; comments</description>
  <comments>https://kestrell.dreamwidth.org/407359.html</comments>
  <category>mit</category>
  <category>ai</category>
  <category>iphone</category>
  <category>digital equity</category>
  <category>ai bias</category>
  <category>apps</category>
  <category>image recognition</category>
  <category>blind</category>
  <lj:security>public</lj:security>
  <lj:reply-count>1</lj:reply-count>
</item>
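<!-- Editor's note: the auditing procedure described in the entry above (train a model on the training split, flag test examples where the model disagrees with the given label, then let five reviewers vote) can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' cleanlab code; all function names and labels are hypothetical.

```python
def flag_label_issues(original_labels, model_predictions):
    """Return indices of test examples where the trained model's
    predicted label disagrees with the original label; these are
    the candidates sent out for human review."""
    return [
        i
        for i, (orig, pred) in enumerate(zip(original_labels, model_predictions))
        if orig != pred
    ]

def resolve_by_vote(original_label, model_label, reviewer_votes):
    """Simulate the Mechanical Turk step: if a majority of reviewers
    pick the model's label, the original label is tallied as an error
    and corrected; otherwise the original label stands."""
    model_votes = sum(1 for v in reviewer_votes if v == model_label)
    if model_votes > len(reviewer_votes) / 2:
        return model_label, True   # original label counted as an error
    return original_label, False   # original label upheld
```

For example, with the study's five reviewers, resolve_by_vote("frog", "cat", ["cat", "cat", "cat", "frog", "cat"]) corrects the label to "cat" and tallies an error. -->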
<item>
  <guid isPermaLink='true'>https://kestrell.dreamwidth.org/371457.html</guid>
  <pubDate>Fri, 13 Nov 2020 12:41:55 GMT</pubDate>
  <title>Picture Smart improvements in Jaws 2021</title>
  <link>https://kestrell.dreamwidth.org/371457.html</link>
  <description>Kes: I&apos;m still amazed by this feature - the details  are listed after the URL&lt;br /&gt; &lt;br /&gt;What&apos;s New in Jaws, Zoom Text, and Fusion 2021&lt;br /&gt;&lt;a href=&quot;https://blog.freedomscientific.com/whats-new-in-jaws-zoomtext-and-fusion-2021/&quot;&gt;https://blog.freedomscientific.com/whats-new-in-jaws-zoomtext-and-fusion-2021/&lt;/a&gt; &lt;br /&gt;&lt;br /&gt;Improvements to Picture Smart&lt;br /&gt;Introduced in JAWS and Fusion 2019, the Picture Smart feature analyzes photos and displays a description in the Results Viewer, which can be read with JAWS. To use Picture Smart:&lt;br /&gt;&lt;br /&gt;Press INSERT+SPACEBAR to activate layered commands.&lt;br /&gt;Press P to activate the Picture Smart layer.&lt;br /&gt;Press A, F, C, or B for a description, as described below:&lt;br /&gt;Press A for a description of a photo acquired from a flatbed scanner or PEARL camera.&lt;br /&gt;Press F for a description of an image file you selected in Windows Explorer.&lt;br /&gt;Press C for a description of a control in focus. This can include a graphical button in a dialog box or other area of the screen.&lt;br /&gt;Press B for a description of an image on the Windows Clipboard.&lt;br /&gt;Several improvements to this feature are available in JAWS and Fusion 2021. These include:&lt;br /&gt;&lt;br /&gt;Providing descriptions of images on web pages&lt;br /&gt;Submitting images to multiple services for a more accurate analysis&lt;br /&gt;Using Picture Smart with multiple languages&lt;br /&gt;Learn more about Picture Smart in JAWS Help, or press INSERT+SPACEBAR, followed by P, then ? (question mark) for additional information.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.dreamwidth.org/tools/commentcount?user=kestrell&amp;ditemid=371457&quot; width=&quot;30&quot; height=&quot;12&quot; alt=&quot;comment count unavailable&quot; style=&quot;vertical-align: middle;&quot;/&gt; comments</description>
  <comments>https://kestrell.dreamwidth.org/371457.html</comments>
  <category>accessible images</category>
  <category>jaws</category>
  <category>accessible photos</category>
  <category>image recognition</category>
  <lj:security>public</lj:security>
  <lj:reply-count>0</lj:reply-count>
</item>
<item>
  <guid isPermaLink='true'>https://kestrell.dreamwidth.org/363519.html</guid>
  <pubDate>Thu, 22 Oct 2020 12:21:43 GMT</pubDate>
  <title>Microsoft and image recognition, 1Password, Jeopardy&apos;s online test now accessible</title>
  <link>https://kestrell.dreamwidth.org/363519.html</link>
  <description>Kes: This first link discusses advances in Microsoft&apos;s image recognition technology, which is a pretty big deal for any visually impaired person using &lt;br /&gt;Microsoft&apos;s free Seeing AI app&lt;br /&gt;&lt;a href=&quot;https://www.microsoft.com/en-us/ai/seeing-ai&quot;&gt;https://www.microsoft.com/en-us/ai/seeing-ai&lt;/a&gt;&lt;br /&gt;or the Picture Smart feature in Jaws  &lt;br /&gt;&lt;a href=&quot;https://blog.freedomscientific.com/picture-smart-in-jaws-independently-selecting-your-artwork/&quot;&gt;https://blog.freedomscientific.com/picture-smart-in-jaws-independently-selecting-your-artwork/&lt;/a&gt;&lt;br /&gt;which has been improved in the soon-to-be-released Jaws 2021&lt;br /&gt;&lt;a href=&quot;https://support.freedomscientific.com/Downloads/JAWS/JAWSPublicBeta&quot;&gt;https://support.freedomscientific.com/Downloads/JAWS/JAWSPublicBeta&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Microsoft Announces Breakthrough AI Image Captioning for Word, PowerPoint, Outlook&lt;br /&gt;&lt;a href=&quot;https://blogs.microsoft.com/ai/azure-image-captioning/&quot;&gt;https://blogs.microsoft.com/ai/azure-image-captioning/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Web Friendly Help discusses new features in NVDA &lt;br /&gt;&lt;a href=&quot;https://webfriendlyhelp.com/new-features-in-nvda-2020-3/&quot;&gt;https://webfriendlyhelp.com/new-features-in-nvda-2020-3/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Google Search Tips: Always Find What You&apos;re Looking For by Online Tech Tips&lt;br /&gt;&lt;a href=&quot;https://www.online-tech-tips.com/google-softwaretips/8-google-search-tips-always-find-what-youre-looking-for/&quot;&gt;https://www.online-tech-tips.com/google-softwaretips/8-google-search-tips-always-find-what-youre-looking-for/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Using Zoom with a Screen Reader on the Eyes on Success podcast&lt;br /&gt;The latest episode features Heather Thomas, author of _Getting Started with Zoom Meetings: A guide for Jaws, NVDA, and iPhone VoiceOver 
users_.    &lt;br /&gt;&lt;a href=&quot;http://eyesonsuccess.net&quot;&gt;http://eyesonsuccess.net&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;1Password: Mosen at Large provides a review and demonstration of this password manager&lt;br /&gt;&lt;a href=&quot;https://mosenatlarge.pinecast.co/episode/a69030679a8d434b/review-and-demonstration-of-the-1password-password-manager&quot;&gt;https://mosenatlarge.pinecast.co/episode/a69030679a8d434b/review-and-demonstration-of-the-1password-password-manager&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Facebook Mobile with Chrome and Edge: the mbasic interface is actually the older m.facebook.com interface &lt;br /&gt;&lt;a href=&quot;https://mbasic.facebook.com/&quot;&gt;https://mbasic.facebook.com/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Jeopardy Makes Online Test Accessible to the Blind&lt;br /&gt;&lt;a href=&quot;https://www.nfb.org/about-us/press-room/jeopardy-makes-online-test-accessible-blind&quot;&gt;https://www.nfb.org/about-us/press-room/jeopardy-makes-online-test-accessible-blind&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;RNIB Creates Accessible Pregnancy Test Prototype to Raise Awareness of Accessible Design&lt;br /&gt;In 2020 there is still no fully accessible pregnancy test, meaning that blind and partially sighted women must ask for help to read their tests, and are therefore never the first to know what is happening to their own bodies. The Royal National Institute of Blind People (RNIB) has unveiled the first accessible pregnancy test prototype that allows women with sight loss to know their results privately for the first time. The groundbreaking test allows the user to feel their results, producing raised nodules to indicate a positive result. 
Learn more at:&lt;br /&gt;&lt;a href=&quot;https://www.dexigner.com/news/33351&quot;&gt;https://www.dexigner.com/news/33351&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;A Day in The Connected Digital Life on Tek Talk will discuss Apple products.&lt;br /&gt;Tuesday, October 27, 2020 at 00:00 GMT&lt;br /&gt;&lt;a href=&quot;https://zoom.us/j/839935813&quot;&gt;https://zoom.us/j/839935813&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Most of these links are culled from the Top Tech Tidbits weekly newsletter: you can view the entire newsletter or subscribe at&lt;br /&gt;&lt;a href=&quot;https://toptechtidbits.com/&quot;&gt;https://toptechtidbits.com/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.dreamwidth.org/tools/commentcount?user=kestrell&amp;ditemid=363519&quot; width=&quot;30&quot; height=&quot;12&quot; alt=&quot;comment count unavailable&quot; style=&quot;vertical-align: middle;&quot;/&gt; comments</description>
  <comments>https://kestrell.dreamwidth.org/363519.html</comments>
  <category>blind</category>
  <category>screen readers</category>
  <category>image recognition</category>
  <category>accessible zoom</category>
  <category>accessibility</category>
  <lj:security>public</lj:security>
  <lj:reply-count>1</lj:reply-count>
</item>
</channel>
</rss>
