
How to see the world with a smartphone

Speed read
  • Aipoly Vision uses image recognition to help people with visual difficulties see the world
  • Neural networks make image recognition possible for the app
  • Complex scene description to come in future versions

When light enters our eyes, it travels through the cornea to the retina at the back of the eye. From there, the light is sent to the brain through the optic nerve as electrical impulses, which the brain decodes to give us a mental image of what lies before us.

For some people, however, this process can become faulty. According to the American Optometric Association, the shape of the eyes can change, the retina can deteriorate over time, and the optic nerve can sustain damage. 

Now, one smartphone app has the potential to make a real difference.

<strong>Sight for sore eyes.</strong> Smartphone application Aipoly Vision is a useful aid for the visually impaired. Simply point the phone at an object, and a voice identifies what it is. Courtesy Aipoly.

Aipoly Vision is an iOS app that uses image recognition technology to identify objects. Users simply point their phone, and a voice — reminiscent of Siri from Apple or HAL from 2001: A Space Odyssey — tells them what they are looking at.

The app functions without a connection to the internet, giving users greater flexibility in case they are in areas with limited or non-existent internet connectivity.
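The point-and-identify loop described above can be sketched in outline. Everything here is hypothetical — the `classify` stub, its labels, and the confidence threshold stand in for the app's on-device neural network, which the article does not detail:

```python
def classify(frame):
    # Stand-in for an on-device neural-network classifier.
    # A real app would run the camera frame through a trained model;
    # here we just return canned (label, confidence) pairs.
    fake_predictions = {
        "frame1": ("coffee mug", 0.91),
        "frame2": ("coffee mug", 0.88),
        "frame3": ("banana", 0.95),
        "frame4": ("unknown", 0.30),
    }
    return fake_predictions[frame]

def announce_objects(frames, threshold=0.6):
    """Collect spoken announcements, speaking only when a confident,
    previously unannounced label appears (to avoid repeating itself)."""
    announcements = []
    last_label = None
    for frame in frames:
        label, confidence = classify(frame)
        if confidence >= threshold and label != last_label:
            announcements.append(f"I see a {label}")
            last_label = label
    return announcements

print(announce_objects(["frame1", "frame2", "frame3", "frame4"]))
```

Because the classifier runs locally, a loop like this needs no network round trip, which is what lets the app keep working where connectivity is poor.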

As Aipoly co-founder Marita Cheng explains, third-party apps already exist that work via real-time human description through video calls. Unlike Aipoly, however, these aids are time-consuming and offer users little in the way of privacy.

Using the app during a trip to the grocery store will be an entirely different experience. Aipoly identifies thousands of different objects, and while it cannot yet describe details like how the harsh fluorescent lights of the supermarket gleam off the clear plastic of the Coca-Cola bottle, the app will dramatically improve a shopping experience by saving time and effort. 

For future versions of the app, researchers are working on including facial recognition and the ability to contextualize complex scenes with multiple objects in them.

There's a long way yet to go, but each added layer of information means another step toward independence for those with limited sight.


Copyright © 2017 Science Node ™  |  Privacy Notice  |  Sitemap

