In April 2016, Microsoft showed the world a special pair of glasses. They look plain, but they have a built-in camera and microphone. When the wearer touches them lightly, the built-in camera captures a picture of the scene ahead, and an AI system analyzes it and tells the user what it is, whether that is the surrounding environment, objects, or human facial expressions.
Like those magic glasses, Seeing AI is an application that uses AI to analyze nearby objects and directly "dictates" the words, barcodes, and images captured by the phone's camera to users, so that they can "see" the world around them. We had our first hands-on experience when the app was released. Now, two years later, on March 12, Microsoft has updated the app dramatically, taking it a step closer to the goal of helping users "see," and Geek Choice downloaded it for a first trial.
A richer experience
Compared with two years ago, the biggest change in Seeing AI is the further expansion of its features (though it still does not support Chinese). On top of the original support for person, text, barcode, and scene recognition, Seeing AI now adds color, handwriting, and light recognition, features that are not only practical for blind users but also easy to use.
The color function captures the colors appearing in the frame through the phone's rear camera and reads them out to the user directly by voice. Because this task is simpler than scene recognition, it is very fast. Color recognition also serves a different purpose from scene recognition: when a blind user is buying clothes or shoes, for example, this simple function is genuinely useful.
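The core idea behind a color announcer like this can be sketched as a nearest-named-color lookup. The minimal Python sketch below uses a small hypothetical palette of our own; Seeing AI's actual color set and matching logic are not public:

```python
# Sketch: map an RGB sample from the camera to the nearest entry in a
# small named palette. The palette is a made-up example for illustration.

PALETTE = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "gray": (128, 128, 128),
}

def name_color(rgb):
    """Return the palette name closest to `rgb` in Euclidean RGB distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(rgb, c))
    return min(PALETTE, key=lambda name: dist(PALETTE[name]))

print(name_color((250, 10, 5)))   # a reddish sample → "red"
```

A real app would sample the dominant color of the frame (for example, the average of a center region) before the lookup, and use a much richer palette.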
Handwriting recognition is close to the existing text recognition, but as a dedicated mode it is better at reading content such as a handwritten letter or a menu. Unfortunately, in our testing it could not recognize Chinese, but its accuracy on English text and Arabic numerals is already high enough for daily use.
Light is a special function. When the camera captures scenes of different brightness, the app expresses the current light intensity through different sounds (something like a buzzing tone). It is hard to imagine every use case, but for blind users it may well help in specific situations (such as checking whether the lights are off before going to sleep at night).
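The idea of the light mode can be sketched as mapping a frame's average brightness onto an audio pitch. The Python sketch below uses a synthetic grayscale frame and an assumed 200–2000 Hz range; Seeing AI's actual sound mapping is not documented here:

```python
# Sketch: map the mean brightness of a frame (grayscale pixels, 0-255)
# to a tone frequency in Hz. The 200-2000 Hz range is an assumption
# chosen for illustration only.

LOW_HZ, HIGH_HZ = 200.0, 2000.0

def brightness_to_pitch(pixels):
    """Linearly map the mean pixel value (0-255) to a frequency in Hz."""
    mean = sum(pixels) / len(pixels)
    return LOW_HZ + (HIGH_HZ - LOW_HZ) * mean / 255.0

dark_frame = [10] * 100      # a mostly dark scene -> low tone
bright_frame = [240] * 100   # a well-lit scene -> high tone
print(round(brightness_to_pitch(dark_frame)))
print(round(brightness_to_pitch(bright_frame)))
```

In practice the frequency would drive a tone generator on the device, so a darker room literally sounds lower than a bright one.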
Beyond these refinements to individual features, there is another highlight in this Seeing AI update: it finally supports browsing the phone's photo album. Blind users cannot see the world, but they can still take pictures with their phones or keep photos of good memories in their albums. The new Seeing AI can open each photo, describe its content to the user, and say when and where it was taken, which is a very thoughtful feature.
Technology Steps Out of the Darkness
Seeing AI is far from Microsoft's only attempt to use AI technology to help vulnerable groups.