This time the protagonist is still AI. Artificial intelligence has long been woven into Google's products and software systems, but this year Google explained AI and its product features from the perspective of understanding people and human nature, bringing AI to the crossroads of technology and the humanities.
AI: Close to humanity
Google CEO Sundar Pichai opened with two cartoon icons to illustrate Google's pursuit of organizing information. AI was the first topic mentioned, and Pichai made it concrete with the medical field: using retinal scans to diagnose disease, he explained what machine learning can do so the audience could grasp AI's value to people. "Healthcare is an important field that AI can change."
AI helps assess eye conditions
In the past, when Google talked about code at I/O, only the coders got excited. This time things improved: Pichai made every technology point concrete, such as Google Photos finding a wedding photo of a friend and asking whether you want to share it with her, or prompting the user to brighten an underexposed photo.
Example AI application: the phone recognizes a friend in a photo and offers to share it with her
In addition, Google announced its own artificial intelligence chip, TPU 3.0. Compared with the previous generation it uses liquid cooling, and its computing power is eight times that of last year's TPU 2.0, letting it solve more problems faster.
Water-cooled smart chip
Google Assistant: More like humans
Google Assistant, Google's AI smart assistant, was the second major product introduced at this developer conference. This update brings six new voices, and they sound more like a human being and much less like a robot.
Pichai said the Assistant's tone and rhythm are now closer to a human's. Google recorded human voices in the studio, captured their qualities, and fed those analyzed features into Google Assistant.
Google Assistant's voice is more natural
Google announced that two years after launch, its AI voice assistant supports 30 languages across 80 countries. Detail changes include: 1) continued conversation, so instead of shouting "Hey Google" before every question, you wake the Assistant once and it answers a series of follow-ups; 2) home scenarios, for example, when a child talks to Google Assistant, if he is polite and says "please", the Assistant gives a friendlier answer. Google hopes these barely noticeable details show users how thoughtful its AI assistant is.
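The continued-conversation behavior described above can be sketched as a small state machine: the wake word opens a short follow-up window, and any speech inside that window is treated as another question. This is a toy illustration only, with made-up names and timings, not Google's actual implementation.

```python
# Toy sketch of "continued conversation": the wake word is required only
# once; follow-up questions are accepted until a timeout expires.
# All names and timings here are illustrative, not Google's real system.

WAKE_WORD = "hey google"
FOLLOW_UP_WINDOW = 8.0  # seconds the mic stays "open" after an answer

class ToyAssistant:
    def __init__(self):
        self.awake_until = 0.0

    def hear(self, utterance, now):
        text = utterance.lower().strip()
        if text.startswith(WAKE_WORD):
            self.awake_until = now + FOLLOW_UP_WINDOW
            question = text[len(WAKE_WORD):].strip(" ,")
            return self.answer(question) if question else "Yes?"
        if now <= self.awake_until:          # still in the follow-up window
            self.awake_until = now + FOLLOW_UP_WINDOW
            return self.answer(text)
        return None                          # asleep: ignore ambient speech

    def answer(self, question):
        # Politeness detail from the keynote: reward saying "please".
        prefix = "Of course! " if "please" in question else ""
        return prefix + f"Here is what I found about '{question}'."

a = ToyAssistant()
first = a.hear("Hey Google, what's the weather", now=0.0)   # wakes and answers
follow = a.hear("and tomorrow, please", now=3.0)            # no wake word needed
ignored = a.hear("unrelated chatter", now=60.0)             # window closed: None
```

The key design point is that each answer refreshes the window, so a natural back-and-forth never needs the wake word repeated.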
Google Assistant is not only a voice assistant; it has also been optimized for devices with screens
Beyond sound, Google Assistant has also improved its visual presentation. On stage, a "smart display" from the Chinese company Lenovo showed the Assistant's visual feedback, as did voice search results on a phone. When driving with voice control, the Assistant is also integrated into Google Maps, combining a human-like voice with a visual display for better navigation.
Pichai demonstrated two extremely sci-fi scenes: you need a haircut, so you ask Google Assistant to book one. It calls the barbershop itself and answers the receptionist's questions, such as the desired time and the customer's name. Or, when booking a restaurant that has no open table, it can negotiate a different time. Throughout, Google Assistant, a machine, is talking to a human being, and both its voice and its answers sound human.
A machine and a human discuss a haircut appointment, an extremely sci-fi scene
Booking haircuts and restaurants is thoroughly mundane, yet it is precisely this simple scene that the smart assistant in your phone can now complete. Google staged this bit of science fiction at its own conference. In real life it surely won't go as smoothly as the stage demo, but Google Assistant has clearly shown the current direction of AI assistants.
Google News: AI Makes News
Google itself is not a media company and does not produce news, but Google News uses AI to shape how people read it. The first feature can be roughly understood as a high-end version of Toutiao ("Today's Headlines"), and part of it works like associative reading: if you read an article about the meeting of the North and South Korean leaders, it will push other related news.
Learn how users read news and automatically generate a topic
The second feature, 360-degree coverage, lets users understand the ins and outs of a story. Take a power outage somewhere: there is the event itself, what preceded and followed it, what people need to do, and related events. Google wants users to understand the whole thing, drawing a context diagram in the background, which is essentially the machine building a news topic page.
Related reading recommendation
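The "machine builds a topic" idea above can be illustrated with a very small sketch: cluster headlines that share enough keywords into one topic. Real news systems use far richer signals (entities, time, sources); everything below is an illustration, not Google's algorithm.

```python
# Toy sketch of grouping related articles into one "topic", in the spirit
# of Google News's 360-degree coverage. Illustrative only.

STOPWORDS = {"the", "a", "an", "in", "of", "to", "and", "on", "for", "at"}

def keywords(headline):
    words = (w.strip(",.!?") for w in headline.lower().split())
    return {w for w in words if w and w not in STOPWORDS}

def build_topics(headlines, min_overlap=2):
    topics = []  # each topic: {"keywords": set, "articles": list}
    for h in headlines:
        kw = keywords(h)
        for topic in topics:
            if len(kw & topic["keywords"]) >= min_overlap:
                topic["articles"].append(h)
                topic["keywords"] |= kw   # the topic's vocabulary grows
                break
        else:
            topics.append({"keywords": set(kw), "articles": [h]})
    return topics

headlines = [
    "Korean leaders meet at historic summit",
    "Summit of Korean leaders raises hopes for peace",
    "Power outage hits downtown, thousands affected",
]
topics = build_topics(headlines)  # two topics: the summit, the outage
```

Grouping greedily by keyword overlap is crude, but it captures the essence: related coverage collapses into one entry with many articles behind it.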
The third feature lets users subscribe to the outlets they want. Google News is currently available in 127 countries and regions, on both Android and iOS.
Android P: help users get rid of mobile addiction
Ten years ago Google released the G1, the first phone running Android, which changed smartphones. Today's Android incorporates more intelligence: it learns usage habits to save power; it sets brightness automatically according to the user's own preferences, a step up for phone brightness that Google says has cut manual adjustments by 60 percent; and app prediction likewise learns habits and usage scenarios to give users useful app recommendations.
Learn user habits to save power
On the interface side, Android has become simpler and gained more gestures. Here Google seems to borrow from the iPhone X (or webOS): the Pixel demo phone has a small bar at the bottom, and sliding it returns to the home screen or switches apps, much like the iPhone X.
Bottom slider like iPhone X
Google hopes Android can help people understand what they do with their phones. Each app can now count usage time, so you can see where your screen time goes; at bedtime the phone reminds you and turns the screen grayscale, helping users curb phone addiction.
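The per-app usage counting described above reduces to summing the time between foreground and background events. The sketch below shows the idea on made-up event data; on real devices Android exposes this through its usage-stats APIs, not code like this.

```python
# Toy sketch of the screen-time "dashboard": total per-app foreground
# time computed from (timestamp, app, event) records. Illustrative only.
from collections import defaultdict

def screen_time(events):
    """events: time-ordered list of (timestamp_seconds, app, 'fg'|'bg')."""
    totals = defaultdict(float)
    opened = {}  # app -> time it last came to the foreground
    for ts, app, kind in events:
        if kind == "fg":
            opened[app] = ts
        elif kind == "bg" and app in opened:
            totals[app] += ts - opened.pop(app)
    return dict(totals)

events = [
    (0,   "maps", "fg"), (120, "maps", "bg"),
    (130, "news", "fg"), (430, "news", "bg"),
    (440, "maps", "fg"), (500, "maps", "bg"),
]
usage = screen_time(events)  # seconds per app across all sessions
```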
The Android P Beta also went live the same day. Most of the compatible phone makers Google named are Chinese: OnePlus, Xiaomi, Oppo and Vivo stand alongside international makers such as Sony and Nokia as the first batch of supported vendors.
Most of the first batch of supported vendors come from China
Google Maps+Google Lens: Interest recommendation
Google has added richer location-related features to Maps, using AI and satellite imagery to learn about new places and businesses, and to recommend the best route and the closest restaurant for you. In fact, similar features already exist in Amap (Gaode), Baidu Maps and others.
Map related recommendations are not new
Google Maps is also combined with the phone camera. If you don't know which way to go, open the camera: it captures the street scene and overlays map directions on the real view to tell you where to walk.
AR Reality Navigation
This feature is actually a fusion of Google Lens and Maps. Lens is an artificial intelligence application built on image recognition and OCR technology, here applied to a practical scene, the map. Lens itself has also been strengthened: for example, photograph a piece of paper and the words on it can be recognized and "pasted" from the paper into the phone.
From the reality "paste" to the electronic world
Driverless: Predict traffic conditions
In 2016 Google's parent company, Alphabet, spun its self-driving car project into a separate company, Waymo. This time the driverless segment was presented by Waymo CEO John Krafcik, who described the state of the art: Waymo's cars are already on the road in Phoenix, Arizona, and AI involvement has cut driverless errors by more than 100 times, making the cars safer.
Waymo driverless car
Waymo's speaker gave examples of what its neural networks have learned: predicting that people or cars will cross the road, run a red light, and so on, so the car can anticipate, for instance, a red-light runner entering the intersection. Waymo's fleet currently logs 25,000 miles of testing on public roads every day, across all kinds of conditions.
AI predicts behavior on the road
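The prediction idea can be illustrated with the simplest possible model: extrapolate another road user's recent positions and check whether the path enters a danger zone within the next few seconds. Waymo uses learned neural networks on rich sensor data; this constant-velocity sketch, with made-up coordinates, is only a way to see the concept.

```python
# Toy sketch of anticipating a road user's motion: constant-velocity
# extrapolation into a rectangular "conflict zone". Illustrative only,
# nothing like Waymo's actual learned models.

def will_enter_zone(track, zone, horizon=3.0, step=0.5):
    """track: recent [(t, x, y), ...] observations; zone: (xmin, xmax, ymin, ymax)."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # constant-velocity estimate
    xmin, xmax, ymin, ymax = zone
    t = step
    while t <= horizon:                        # look a few seconds ahead
        x, y = x1 + vx * t, y1 + vy * t
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True                        # predicted incursion: yield
        t += step
    return False

# A car approaching the intersection from the left at 10 m/s:
risky = will_enter_zone([(0.0, -30.0, 0.0), (1.0, -20.0, 0.0)],
                        zone=(0.0, 20.0, -5.0, 5.0))
# A car already past the intersection, moving away:
safe = will_enter_zone([(0.0, 30.0, 0.0), (1.0, 31.0, 0.0)],
                       zone=(0.0, 20.0, -5.0, 5.0))
```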
Summary: Add Humanity to AI
This was a developer conference in the truest sense: no hardware launch, and seemingly no revolutionary system update. Nor is this the first year that has been true. With AI involved, every product carries machine-learning properties and updates in subtle ways. By contrast, UI changes are surface-level and offer fewer surprises, while the evolution of intelligence happens every second, quietly adapting to the user's habits.
What is special about this keynote is that it never stopped mentioning AI's help to human beings and AI's advancing humanity (teaching children to say "please", for example). Building a "better world" is the dream, and the stance, of Silicon Valley's technology companies.