Google I/O: AI, more than ever.

 

Google I/O 2018 opened by confirming, for anyone who still doubted it after the 2017 edition, the direction Google has taken with its range of products. That direction can be summed up in a simple acronym: AI.

Artificial Intelligence, most often through Machine Learning, is now everywhere, pushing the limits of Google's applications.

Google Assistant, Google Photos, Google Maps: all these everyday tools are receiving updates we could only have dreamed of without the power of Machine Learning.

That power has a cost, though, and Google is tackling it with a new generation of TPU chips to equip its data centers and support the load that all these new features will induce.

The TPU 3.0 is said to be about eight times more powerful than its predecessor and would require liquid cooling to deliver its full potential.

 

A more human assistant?

 

Among the notable innovations, it was impossible to miss the new Google Assistant.

No more robotic voices: now powered by WaveNet, the assistant's voice comes very close to a human one.

Thanks to a neural network that generates strikingly realistic audio waveforms, interacting with the tool feels much more like a conversation, whereas the current version remains very limited in how it connects with the speaker.

To top it all off, if you were tired of starting every sentence with "Hey Google" or "Ok Google", rejoice: "Continued Conversation" steps in to make talking with the assistant easier, detecting several chained commands without having to repeat that annoying keyword.

To finish with the assistant, Google introduced a new feature, Google Duplex, that lets it place a phone call in the background to book simple appointments for us. The demo shown on stage was very convincing; it remains to be seen whether it will perform as well at scale.

 

Not just a Camera.

 

Google is also very proud of its Photos application, given AI's contribution to its new features.

From the very impressive colorization of black-and-white photos to the recognition of faces and silhouettes, the application offers ever more services, from the essential to the nice-to-have.

Smart Text, for example, won over the audience during the presentation, and it convinced us here at Tapptic too: from a photo of text, the application can recover the characters digitally, much as a copy and paste from the real world to the digital one would.

Since the demo was performed on a typed page, it is hard to judge its performance on handwritten and/or cursive text.

Google Lens, integrated into the Camera and Assistant applications on some Android devices, performs object recognition and puts that data to use in different ways: information about a book from its cover, animal breeds, the origin of a plant, the price of a piece of furniture, the size of a monument. If it exists and your camera can see it, it can be explained and analyzed.

 

The VPS goes out for some fresh air.

 

What would Google be without Maps? We probably won't find out anytime soon: shortly after the mini-scandal over the new pricing plan for its API, Google needed to polish its image quickly.

What could be better for that than reaffirming its leadership in the field of modern cartography and Street View?

Maps is now boosted with AI hormones and uses its VPS (Visual Positioning System) outdoors to complement GPS data for orientation. Lost? Can't tell which way is west? Point your camera at the nearest building and Maps does the rest.

This feature was also met with great enthusiasm from the audience, and it is my second favorite of the keynote after the Assistant.

With an augmented reality overlay, you can reach your destination by following directional arrows while gleaning information along the way: shops and businesses you can interact with, much as you would on their website, thanks to a small expandable card attached to their storefront.

Finally, we note the addition of an extra tab called "For you", which will contain contextual, personalized information to make Maps a real companion on and off the road.

 

P for Popcorn?

 

A brief sneak peek at Android P was also presented, with relatively few major changes, something viewers blame Google for every year, seemingly without realizing that an exhaustive summary of I/O cannot fit into an hour-and-a-half keynote.

The emphasis was placed on three points: intelligence, simplicity and digital well-being.

On the intelligence front, new tools help save our batteries, devoured ever faster by increasingly powerful processors and GPUs. AI also appears on the home screen with App Actions, which contextualize the screen based on your recent interactions.

Simplicity is embodied by the disappearance of the recent-applications button. No doubt the transition to the new gestures will take longer for some, who have been used to that button for years.

From now on, a first swipe up takes you to your running applications, and a second brings up the familiar screen listing all the applications on your device, available from any screen, even from within a third-party application.

For digital well-being, Google emphasizes new ways to forget your phone and manage the time you spend on it. Some ideas are good and, I think, destined to succeed, such as Shush, which disables all application and notification sounds and vibrations when you flip the device face down. Of course, you can still define contacts who are allowed to override the setting and reach you.

Wind Down can also encourage addicted users to put down their phones and sleep, thanks to an improved "Do Not Disturb" mode that desaturates the screen to shades of grey, which are less stimulating for the brain.

Finally, the feature I am most doubtful about is a new dashboard providing information about phone usage: which applications were used today and for how long, letting users devise an action plan to reclaim valuable minutes of their lives each day.

In practice, I am not convinced, but I hope I’ll be proven wrong.

 

AI to the rescue of the SDC.

 

And finally, we had a look at Waymo's self-driving cars, formerly the Google Self-Driving Car Project.

Thanks to Machine Learning, we were shown the breakthroughs made in detection and prediction, which are clearly the biggest challenges for autonomous cars.

Filtering out noise like Canada's big snowflakes, recognizing a human in the most unlikely Halloween costume, detecting and reacting to an abnormally fast car at an intersection: these are just some of the challenges Waymo engineers face in their quest to find the best model to make roads safer for everyone.

If you're interested, head to Phoenix, Arizona, where you'll soon be able to use Waymo pickup points to get where you want to go, just as if you had called a cab.

So ends this summary of the 2018 keynote. If some of these topics interest you, check out the full I/O program, or, if you are reading this article after the event, visit Google's official YouTube channel to find all the sessions.

 

Cédric Goffoy

Android Developer @ Tapptic