The last year has been one of growth and transition for Google: Android and Chrome have matured and converged, while Assistant, driven by advancements in machine learning and AI, has taken center stage in Google’s plans.
At the same time, Google has had its feet held to the fire for surfacing fake news in its search carousel and for admitting it hasn’t done enough to limit the spread of extremist videos on YouTube.
At Google I/O 2018, the company addressed all of these topics, and announced many new tools for developers to make their apps and services better.
Here are all the major announcements from Google I/O 2018!
Machine learning all up in your battery, launcher, and bed
With Android P, Google’s highlighting three main areas to tackle – intelligence, simplicity, and digital wellbeing.
To help keep your phone running as long as possible between charges, Google’s making battery endurance a big focus of Android P. Google partnered with DeepMind to create a new feature called Adaptive Battery, which uses on-device machine learning to study your usage habits and direct battery power only to the applications Android P knows you’re using – not ones that are needlessly running in the background. Background processes are also moved to low-power cores.
Machine learning is also being used for another new feature called Adaptive Brightness. Android P will learn how you manually adjust your brightness based on the environment around you, and Google notes that testers of the feature ended up changing their brightness manually far less often.
Moving on to the launcher, Android P will now showcase suggested “app actions.” Under the suggested apps at the very top, you’ll see recommended actions such as starting a run in Strava or calling a specific contact. App actions also surface in Google Search: if you search for movie tickets, you may see an action for ordering them within the Fandango app.
It was rumored earlier in the year that Android P would introduce a gesture-based navigation system, and that’s exactly what we’re getting. Swiping up from the bottom of your screen takes you to your Recent Apps page, and this is now presented in a left-right carousel showing apps you’ve recently opened or apps Android thinks you want to open. If you swipe up again, you’ll be taken to your app drawer.
Other goodies include better screenshot management, a button to rotate your screen when you turn your phone with auto-rotate disabled, volume controls that now appear alongside the volume keys and default to media volume, and a new Do Not Disturb gesture called Shush that enables DND when you place your phone face down.
The Android P beta is available today for Pixel phones, as well as the Nokia 7 Plus, OnePlus 6, Xiaomi Mi Mix 2S, Essential Phone, Sony Xperia XZ2, Oppo R15 Pro, and Vivo X21.
Lastly, continuing the digital wellbeing theme, Google’s adding a new Wind Down mode to Android P. After you tell the Assistant what time you’d like to go to bed, your screen will switch to grayscale and Do Not Disturb will turn on once your bedtime rolls around.
Let’s have a chat
Google made a lot of strides with Google Assistant over the course of the last 12 months, and at I/O the company announced a slew of new features that are set to make their way to the virtual assistant later this year. Assistant usage has skyrocketed in emerging markets like India, and Google has mentioned that it is turning to machine learning to better analyze and recognize local accents or dialects.
Assistant is used on over 500 million devices and will support over 30 languages in 80 countries by the end of 2018. Assistant is also picking up six new voices, with John Legend being the notable addition. Google is once again turning to AI to build a complete voice model instead of recording every possible sound.
A new feature coming to Assistant is Continued Conversation, which lets you ask follow-up queries without having to say the “Hey Google” wake word every time. Assistant will also be able to carry out multiple actions from a single command. Finally, Google is rolling out a “Pretty Please” mode for kids, which offers positive reinforcement when they make requests politely. The feature will be going live later this year.
Assistant will also start serving full-screen cards for visual responses and will make it easier to control smart home appliances. You’ll even be able to place orders from Starbucks, Dunkin’ Donuts, and more directly through Assistant.
The new features will start rolling out on Android starting later this summer and will be available on iOS later in the year.
The computer said “uhm”
Every I/O is home to at least one announcement that causes people’s jaws to drop, and this year, that announcement was Google Duplex.
With Google Duplex, you can have the Google Assistant make a phone call and actually talk to another human at a restaurant, hair salon, or another business to make a reservation/appointment for you.
Google says it’s still working on Duplex to make sure it’s just right before pushing it out, and when it does roll out over the coming weeks, it’ll be a gradual release as an experimental feature.
Even with that said, the demo Google showed off is straight-up bananas. The Assistant speaks naturally to the person on the other end, even saying things like “um” and “uh” when responding.
Google Photos is getting new features to make sharing easier. Suggested Sharing helps you find the best pictures of your friends and share them: machine learning recognizes the people in your photos and, based on your own sharing patterns, offers to share those shots with them.
The new Shared Libraries feature helps automate sharing photos of specific people, things, or places. Shared Libraries can notify recipients of new photos and automatically save them to their personal libraries — no more worrying about whose phone has which photos. Suggested Sharing and Shared Libraries will be rolling out on iOS, Android, and the web in the coming weeks.
In addition, Google Photos is using AI to further enhance photos. In the next few months, Photos will be able to recognize shots of documents and convert them to PDF files. It will also suggest automatic brightness enhancements, colorize old black-and-white photos, and even recognize faces and selectively desaturate the background to make subjects pop.
See the world
Google Lens is currently available in Photos and Assistant, but it’ll soon be integrated directly into your phone’s camera app. In addition to Google’s Pixel phones, it’ll also be added to the camera apps on devices from LG, Motorola, Xiaomi, Nokia, Sony, TCL, OnePlus, Asus, BQ, and Transsion Holdings.
After opening your camera app, point it at an object and Google Lens will pop up with information about what it sees.
Lens is also getting smart text selection, allowing it to see words through your phone’s camera and then copy and paste them as regular text. Additionally, you’ll be able to point it at a listing on a restaurant menu and get a visual of that dish.
Lastly, Google notes that it’s working to take Lens beyond offering basic information about what it sees. In one example, the company said that you’ll eventually be able to point Lens at a concert poster and then automatically see music videos for that band.
These new features for Lens will be rolling out “in the next few weeks.”
Trusted sources and context
Google News is getting a major makeover, with the app getting a Material Design refresh and a host of new features. Google says it “reimagined” its news product, leveraging AI to surface stories from quality sources.
The Google News Initiative has committed $300 million to products and programs that help the news industry, and the company says the goal of the redesign is to deliver deeper insight and a fuller perspective on the topics you care about. A briefing section at the top now surfaces five stories for each topic, and you’ll be able to tailor both the topics and the publishers you’re interested in.
Images and videos now show up inline, and Google is putting an emphasis on local news by building a contextual map of related stories. There’s also a new visual format called Newscasts that brings together everything from trailers and tweets to headlines, giving you a visual preview of a subject you care about so you can dive in for more.
You’ll also be able to view a timeline of key moments in a story, and Google News will surface relevant tweets, fact checks, videos, and opinion pieces.
Newsstand puts publishers — newspapers and magazines — front and center, and you’ll be able to follow and subscribe to your favorite publications from directly within News. The Subscribe with Google feature will be rolling out in the coming weeks.
The redesigned Google News will be rolling out on Android, iOS, and the web in 127 countries starting later today, and everyone will be able to access the new features next week.
Understanding how much you use your phone
As great as our smartphones are, it can be incredibly easy to get lost in our screens and lose touch with the world around us. Google’s doing its part to help people have a better “digital wellbeing” with Android Dashboard.
With Android Dashboard, Google will show you which apps you’re using throughout the day, how often you use them, your total screen time, how many times you’ve unlocked your phone, and more.
Additionally, apps like YouTube will contribute, too, suggesting users take a break when it detects they’ve been binge-watching videos for too long.
Dashboard data will also be available on a per-app basis. YouTube can show you how many minutes/hours you’ve spent watching videos on a certain day and Gmail can indicate how much time you spend in it on a weekly or daily basis.
Google will also allow you to set App Timers. For example, you can choose to set an App Timer on Twitter for 1 hour. You’ll get a subtle reminder once you’re nearing an hour of use of Twitter, and once that hour is up, the app icon will be grayed out and you won’t be able to access it for the rest of the day.
Visual Positioning System
Google Maps is gaining personalized recommendations to suggest locations like trending restaurants based on your search history and interests. Soon, you’ll be able to send your friends a shortlist of restaurant options to put to a vote and quickly decide where to eat. Once you’ve decided, you can place an order or make a reservation with just one click.
More impressively, Google Maps will soon integrate directly with your camera to overlay key information like navigation directions and highlight points of interest like restaurants and businesses.
The Visual Positioning System makes use of Google’s Street View database to precisely position the user for the most accurate results, meaning you should be able to reliably make use of these features almost anywhere. Just point your camera at a storefront to quickly see information like business hours and phone numbers.
Coming in July
Back at CES in January, Google gave us our first look at Smart Displays – the company’s take on Amazon’s Echo Show and Echo Spot. At I/O, Google confirmed that the first Smart Displays from Lenovo, JBL, and LG will go on sale in July.
Since Smart Displays are powered by Google Assistant, you’ll be able to ask any of the questions you already ask Google Home or your phone. However, since you now have a large screen at your disposal, you’ll be able to use Smart Displays to watch live programming on YouTube TV, access the full YouTube app, and get video demonstrations for recipes thanks to a partnership with Tasty.
Smarter than ever
Google’s been making a big bet on AI for a few years now, and this year’s I/O proved to be no different. For 2018, Google’s focusing its AI efforts on healthcare and accessibility.
On the healthcare front, Google’s using AI to help doctors around the world detect and diagnose cardiovascular disease through retina scans. By scanning a patient’s eye, the AI can determine age, gender, whether a person smokes, and more, and use those signals to flag potential health risks.
Additionally, Google’s using AI to predict medical events by scanning over 100,000 data points for each patient – giving doctors more time to act and respond than they’d otherwise have.
As for accessibility, machine learning is being used to create closed captions for multiple people speaking on screen at once, picking up on both audio and video cues in a clip. Gboard is also picking up support for Morse code, making it easier than ever for more people to communicate with friends and family.
All of this is powered in part by Google’s newest Tensor Processing Unit, TPU 3.0. The new TPUs are so powerful that Google had to build liquid cooling into its data centers, and each TPU pod delivers more than 100 petaflops of processing power.
Predicting your next full sentence
Gmail got a revamped user interface earlier this month, and Google is now introducing Smart Compose. The feature leverages machine learning to predict full sentences based on just a few letters or words.
For instance, if you’re typing out your address, Smart Compose will recognize the context and autofill your location details in the sentence. The feature will roll out to all users later this month.
A self-driving future
At I/O, Alphabet’s autonomous car company Waymo shared key statistics on the advances made over the course of the last year. Waymo’s fleet of cars has now driven more than 6 million miles on public roads – logging more miles each day than the average American drives in a year.
Waymo says it is the only company in the world with a fleet of fully self-driving cars on public roads with no human safety driver; its Early Rider Program in Phoenix has been carrying riders over the course of the last year. Waymo will launch a driverless transportation service later this year, starting in Phoenix.
The service works much like ride-sharing services such as Uber or Lyft: open an app and select where you want to go. The only difference is that with Waymo, a driverless car will pick you up at your location.
Waymo’s deep learning algorithms have reduced its pedestrian detection error rate 100-fold, and the company is flexing its AI muscle to recognize pedestrians and predict the behavior of other vehicles on the road. As Waymo CEO John Krafcik summed up, “We’re not just building a better car. We’re building a better driver.”