Google I/O 2018 is finally wrapping up after a week packed full of goodies for developers and consumers. The conference is typically where Google announces new hardware and software and reveals what it has been working on across new projects. Here are the best announcements to come out of Google I/O 2018.
If reading isn’t your thing, you can check out the video below, which condenses all the action into a 10-minute roundup.
By far one of the biggest highlights at Google I/O 2018 was, predictably, Android P, with Google elaborating on its vision for the next version of the software. First up are gestures, which appear to play a big part in Android P. While not enabled by default in the current build, Android P provides the option to reduce the navigation buttons from three to two. The button that takes the place of the home button has multiple functions, allowing you to swipe up and down to navigate between currently open apps, as well as to reach a drawer that suggests apps you might use via a feature Google is calling App Actions.
The newly designed launcher is built to take advantage of a gesture system akin to what was first seen in webOS. The multi-function button also acts as a spring-loaded quick toggle, allowing you to switch apps quickly and perform other functions by sliding it to the right.
Google has also created a new feature called Dashboard that provides insight into how you’re using your phone. Google seems to think the average user spends too much time on their smartphone, so the company has built in an information screen with a new app timer system as well as a new Shush feature for easily turning on “Do Not Disturb”. The idea behind Dashboard is being able to see how much time you spend in an individual app and limit its use based on that figure. There’s also a Wind Down mode that will turn your phone’s screen grayscale at night.
Alongside the tweaks Google has made in Android P, there was also a heavy focus on AI and machine learning within the new software. First up is what Google is calling Adaptive Battery, a collaboration between the Android team and Google’s DeepMind team.
“Adaptive Battery uses on-device machine learning to figure out which apps you’ll use in the next few hours and which you won’t use until later, if at all today,” Google’s Dave Burke explained at the firm’s I/O 2018 event.
The result of this machine learning, according to Google, is a 30% reduction in CPU wakeups, which significantly improves battery life.
Part of the AI solution for the adaptive approach is the implementation of Adaptive Brightness. Google identified that the current auto-brightness setting is a one-size-fits-all solution that doesn’t take into account individual preferences for screen brightness, leaving users to adjust it manually to their liking. Adaptive Brightness removes this need by learning your preference over time. It accounts for your environment and activities and learns from the adjustments you make to provide the optimal levels at various times throughout the day.
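To get a feel for the idea, the learning loop can be caricatured in a few lines. The sketch below is purely illustrative, not Google’s implementation: Android’s real Adaptive Brightness is an on-device machine-learning model, and the lux bands and learning rate here are invented. All it does is nudge a stored per-lighting-condition offset toward whatever brightness the user last chose.

```python
class BrightnessLearner:
    """Toy model: learn a per-lighting-condition brightness offset
    from the user's manual adjustments (illustrative only)."""

    def __init__(self, learning_rate: float = 0.3):
        self.learning_rate = learning_rate
        self.offsets: dict[int, float] = {}  # ambient-light band -> learned offset

    @staticmethod
    def _band(lux: float) -> int:
        # Coarse bucketing of ambient light into 100-lux bands (invented granularity)
        return int(lux // 100)

    def suggest(self, lux: float, base: float) -> float:
        """Auto-brightness suggestion: the stock curve plus the learned offset."""
        return base + self.offsets.get(self._band(lux), 0.0)

    def observe_adjustment(self, lux: float, base: float, chosen: float) -> None:
        """Move the stored offset toward the user's latest correction."""
        band = self._band(lux)
        old = self.offsets.get(band, 0.0)
        self.offsets[band] = old + self.learning_rate * ((chosen - base) - old)
```

Over time, suggestions for a given lighting condition converge on what the user keeps choosing, which is the behavior Google describes, minus all the actual modeling.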
App Standby Buckets is yet another way Google is using AI to improve battery life. The feature classifies apps based on how often you use them and limits access to system resources for the apps you don’t use often.
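As a rough sketch of how the classification works: the bucket names below are Android P’s real ones (active, working set, frequent, rare), but the fixed day thresholds are illustrative assumptions; the real system uses machine learning to predict usage rather than hard cut-offs.

```python
from datetime import datetime, timedelta

# Android P's documented bucket names; the day thresholds are
# illustrative guesses, not Google's actual adaptive heuristics.
BUCKETS = [
    (timedelta(days=1), "ACTIVE"),       # used today
    (timedelta(days=7), "WORKING_SET"),  # used in the last week
    (timedelta(days=30), "FREQUENT"),    # used in the last month
]

def standby_bucket(last_used: datetime, now: datetime) -> str:
    """Classify an app by how recently it was used."""
    age = now - last_used
    for threshold, bucket in BUCKETS:
        if age <= threshold:
            return bucket
    return "RARE"  # rarely used apps get the tightest resource limits
```

An app in the RARE bucket would see the strongest restrictions on jobs, alarms, and network access, which is where the battery savings come from.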
Typically, Android P beta software would be restricted to Google devices such as the Pixel 2 and Pixel 2 XL, but for the first time the company has opened up access to other devices. The list of supported phones is as follows: Google Pixel/XL, Google Pixel 2/XL, OnePlus 6, Essential PH-1, Xiaomi Mi Mix 2S, Sony Xperia XZ2, Nokia 7 Plus, Oppo R15 Pro, and Vivo X21/UD. Since OEMs often drag their heels with updates, this could mean the final version of Android P reaches these devices sooner than usual, likely thanks to Project Treble.
Google I/O 2018 saw a new feature for Google Lens announced that allows you to copy text from the real world by simply holding your phone over a physical document. It’s called smart text selection and works exactly the same as if you had copied the text from within another app. It works on recipes, gift cards, invoices, and much more. As shown in the image above, another feature allows text to be selected and a search to be performed on that object to see more information about it. Similarly, Style Match allows you to search for objects and clothing in a similar manner, linking to retailer listings and reviews. Finally, Google Lens gains a feature called proactive searching that will show anchor points of information in your viewfinder. Eventually, the company wants to add live results onto real-world objects in augmented reality, such as posters, with Google Lens ultimately coming natively to camera apps.
Even Google Maps got some new upgrades, the most significant being camera integration. You’ll now get an AR overlay and be able to look through your camera to see which way you should be going. There are also points of interest that all feed into a new Visual Positioning System. When the GPS signal isn’t strong enough to position you, VPS will use the camera to match your surroundings against Google’s data to identify where you are. Certainly, the visual prompt of directions through the camera is extremely useful in places where upcoming turns aren’t clear.
Perhaps one of the more controversial but certainly impressive showcases at Google I/O 2018 was the unveiling of Google Duplex, a new lifelike AI capable of replicating real conversations and interacting with humans. The important standout here is that Duplex doesn’t act like a bot at all. In front of the I/O audience, Google demonstrated the AI assistant making calls to local businesses without human intervention. It seamlessly made a dinner reservation and scheduled a haircut without the person on the other end of the line ever suspecting they weren’t talking to a human. Duplex is the technology that enables Google’s virtual assistant to conduct a natural conversation with a human over the phone. This inevitably raises a number of moral questions and has been met with a mixed reception, but needless to say it stunned the audience as it made small talk while completing simple real-world tasks.
Duplex marks the next step in natural-sounding artificial intelligence for fully autonomous conversations around real-world tasks. The feature hasn’t launched yet, but the company says it plans on testing Duplex publicly this summer.
Gmail Smart Compose
Also getting an update was Gmail, which saw a new Smart Compose feature that is now available to users. It is very similar to the autocomplete Google uses in its search boxes, attempting to predict what you’re about to write. Users can just hit tab to have the sentence or phrase autocomplete based on what Google thinks you’re trying to say. Smart Compose uses machine learning to analyze how people type in order to suggest words and cut down the time it takes to compose an email. During the demo on stage, it automatically generated a user’s mailing address, and Google CEO Sundar Pichai even joked that this new feature has caused him to write a lot more emails to employees.
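The tab-to-accept interaction itself can be illustrated with a toy prefix matcher. To be clear, this bears no resemblance to Gmail’s actual neural model; the phrase list and function are invented purely to show the mechanic of completing a sentence from what you’ve typed so far.

```python
from typing import Optional

# Invented phrase list for illustration; Gmail learns these patterns
# with machine learning rather than a fixed table.
COMMON_PHRASES = [
    "looking forward to seeing you",
    "looking forward to it",
    "thanks for your help",
]

def suggest_completion(typed: str) -> Optional[str]:
    """Return the remainder of the first phrase starting with `typed`,
    i.e. the grey text you would accept by pressing tab."""
    prefix = typed.lower().rstrip()
    for phrase in COMMON_PHRASES:
        if phrase.startswith(prefix) and len(phrase) > len(prefix):
            return phrase[len(prefix):]
    return None
```

Typing “looking forward to s” would surface “eeing you” as the ghosted suggestion, and pressing tab would splice it into the draft.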
The virtual assistant saw six new voices added, with both male and female options available and the ability to assign different voices to different Google accounts. The voice of pop artist John Legend will also be included as a Google Assistant option sometime later in 2018. Google is also rolling out Custom Routines, which will allow users to create personal routines that can be triggered using a custom phrase. In the pipeline is an update called Continued Conversation that, as the name suggests, will allow a natural conversation with Google Assistant to perform multiple tasks without the need to repeat the trigger word each time.
To introduce some manners to the future rulers of the world, Google has also introduced a feature called Pretty Please that will respond with lines like “thanks for saying please” or “what a nice way to ask me” if you say “please” after your request. It’s supposed to encourage children to be more polite, or perhaps to lull you into a false sense of security that robots are our friends.
Rounding up the big announcements to come out of Google I/O 2018 are the updates to Google Photos. The introduction of AI is set to make features such as Auto Awesome and face identification much quicker and easier than before. Not only that, but instead of having to make manual edits to your photos, Google Photos will suggest edits based on a photo’s composition and the information in the picture. The AI will also be able to recognize other people in a photo and automatically offer to share the picture with them.
Google has also now opened up an API for Photos that will allow third-party developers to hook into the service, hopefully making things more seamless by pairing some of the great features Google Photos offers with some of the great external tools available.
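For developers curious what that looks like, the new Photos Library API is a REST service. A minimal Python sketch of searching an album’s media items might look like the following; the `mediaItems:search` method is the documented endpoint, but the helper name here is an invention and the OAuth token plumbing is left out as an assumption about your setup.

```python
PHOTOS_API = "https://photoslibrary.googleapis.com/v1"

def build_search_body(album_id: str, page_size: int = 25) -> dict:
    """JSON request body for the Library API's mediaItems:search method."""
    return {"albumId": album_id, "pageSize": page_size}

# With a valid OAuth access token, the call would look roughly like this
# (untested sketch; needs the third-party `requests` package and user consent):
#
#   import requests
#   resp = requests.post(
#       f"{PHOTOS_API}/mediaItems:search",
#       headers={"Authorization": f"Bearer {token}"},
#       json=build_search_body("ALBUM_ID"),
#   )
#   items = resp.json().get("mediaItems", [])
```

Scoped OAuth consent is the key design point: third parties only ever see the photos a user explicitly shares with them, not the whole library.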
Google I/O 2018 saw a lot of new features that make some already excellent apps and services even better. The focus of the conference was clearly on AI and what Google is doing, by utilizing machine learning, to make everyday tasks and routines less of a burden on the user. There’s still a lot of work to do to understand how this translates into real-world use and the limitations of the technology, but it certainly is exciting to see where Google is taking things.
It’s great to see the Android P beta opened up to more devices, but it is disappointing not to see Samsung on the list. It appears Project Treble is helping address the age-old problem of fragmentation on Android. While it won’t be fixed overnight, things seem to be heading in the right direction.
Leave us a comment below and let us know what your favorite thing was to come out of Google I/O 2018.