Google Lens was unveiled at I/O last year as an image recognition tool that provides contextual suggestions for objects you scan with the camera. For instance, scanning a restaurant can surface its menu, pricing, reservations, and timings. It is Google’s experimental, camera-powered search engine that combines search, artificial intelligence, augmented reality, and computer vision. On May 8, Google announced the most significant update to Google Lens at this year’s developer conference. Apart from gaining new features, Google Lens will no longer stay buried inside Google Assistant and the Google Photos app; Google is now merging the feature into the native camera apps of some smartphones.
Aparna Chennapragada, Google's Vice President of Product for AR, VR, and Vision-based products, demoed three new features coming to Google Lens at the Google I/O 2018 keynote. First up is smart text selection, which connects the words you see with the answers and actions you need. This essentially means users can copy and paste text from the real world, such as recipes, gift card codes, or Wi-Fi passwords, directly to their smartphone. Google Lens, in turn, helps make sense of a page of words by showing relevant information and images.
For instance, if you are at a restaurant and don’t recognise the name of a particular dish, Lens will be able to show you an image to give you a better idea. Google is leveraging its years of language understanding in Search to help recognise the shapes of letters as well as the meaning and context of the words.
Next up is a discovery feature called style match, a Pinterest-like fashion search option. With the new feature, you can point the camera at an item of clothing, such as a shirt or a handbag, and Lens will search for items that match that piece’s style. Google is able to achieve this not only by running searches through millions of items, but also by understanding things like different textures, shapes, angles, and lighting conditions, Chennapragada explained at the event.
Lastly, Google Lens now works in real time. It can proactively surface information instantly and anchor it to the things you see, letting you browse the world around you just by pointing your camera. This is possible because of advances in machine learning, using both on-device intelligence and Cloud TPUs, allowing Lens to identify billions of words, phrases, places, and objects in a split second, says Google.
It can also display the results of what it finds on top of things like storefronts, street signs or concert posters. With Google Lens, “the camera is not just answering questions, but putting the answers right where the questions are,” noted Aparna Chennapragada.
As for integration into the native camera apps of smartphones, Chennapragada said that, starting in the next few weeks, Google Lens will be built into the camera app on Google Pixel phones as well as smartphones from other manufacturers such as LG, Motorola, Xiaomi, Sony Mobile, HMD Global/Nokia, Transsion, TCL, OnePlus, BQ, and Asus.
Also notable is that, ahead of announcing the new additions to Google Lens, Chennapragada demonstrated a clever way Google is using the camera and Google Maps together to help people better navigate their city with AR Mode. The Maps integration combines the camera, computer vision technology, and Google Maps with Street View.