
Google Lens elevates visual search to a whole new level of sophistication

Google Lens can now understand and answer questions about images, thanks to a new, more advanced artificial intelligence system.

As reported by Mashable, Google has unveiled what appears to be a genuinely useful and almost frighteningly advanced new way to search with images. According to the article:

Google Lens already allows you to search using an image as a starting point. Consider this: If you take a picture of an elephant, you will almost certainly receive Google Lens search results for the term "elephant."

Now, however, you can tap on a photograph you've taken, or one already in your library, and ask a question about it.

Consider the elephant: simply tap the photo to reveal the "Add Questions" option, and a text box appears where you can ask Google for more information about that specific image, such as "What kind of elephant is this?" or "How many of these elephants are left in the world?"

That entails so many layers of artificial intelligence processing that it's difficult to comprehend.

It has understood what is depicted in the picture, understood your question, and understood the relationship between the two. As well as answering your questions, it (ostensibly) saves you time.

All of this is made possible by a new, more advanced artificial intelligence system called Multitask Unified Model (or MUM), which was announced in May and is now being used to power Search.

Since MUM's introduction, Google has been gradually rolling out applications for the technology, which can process queries in more complex ways and deliver results that Google believes will be more relevant or instructive than before.

The switch to Lens is one of the most striking applications yet, and the example queries Google provides demonstrate just how capable MUM is. You can show Google Lens a shirt pattern and ask whether the same pattern is available in socks: and presto, you've got the exact product listing you were looking for.

Alternatively, consider the following example of someone taking a photograph of a broken bicycle component. Showing Google Lens a picture of the broken part and asking "how to fix" results in a response that includes both the exact broken part and instructions on how to repair it.

These types of questions would be difficult to answer without the visual component. A text search for socks in a floral pattern, for example, would most likely return non-specific results.

And in the case of the bike, you'd first have to figure out what the broken part is called before you could even begin to search for how to fix it, let alone diagnose the problem.

The machines are becoming more intelligent, which is a potentially frightening prospect for the world, but is excellent news for those seeking answers to difficult questions.
