If voice-based assistants are to become the primary way we interact with our devices, then Google has just upped the ante in the battle between its own voice search functionality and that of rival mobile assistants like Apple’s Siri and Microsoft’s Cortana. While Google’s app has historically done a better job of understanding its users and answering queries accurately, the company says today that its app has now gotten better at understanding the meaning behind users’ questions, as well.
That is, the app has been improved so that it better understands natural language and more complex questions.
The company first rolled out voice search in 2008, and later tied it into its Knowledge Graph in 2012, in order to answer users’ questions with factual information about people, places or things. At first, it was able to provide information on entities like “Barack Obama,” and later it could answer questions like “How old is Stan Lee?”, Google explains.
Its ability to answer questions improved again over time, as it learned to understand words in different contexts. For instance, if you asked what ingredients were in a screwdriver, Google knew you meant the drink, not the tool.
Now Google says the voice assistant baked into the Google app is able to understand the meaning of what you’re asking, so you’ll be able to ask more complex questions than in the past. To do so, Google says it breaks your query down into its component pieces in order to better understand the semantics of each piece.
That means you could ask Google a question like “Who was the U.S. President when the Angels won the World Series?” and it could respond, “George W. Bush.”
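To see why that question is harder than a simple fact lookup, note that it nests one query inside another: the app first has to resolve *when* the Angels won (2002), then use that answer to resolve *who* was President at that time. The sketch below is a toy illustration of that decomposition idea, not Google’s actual pipeline; the fact tables and function names are invented for the example.

```python
# Toy illustration of answering a nested question by splitting it into
# sub-queries against small hand-built fact tables. This is NOT Google's
# implementation -- just a sketch of the "break the query into pieces" idea.

# Hypothetical mini knowledge base (assumptions for illustration only).
WORLD_SERIES_WINNERS = {
    "Angels": 2002,
    "Red Sox": 2004,
}

US_PRESIDENTS = [  # (first_year_in_office, first_year_out, name)
    (1993, 2001, "Bill Clinton"),
    (2001, 2009, "George W. Bush"),
    (2009, 2017, "Barack Obama"),
]

def president_in_year(year):
    """Inner lookup: who was the sitting U.S. President in a given year?"""
    for start, end, name in US_PRESIDENTS:
        if start <= year < end:
            return name
    return None

def answer_nested_question(team):
    """Resolve the inner sub-query (when did the team win the World Series?),
    then feed that result into the outer sub-query (who was President then?)."""
    year = WORLD_SERIES_WINNERS[team]   # inner piece of the question
    return president_in_year(year)      # outer piece, using the inner answer

print(answer_nested_question("Angels"))  # -> George W. Bush
```

The point of the sketch is the data flow: the answer to one fragment of the question becomes the input to the next, which is what lets a system handle questions it has never seen verbatim.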
Source: TechCrunch - Sarah Perez