Search engines have the unenviable task of figuring out what people mean when they type something into the query box. People expect search engines like Google to understand what kind of information they want to see in search results, no matter how badly worded the query is. For the most part, Google is pretty good at this; users rarely need to ask a question more than twice to get the results they’re looking for. However, Google is about to update its algorithm with the BERT system, which should make it better at understanding natural language.
Many people have learned how to ask questions in Google to get the answer they need, but that way of asking a question isn’t the same way people talk. As voice commands become more prevalent, it becomes essential for Google’s algorithm to become better at understanding natural language.
As Google explained in a post announcing the BERT system, “At its core, Search is about understanding language. It’s our job to figure out what you’re searching for and surface helpful information from the web, no matter how you spell or combine the words in your query. While we’ve continued to improve our language understanding capabilities over the years, we sometimes still don’t quite get it right, particularly with complex or conversational queries. In fact, that’s one of the reasons why people often use ‘keyword-ese,’ typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question.”
While BERT may bring to mind a character from Sesame Street, it’s also the name of a powerful new system that Google has been working on for over a year. Back in November 2018, Google introduced and open-sourced a neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers, or BERT, for short. This technology enables anyone to train their own state-of-the-art question answering system.
What makes BERT unique is the way it considers all of the words in a query in a more comprehensive way. These “transformers” are models that process words in relation to all the other words in a sentence, rather than one-by-one in order. According to Google, “BERT models can, therefore, consider the full context of a word by looking at the words that come before and after it—particularly useful for understanding the intent behind search queries.”
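To get a feel for what that bidirectional context looks like in practice, the short sketch below uses the open-source Hugging Face transformers library and the publicly released bert-base-uncased checkpoint (neither of which is mentioned in Google’s announcement, so treat this as an illustrative assumption rather than Google’s own setup). It asks a BERT model to fill in a blanked-out word, and the words on both sides of the blank steer the prediction.

```python
# A minimal sketch: ask a pretrained BERT model to fill in a masked word.
# Because BERT reads the whole sentence at once, the context on both sides
# of the blank ("visit the ... to deposit my money") points it toward
# "bank" the institution rather than "bank" the riverside.
from transformers import pipeline

# Load a fill-in-the-blank pipeline backed by the open-sourced BERT weights.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

predictions = fill_mask("I need to visit the [MASK] to deposit my money.")

# Print the model's top guesses for the masked word, with their scores.
for p in predictions:
    print(f"{p['token_str']:>10}  score={p['score']:.3f}")
```

This is only a toy demonstration of the underlying technique, not the ranking system Google describes, but it shows the core idea: the model’s guess changes depending on the words that come both before and after the blank.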
Google is now taking this research and applying it to the results users see when they search. The BERT system will be applied to search rankings, and it will also affect the results shown in featured snippets.
While Google is positioning this change as an entirely good thing, SEO marketers should be ready for some disruption to their rankings and web traffic. In their announcement, Google stated, “In fact, when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English, and we’ll bring this to more languages and locales over time.”
This statement from Google implies that switching to BERT could affect roughly 10 percent of English search results. However, the change may not be as dramatic as some might fear. If the algorithm produces more natural results, the 10 percent of searches that are affected should return results that are more useful to everyone. In the screenshot Google provided with the announcement, the updated results removed a top listing that was useless to the searcher. In theory, everyone benefits: the searcher finds what they need, and the site that couldn’t help them loses only the visitors who would have landed on the page from Google Search and quickly bounced.
Users should also expect some hiccups along the way. BERT produces superior results in many situations, but in others, the results show room for improvement. Google gave the funny example that if you search for “what state is south of Nebraska,” BERT’s best guess is a community called “South Nebraska,” rather than Kansas.
For more recent news about updates to Google, read this article on Google’s international test of the Duplex system.