Bill Slawski’s Hardest SEO Quiz You’ll Ever Take – 2018 Edition

Posted Dec 04, 2018


Welcome to the hardest SEO quiz you will ever take (the 2018 edition).

Good luck and no cheating! 🙂


A search involving a knowledge base entity, such as "Who said 'I love the smell of napalm in the morning'" returning a result of "Robert Duvall" is an example of:


Solution: This is an example of an attribute store search as described in the Google patent: "Identifying entities using search results"

Google has sitelinks, and Bing has deeplinks. Bing deeplinks differ from sitelinks by being:


Solution: Both deeplinks and sitelinks are personalized, but Bing has some deeplinks that are action-based and tied to different categories of activities. I describe this in more detail in my blog post titled "How Bing Deeplinks and Google Sitelinks Differ".

Personalized results in Google originate in the intersection of documents from:


Solution: Personalized search results come from both a biased document set and a high quality document set, as described in the continuation patent I wrote about in the post "Personalizing Search Results at Google".

A context vector is taken from a "count of uses of a word" under a specific meaning (context) from:


Solution: Context vectors are a count of a particular word as it is defined in a particular knowledge base. The term appears in a patent from Google; the knowledge bases used aren't identified in the patent, but Google has been shown to use many sites as knowledge bases, including Wikipedia, IMDB, Yahoo Finance and more. The Google patent that context vectors appear in is "User-context-based search engine".
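As a rough illustration of the idea (not the patent's actual method), here is a toy Python sketch that builds a "context vector" of word counts for each sense of an ambiguous term from labeled knowledge-base snippets, then uses those counts to guess the sense of a query. All snippets, labels, and function names are invented for illustration:

```python
from collections import Counter

# Toy corpus: knowledge-base style snippets, each labeled with the
# sense (context) of the ambiguous word "jaguar". Everything here
# is invented for illustration.
LABELED_SNIPPETS = [
    ("animal", "the jaguar is a large cat native to the americas"),
    ("animal", "a jaguar stalks prey along the river"),
    ("car", "the jaguar is a british luxury car brand"),
    ("car", "he drove a vintage jaguar to the show"),
]

def build_context_vectors(snippets):
    """Count, per sense, the words that appear alongside the term.

    Each sense's Counter acts as a crude context vector: a count of
    word uses under that specific meaning.
    """
    vectors = {}
    for sense, text in snippets:
        vectors.setdefault(sense, Counter()).update(text.split())
    return vectors

def guess_sense(query, vectors):
    """Pick the sense whose context vector overlaps the query most."""
    words = set(query.split())
    return max(vectors, key=lambda s: sum(vectors[s][w] for w in words))

vectors = build_context_vectors(LABELED_SNIPPETS)
print(guess_sense("fast british car", vectors))   # car
print(guess_sense("cat in the jungle", vectors))  # animal
```

The real system would draw its counts from large knowledge bases like the ones named above, but the disambiguation step works the same way: compare a query's words against per-sense usage counts.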

True or False: "Direct Answers" in Google were introduced by the search engine in 2009, under the name Google Q&A


Solution: They were actually introduced much earlier — in 2005 — in the Google Official Blog, "Just the Facts, Fast," by Jonathan Betz

True or False: These books were the 1999 seed set for Sergey Brin's DIPRE Algorithm, which would crawl the web collecting facts about books: The Robots of Dawn, Startide Rising, Chaos: Making a New Science, Great Expectations, The Comedy of Errors


Solution: The DIPRE Algorithm was filed by Brin as an early Google patent, as a way of crawling the web to collect facts rather than links and anchor text, thus extracting patterns and relations from the World Wide Web.
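The patent doesn't publish code, but the bootstrapping loop Brin described (seed facts, find occurrences, learn patterns, extract new facts) can be sketched in a few lines of Python. The seed pair, page texts, and regular expressions below are my own invented stand-ins, not anything from the patent:

```python
import re

# A seed fact: one (author, title) pair, echoing Brin's book example.
SEEDS = {("Isaac Asimov", "The Robots of Dawn")}

# A toy "web" of sentences; the texts are invented for illustration.
PAGES = [
    "Isaac Asimov wrote The Robots of Dawn in 1983.",
    "David Brin wrote Startide Rising in 1983.",
    "James Gleick wrote Chaos: Making a New Science in 1987.",
]

def dipre(seeds, pages):
    """One round of DIPRE-style bootstrapping.

    1. Find occurrences of the seed pairs in the page texts.
    2. Turn each occurrence into a pattern (the text that sits
       between the author and the title, e.g. " wrote ").
    3. Apply the patterns everywhere to extract new pairs.
    """
    # Steps 1-2: learn "middle" patterns from seed occurrences.
    patterns = set()
    for author, title in seeds:
        for page in pages:
            m = re.search(re.escape(author) + r"(.+?)" + re.escape(title), page)
            if m:
                patterns.add(m.group(1))

    # Step 3: apply each learned pattern to every page.
    found = set(seeds)
    for pattern in patterns:
        for page in pages:
            for m in re.finditer(r"([A-Z][\w.]*(?: [A-Z][\w.]*)+)"
                                 + re.escape(pattern)
                                 + r"(.+?)(?= in \d{4}|\.$)", page):
                found.add((m.group(1), m.group(2)))
    return found

print(sorted(dipre(SEEDS, PAGES)))
```

In the real algorithm the new facts would be fed back in as seeds for another crawl, growing the fact collection with each round.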

Which of the following is not a place where Google has been learning about semantics:


Solution: Google has been extracting information and semantics from data-based tables on the web since they started the Webtables Project, and they published a paper about their findings in doing so titled "Applying Webtables in Practice"

True or False: Google may show off similar local entities in maps results based upon similarities found in query logs and the queries used to find them. The queries may be determined to be high quality or low quality based upon factors such as: [1] if the queries fit categories (high quality) [2] if the queries are geographic (low quality) [3] if the query includes a navigational term (low quality)


Solution: This is described in a Google patent which focuses upon similarities in local entities, and which I wrote about in an earlier post.

The Google patent on "Generating structured information" focuses upon:


Solution: This was one of the first patents from Google that describes how information from commercial data providers, enterprise websites and directory websites might help Google collect structured data about local entities, and publish citations that look for similarities in the facts about a business. The patent is titled "Generating structured information"

True or False: Google may evaluate and compare click throughs, long clicks and other positive human behavior signals on search results from an original query, and from an augmentation query that is selected because it is similar to the original query as a synonym or because of other similarities. If the augmented query scores higher than the original query, the search results from it may be added to those of the original query


Solution: This is described in a Google patent titled Augmentation Queries, which I wrote about in Quality Scores for Queries: Structured Data, Synthetic Queries and Augmentation Queries.

Google came out with a patent about "Combined Content" where they described how they might combine the following types of search results:


Solution: The patent says that they might label these combined content results as "Ads." I wrote about them in the post Google to Offer Combined Content (Paid and Organic) Search Results.

True or False: Google supplied free directory service phone results under the name GOOG-411 to people looking for businesses so that Google could collect voice data for those business searches


Solution: A recent patent about an automated phone business directory sounds a lot like the GOOG-411 service, where people could ask for specific businesses or categories of businesses. I wrote about this in my post Early Days of Voice Search at Google.

True or False: Xin Luna Dong, creator of Google's "knowledge-based trust", developed a way of identifying sites that tended to have accurate and correct information most of the time. As she showed in a presentation, sites that ranked highly according to PageRank, weren't necessarily sites that ranked highly according to knowledge-based trust


Solution: She showed a number of gossip sites on the web that all ranked in the top 15% in terms of PageRank and in the bottom 50% in terms of knowledge-based trust. Her presentation was: No Valuable Data Left Behind: The Crazy Ideas and the Business

Biperpedia is:


Solution: Google wrote a white paper about Biperpedia, titled Biperpedia: An Ontology for Search Applications. They were also granted a patent for it.

True or False: "Related questions" are Google's name for questions that appear on search results pages under headings such as "people also search for". They are "related" to each other by belonging to an answer graph that connects them to one another


Solution: An updated continuation patent from Google about related answers includes the idea of a question graph in its claims to tell us how the questions are related, rather than an answer graph. My post about this updated patent is titled Related Questions Now Use a Question Graph and Are Joined by ‘People Also Search For’ Refinements.

True or False: Google ranks local search results based in part upon location prominence, distance to the user's search device, and relevance of local entity titles. That distance may be the user's current distance away from their search location history


Solution: A Google patent from this year tells us that location history may be used to determine the likelihood that a person might visit a place in search results, instead of just looking at a distance at the time of a search for a place. I wrote about this patent in the post Google to Use Distance from Mobile Location History for Ranking in Local Search.

True or False: Latent semantic indexing was invented by researchers from Bell Communications Research in the late 1980s (before there was a WWW) to index smaller and rarely changing databases, and was patented to prevent others from using the technology


Solution: Some people believe they mean synonyms and semantically related terms when they say "LSI"; however, it is more than slang. It is a technical term with a specific meaning that was protected by a patent, and it means much more than "semantic terms are being used". See the background of LSI in my post Does Google Use Latent Semantic Indexing?

True or False: Google gives equal weight to all reviews, even reviews for places that you stop visiting


Solution: Google tells us about this possibility in a patent which I wrote about in my post Google Giving Less Weight to Reviews of Places You Stop Visiting?

Navneet Panda came up with a new patent that tells us about "quality scores for web pages" based upon language models developed from n-gram statistics on those pages. The n-grams are used to:


Solution: I wrote about this patent from Navneet Panda in the post Using N-gram Phrase Models to Generate Site Quality Scores, which tells us that new sites would be scored against previously scored sites.
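The patent doesn't spell out an implementation, but the core idea (build n-gram counts for a new site and compare them against sites whose quality scores are already known) can be sketched roughly in Python. The cosine-similarity weighting, the example texts, and the scores below are my own assumptions, not the patent's method:

```python
from collections import Counter
from math import sqrt

def ngram_counts(text, n=2):
    """Word n-gram (bigram) counts for a chunk of site text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_new_site(new_text, scored_sites):
    """Score a new site against previously scored sites.

    scored_sites maps site text -> known quality score; the new site
    gets a similarity-weighted average of the known scores.
    """
    new_vec = ngram_counts(new_text)
    sims = [(cosine(new_vec, ngram_counts(t)), s) for t, s in scored_sites.items()]
    total = sum(w for w, _ in sims)
    return sum(w * s for w, s in sims) / total if total else 0.0

# Invented example data: one spammy site, one higher-quality site.
scored = {
    "buy cheap pills buy cheap pills online now": 0.1,
    "our research team reviews each product carefully": 0.9,
}
print(score_new_site("buy cheap pills online today", scored))
```

A new page whose n-gram profile looks like the spammy site inherits a low score; one that reads like the higher-quality site inherits a high one, which is the gist of scoring new sites against previously scored ones.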

True or False: Google may disambiguate people who share a name by looking at them in knowledge bases, to learn about the differences between them by collecting context terms about them, so it can tell apart people that you might write about


Solution: Google may learn about people who share the same name by using knowledge base information to help it tell them apart. I wrote about a patent that describes this in the post Google Shows Us Context is King When Indexing People.

