Google has sprinkled some new ingredients into its search engine in an effort to prevent bogus information and offensive suggestions from souring its results. The initiative includes tweaks to the algorithm Google uses to deliver search results, along with new options for users to flag inappropriate content.

Besides taking steps to block fake news from appearing in its search results, Google has also reprogrammed “autocomplete,” the popular feature that tries to predict what a person is looking for as a search request is being typed. The tool has been overhauled to omit derogatory suggestions, such as “are women evil,” and recommendations that promote violence.

Paul Haahr, a lead search engineer at Google, said the problem affects only about 0.1% of queries but remains an important one. Haahr explained that there are times when people specifically want to find hateful or inaccurate information: on the inaccurate side, they may enjoy satire sites; on the hateful side, they may themselves hate the people targeted. Google should not prevent people from finding content they want, he said.

The quality raters’ guidelines explain, with key examples, how raters should rate such pages.
For instance, here’s one for a search on “Holocaust history,” showing two different results that might have appeared and how to rate them:
The first result is from a white supremacist site. Raters are told to flag it as Upsetting-Offensive because many people would find Holocaust denial offensive.
The second result is from The History Channel. Raters are not told to flag this result as Upsetting-Offensive because it’s a “factually accurate source of historical information.”