Google’s Change to Make Autocomplete Safer
Google’s “Autocomplete” feature, first released in 2004, has been a revolution in search. It suggests complete queries based on the first few letters a user types, shortening the time it takes to search. It’s a handy feature that most users would say they can’t live without, even if it raises some privacy concerns.
Essentially, Google aggregates popular search data and uses it to make an educated guess at what you might be looking for. It can also take things a step further, using a query of “wea” to serve up today’s weather (since that’s what you’re probably after). But Google’s more intelligent search has a downside: it can be inadvertently offensive.
In a recent article in The Guardian, journalists showed how Google’s autocomplete returned suggestions promoting Holocaust denial, which is both false and highly offensive. These types of results, like celebrity death hoaxes, come from people exploiting popular search data to game the system. Google is adding a new feature aimed at curbing that.
Users will now be able to report a suggested query as offensive, contributing to a growing body of reports that Google will use to try to “teach” its algorithm to return non-offensive results.
What’s not clear is how this reporting system will be impervious to the same kind of gaming Google is trying to stop. What if users reported a particular ideology or name as offensive? What if they decided to report a company name or product as offensive? What safeguards does Google have in place to make sure these flagged searches, which might otherwise be valid queries, don’t end up in the filter?
Bio: Submit Express, founded in 1998 by Pierre Zarokian, is a leading SEO firm specializing in reputation management and search engine optimization.