How does Google use human raters in web search?

MATT CUTTS: Hey, everybody. Matt Cutts here, ready to answer another question you've got about how Google search works. We've got a really interesting one today. It's from San Francisco, California: "Can you provide more details on how Google uses human raters as part of their algorithm?" Great question. I'm going to try to narrow it down a little bit first. By human raters, I assume you mean people who are paid by Google; that is, you're not talking about people who are blocking results in the Google search results, or using the Chrome extension to block things. You're actually talking about people who are rating results. I'm also going to assume that you don't mean people working on web spam. I've made other videos that talk about how Google takes action, and is willing to take manual action, on web spam, but you're talking about raters. You used the word raters, so let me drill down on that a little bit.

Raters are really not used to influence Google's rankings directly, so let's walk through exactly how they are used. I'm not a member of the Search Quality Evaluation Team; I work on web spam. But I can basically paraphrase the process, because that's where the human raters come into play. Suppose an engineer has a new idea. They're thinking, "Oh, I can score these names differently if I reverse their order, because in Hungarian and Japanese that's the sort of thing that can improve search quality." Now, we have already rated a large quantity of URLs, and we've said this one is really good, this one is bad, this URL is spam. There are hundreds of raters who are paid to look at a given URL and say: is this good stuff? Is this bad stuff? Is it spam? How useful is it? Is it really essential? All those kinds of things.

So once you've got all those ratings, your engineer has an idea. He says, "OK, I'm going to change the algorithm." He changes the algorithm and does a test on his machine, or here on the internal corporate network, and then you can run a whole bunch of different queries. You can ask: OK, which results changed? You take the results that changed, you take the ratings for those results, and then you ask: overall, do the results that are returned tend to be better? Are they the sort of things that people rated a little bit higher rather than a little bit lower? If so, that's a good sign; you're on the right path. It doesn't mean that it's perfect. Raters might miss some spam, or raters might not notice some things, but in general you would hope that if an algorithm makes a new site come up, that new site would tend to be rated higher than the previous site that came up.

So imagine that everything looks good. It looks like a pretty useful idea. Then the engineer, instead of just doing some internal testing, is ready to go through a sort of launch evaluation, where they ask how useful this really is. What they can do is generate what's called a side-by-side, and the side-by-side is exactly what it sounds like: a blind taste test. Over here on the left-hand side, you'd have one set of ten search results, and on the right-hand side you'd have a completely different set of ten search results. If you're a rater, that is, a human rater, you'd be presented with a query and those two sets of search results, and given the query, you say, "I prefer the left side," or "I prefer the right side." Ideally you give some comments as well, like, "Oh yes, number two here is spam," or "Number four here was really, really useful." Now, the human rater doesn't know which side is which (which side is the old algorithm and which side is the new test algorithm), so it's a truly blind taste test.

What you do is take that back and look at the stuff that tends to be rated as much better with the new algorithm, or much worse with the new algorithm, because if it's about the same, that doesn't give you as much information. So you look at the outliers, and you ask, "OK, do we tend to lose navigational home pages? Under this query set, do things get much worse?" Then you can look at the rater comments and see whether they could tell that things were getting better.

If things looked pretty good, then we can send it out for what's known as a live experiment. That's basically taking a small percentage of users and, when they come to Google, giving them the new search results. Then you look and ask: OK, do people tend to click on the new search results a little bit more often? Do they seem to like them better, according to the different ways that we try to measure that? If they do, that's also a good sign.

Now, people can get it wrong. For example, raters and regular users don't always recognize spam, so you could launch some change that got rid of a whole bunch of spam, and people might still think it was not as good. So it's no substitute for the intuition and the experience that the search engine engineers have, but we do take the evaluations of the human raters, as well as the analysts who evaluate those results, very, very seriously. We want to make sure that we're launching a change that's overall a big improvement, or ideally at least an improvement, for users.

So as you can see, if I rate the left side or the right side as better, that doesn't change the algorithm. The human raters within the evaluation group are used to say, "We think this would be better," or "We think this would be worse," but those ratings don't directly affect the search engine results.

Very good question; I'm glad you asked it. I'm glad it gave me an opportunity to talk about how we think about launching a search change: how do you tell if it's really an improvement? How do you tell if you've missed anything? Can you evaluate it in different languages and see whether it looks better across all those different languages? Those are the kinds of things that we think about. But just to dispel the misconception: there isn't a group of raters such that, when they rate something as bad, or don't think a result is as useful, that result starts to drop in the rankings. That doesn't happen. The only time that sort of thing happens is when we're taking action on web spam, and that's a completely different group. We've talked a little bit about that, and we could cover it in a different video.

But I hope that helps. I hope that explains a little bit about how we think about whether to launch a search change or not, and explains when human raters are used, what they're used for, and how their expertise helps us make Google search results better. Thanks very much.

30 thoughts on “How does Google use human raters in web search?”

  1. How can raters tell whether content on a site is useful or not (spam or not) if they might not be familiar with the topic they are evaluating?

  2. Google search is falling every day; someone just needs to click the most-watched videos on YouTube to realize that.
    The first search competitor that comes out with a good idea will erase Google from the internet.

  3. Interesting, I like that you use a blind test to see if the new or old algorithm gets to the next test.

  4. Every video I see from Matt gives the same message over and over again with different questions: "If you don't do anything bad and add value for that search keyword, we will rank you well."

  5. Thanks Matt – good video and very useful to many, I'm sure, as it lifts the curtain a little bit to expose the wizard behind it 🙂

  6. OK, let's say my topic is hypnosis or politics. Do you hire raters who are professional hypnotists or political scientists? If not, then how do those raters know the quality and relevance of the content? Do Google's raters know all spheres of life? I think with these "raters" Google makes mistakes and goes against its own principle of "search relevance". How can "raters" help with algorithms (which must help to get RELEVANT results on Google) if they don't know the topic?

  7. Two algorithms compared on the same queries: a cool way to determine the results internally before rolling the changes out to everyone.

  8. I used to be a rater, and yes, we are evaluated. The test to become a rater takes hours, and you have to study a manual that's over 100 pages.

  9. The way that pages are evaluated is not as direct as you would think, so this concern is a bit irrelevant.

  10. There are very specific instructions they must follow. But it's also about their experience in their locales as users and their intuition and common sense.

  11. If efforts to make "the best website" possible are already being attempted by a webmaster… how does an understanding of human raters support a webmaster's ongoing efforts? Unless a site ranks #3 or so for something… the site might not ever truly understand what's holding it back from reaching a top ranking. Yes?

  12. I've long suspected that human raters are the cause of the perceived bias toward Google-owned assets in Google's organic SERPs. What steps (if any) are taken to prevent the inherent pro-Google bias of the Google-paid human raters from favoring the result sets that include Google-owned assets? Wouldn't such bias encourage adoption of algorithmic changes that favor Google-owned assets and discourage adoption of those that would be unfavorable to Google's bottom line?

  13. Why don't they just let everyone rate? I mean, that would work better than letting a few hundred people do the rating. Plus, the human raters might not be as knowledgeable about the topic, so they won't be able to decide which result is better than the other.

Leave a Reply

Your email address will not be published. Required fields are marked *