How Our Bias Affects Reviews

You are in a store, holding a movie in your hand. You’re planning to buy it, because you looked at the Internet Movie Database (IMDB) earlier, and saw it has a 7.5 out of 10 average rating from 200 people. But you haven’t read any full-length reviews. A stranger sees you standing there and comes over to you.

‘I’ve watched that,’ he says. ‘It was awful. I didn’t like it at all.’ And he walks out of the store.

Would you still buy the movie?

Maybe you would, maybe you wouldn’t. But studies have shown that most people would think pretty seriously about it before making a decision. In other words, the impact of that one face-to-face discussion far outweighs the 200 opinions from the online ratings aggregator.

But what has really changed? Before, there were 200 ratings with an average of 7.5. Now let’s say the stranger in the store gives the movie a rating of 2. Now there are 201 opinions, with an average of 7.47. A few hundredths of a point really shouldn’t matter. You don’t know any more about the movie itself. The stranger’s comments were very generic. So why does it make you think twice?
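
For the numerically inclined, here is the arithmetic behind that barely moved average as a quick sketch in Python; the 200 ratings, the 7.5 average and the stranger’s rating of 2 are the figures from the example above.

```python
# One low rating barely moves an established average.
old_count = 200
old_mean = 7.5
new_rating = 2

new_mean = (old_count * old_mean + new_rating) / (old_count + 1)
print(round(new_mean, 2))  # 7.47, a drop of under 0.03
```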

We are social creatures. As such, we are programmed to respond to individuals in a very different way than we do to abstract data. Our brain doesn’t see this situation as 201 people reviewing something. It just sees two pieces of information: IMDB liked it. A stranger didn’t like it.

Psychological studies have shown that we respond the least to numbers, more to written reviews, and most to face-to-face encounters.

In one study, students were split into four groups, each given different information about courses, and asked to choose which they would take. One group was given just the course titles. A second group also got a numeric rating. The third group was given written reviews of the classes, although they were phrased in quite a generic way. And the fourth group had someone sit with them and verbally give them the same written review.

The numeric ratings had a slight impact on course selection, and the written reviews more. But the face-to-face encounter had the biggest impact by far – even though neither the written nor the verbal presentation gave any real basis for decision-making. The study also found that more weight was given to negative reviews than to positive ones. One bad apple can indeed spoil the bunch.

You should be aware of this when you read a review – especially when you read one from someone you don’t know. Of course, the more information there is in a review, the more you can get out of it. And the more reviews you read from a particular reviewer, the more you understand what they like and don’t like, and how that matches up with your own preferences.

This is why more frequent reviewers can be more valuable than one-shot wonders, even if you don’t always share their opinions. So be aware that your brain is hardwired to give more weight to someone saying, ‘I give this movie a 2, I didn’t like it at all,’ than to just seeing a rating of 2 with no comment. Yet, in reality, both carry the same information. And 100 people each giving a 7 carries even more information, especially compared to one stranger’s generic negative comments.

By understanding our inherent bias towards certain types of information, we can all make more informed decisions.


Prospect Theory

On a slight side note, but still within the realm of inherent bias in decision-making, I’d like to play a game with you. I’m going to give you two options.

• Option A: You get $3000 guaranteed.
• Option B: Roll a die. On a 1 to 5, you get $4000. On a 6 you get nothing.

Which would you pick?

Strictly mathematically, option B has the higher expected value. Your average gain is about $3333 (a five-in-six chance of $4000), while in option A it is $3000. But most people, about 80 per cent, choose option A, the sure thing.

Here’s another game. I’m going to give you two new options.

• Option A: You lose $3000 guaranteed.
• Option B: Roll a die. On a 1 to 5 you lose $4000. On a 6, you lose nothing.

How about this one?

Well, in this case, a whopping 92 per cent of people take the second choice – even though it actually increases their expected loss, from $3000 to about $3333. In both examples, the vast majority of people choose the mathematically worse option.
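
As a sanity check on the arithmetic, here is a minimal sketch of the expected values of both gambles; the dollar amounts and die probabilities are exactly the ones from the two games above.

```python
# Expected values of the two games described above.
p_win = 5 / 6  # probability of rolling 1 to 5 on a fair die

# Gain framing: sure $3000 versus a gamble on $4000
ev_sure_gain = 3000
ev_gamble_gain = p_win * 4000       # about 3333: the gamble is worth more on average
print(ev_sure_gain, round(ev_gamble_gain))

# Loss framing: sure -$3000 versus a gamble on -$4000
ev_sure_loss = -3000
ev_gamble_loss = p_win * -4000      # about -3333: the gamble loses more on average
print(ev_sure_loss, round(ev_gamble_loss))
```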

What’s going on here? Is it just that people are bad at math?

This experiment was performed by two psychologists, Daniel Kahneman and Amos Tversky. They ran a variety of experiments that repeatedly showed that people do not operate strictly by rational judgements. Based on these experiments, they developed a theory to explain this behaviour called ‘prospect theory’, which has grown into a fertile area of research and offers a rich theoretical foundation for economics, finance, insurance, psychology and other fields. Kahneman received the Nobel Prize in Economics in 2002 for the development of prospect theory. Tversky would have shared the prize, but sadly he had passed away a few years earlier.

The amazing thing is that neither Kahneman nor Tversky was trained in economics. They were both cognitive psychologists and wrote many highly readable books about decision-making. I particularly recommend Thinking, Fast and Slow.

Now, in a nutshell, prospect theory says a few things. First, people have different risk attitudes towards gains and losses: people are less likely to take risks to increase their gains, but they are more likely to take risks to avoid losses. This is clearly seen in the examples above, where most people wanted the sure $3000, but would take a gamble – a bad gamble, in this case – to avoid losing $3000.
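
One common way this asymmetry is formalised is the prospect-theory value function from Tversky and Kahneman’s later work. The sketch below is only an illustration: the exponents and the loss-aversion coefficient are their frequently quoted 1992 estimates, not anything stated in this chapter.

```python
# A sketch of the prospect-theory value function: concave for gains,
# steeper for losses, so losses loom larger than equal gains.
ALPHA = 0.88    # curvature for gains (Tversky & Kahneman, 1992 estimate)
BETA = 0.88     # curvature for losses
LAMBDA = 2.25   # loss aversion: losses hurt roughly twice as much as equivalent gains

def value(x: float) -> float:
    """Subjective value of a gain or loss of x dollars relative to the status quo."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

print(round(value(3000)), round(value(-3000)))  # the $3000 loss feels far larger than the $3000 gain
```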

Second, the way the problem is presented is critical. This is called ‘framing’. You can present the same problem to people to make it seem like they are losing instead of gaining. So, you can manipulate what they decide.

For example, here’s a slight variation on the original problem.

Participants were asked to imagine that the US was preparing for the outbreak of a disease that was expected to kill 600 people. The first group was asked to select between two different courses of action, with the following outcomes:

• A: 200 people will be saved.
• B: There is a 1/3 chance that 600 people will be saved, and a 2/3 chance that no people will be saved.

Over two-thirds (72 per cent) of people preferred option A, to definitely save 200 people.

A second group was presented with these outcomes:

• C: 400 people will die.
• D: There is a 1/3 chance that nobody will die, and a 2/3 chance that 600 people will die.

Now, in this framing, 78 per cent preferred option D, with the remaining 22 per cent opting for the option where 400 people die for certain. However, A and C are exactly the same, as are B and D. The only difference is that A and B are expressed positively, in terms of saving people, while C and D are expressed negatively, in terms of people dying.
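
Laid out as expected outcomes, the equivalence is easy to verify; this quick check just uses the numbers from the scenario above.

```python
# Expected number of people saved under each programme (600 people at risk).
at_risk = 600

ev_a = 200                                  # 200 saved for certain
ev_b = (1 / 3) * at_risk + (2 / 3) * 0      # one-third chance all 600 are saved
ev_c = at_risk - 400                        # 400 die for certain, so 200 are saved
ev_d = (1 / 3) * at_risk + (2 / 3) * 0      # one-third chance nobody dies

print(ev_a, ev_b, ev_c, ev_d)               # all four work out to 200 people saved
```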

One game that explores this is Deal or No Deal.

In this US TV game show, the contestant is presented with 26 briefcases, each of which has a hidden amount of money, ranging from one cent to $1,000,000. The contestant picks one to keep, and then selects cases to be opened, removing them from the game. So, the remaining pool of dollar amounts gradually shrinks.

At certain points in the game, the ‘banker’ makes the player an offer for their case, which they can either accept, ending the game, or reject in order to play on, opening more cases.

This game is basically a psychological exploration of prospect theory. A fair offer from the banker would be the average of the remaining dollar amounts. But the banker never offers this. They always offer significantly less. So, a strictly mathematically inclined person should never take the offer.
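
As a rough illustration of what a ‘fair’ offer would look like, the sketch below just averages whatever case values remain; the board shown is made up for the example and is not the show’s actual lineup.

```python
# A 'fair' banker offer would simply be the mean of the remaining case values.
remaining_cases = [1, 100, 5_000, 75_000, 400_000, 1_000_000]  # hypothetical remaining board

fair_offer = sum(remaining_cases) / len(remaining_cases)
print(round(fair_offer))  # 246684 for this board; the show's offers come in well below this
```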

But then, a mathematical answer is not always a correct answer. Consider a simple game where you roll a die, doubling your money on a 1 to 5, and losing everything on a 6. Mathematically, every roll is in your favour, so you should continue to play indefinitely. Realistically, though, we know you have to stop at some point: keep playing forever and you are certain to roll a 6 eventually and lose everything.
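
To see why ‘keep playing forever’ fails in practice even though every single roll is favourable, here is a minimal simulation of that double-or-bust game; the number of trials is an arbitrary choice for the sketch.

```python
import random

# Double-or-bust: each round, a 1 to 5 doubles your money and a 6 wipes it out.
# The per-round expected value is (5/6) * 2 = 1.67x the stake, yet anyone who
# never stops is certain to roll a 6 eventually and walk away with nothing.

def rounds_until_bust() -> int:
    """Count how many successful doublings happen before the inevitable 6."""
    rounds = 0
    while random.randint(1, 6) != 6:
        rounds += 1
    return rounds

random.seed(1)
trials = [rounds_until_bust() for _ in range(10_000)]
print(sum(trials) / len(trials))  # about 5 doublings on average before everything is lost
```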

So, people do take the banker’s deal. And the amount of money they are willing to leave on the table, to be able to walk away with a sure thing, helps economists fine-tune prospect theory, and learn more about people’s risk tolerance.

Keep this in mind when negotiating in a game, or trying to anticipate what people will do. Most people will take a sure gain, but take a risk to avoid a loss.


Review or Analysis?

Tim's recent series on Final Fantasy 7, specifically on the game's localisation, is an example of analysis.

When we talk about a board game ‘critic’, as with many things, the terminology can be a little fuzzy. Usually we are referring to somebody who critically reviews games, similar to a movie reviewer or film critic. That is the first type of critic.

The second type of critic performs critical analysis – looking at themes, historical context and comparable works. Literary criticism is often of this ilk. So, we’ve got two types of criticism: review and analysis. They serve two completely different purposes, and conflating them leads to a lot of confusion.

Reviews are there to help you decide whether to buy a game, or see a film, or read a book. Analysis is there to help you think more deeply about the topic and see connections or features that weren’t readily apparent at first glance. It can also help you understand the period in which the game was developed, or how the game reflects the designer, or how it fits into their larger body of work.

So, reviews by their very nature are time-sensitive. It is no accident that film reviews are typically published the day the film is released in theatres. Similarly, in the game world, it sometimes feels like there is a race between reviewers to get that first review up. It does seem that the earliest reviews garner the most attention, regardless of their ultimate utility. And it is rare that a review that comes out for a five-year-old product – whether it is a game or a film or a book – will attract that much attention.

Analysis is the exact opposite. Analysis requires time and perspective. Analysis requires that the critic be versed in many examples of whatever genre they are working in, be it poetry, literature, or film. Analysis requires expertise, and expertise requires time.

A game review does not require a game expert who has played hundreds or thousands of different games. Anyone can write a review that says what the game is about, how it plays, what works, what doesn’t, and why they loved or hated it. A review is a snapshot of one person’s reaction at a particular moment in time.

Sometimes reviewers attempt to incorporate analysis. The New York Times Book Review, for example, is famous for reviews that attempt to both review the book and place it in a larger literary context. But though I enjoy reading those reviews, I often find, at the end of one, that I’m not sure whether I should buy the book or not. I’m just not given enough practical guidance, or enough of the reviewer’s personal reaction, to make a judgement.

But there’s a twist for games that doesn’t exist for films, books, or even art. There is an expectation that board games will be played multiple times. There are certainly games that reveal all their depth on one playthrough, but others, especially classics like Go, Chess or Bridge, only reveal their depth and subtlety over time, and demand multiple plays.

So, are game reviewers obligated to play a game multiple times before rendering judgement?

This topic flared up in early 2012 over the game A Few Acres of Snow, where some felt that hidden depths or intricacies in the game were not being explored by early reviewers.

I don’t think they are. There are certainly examples where those who play a game multiple times will have a deeper appreciation of it, and that will come through in more in-depth reviews. And the thoughtful reader of reviews will take that into consideration. Review is not analysis, however.

Analysis demands multiple plays. The ability to develop deep thoughts about a game – to put it into a context – requires familiarity with and exploration of the designer’s intent. And regardless of the simplicity of the game, you can’t get that with a single playthrough. If someone is doing an in-depth critical film analysis of 2001: A Space Odyssey, or a dissection of Hamlet, I fully expect that person to have gone through the material more than one or two times. And I expect no less from game analysis.

So, as I read an article about a game, I try to put the article into the review or analysis bucket – and, depending on how it’s classified, I approach it differently and have different expectations. From one, I am looking for quick reads on what a game is about and what works and what doesn’t. For the other, I am looking for deeper analysis, multiple plays, and more thought. It is very rare for anyone to combine these two forms, for obvious reasons.

In the board game world, there are many reviewers, some better than others, but a real lack of analysis. I think this gap is the crux of much of the bemoaning about the state of board game criticism. But it is unfair to lump the two together – review and analysis are really two different animals with two different expectations. If a review of A Few Acres of Snow doesn’t discuss the presence of a killer strategy, that’s fine by me. But a piece of critical analysis of the game would be sorely lacking if the killer strategy were omitted.


The above chapter is an excerpt from GameTek, a book about the big questions of life through games by Geoffrey Engelstein. Engelstein is an adjunct professor of Board Game Design at the NYU Game Center and an award-winning table-top game designer.

He has degrees in Physics and Electrical Engineering from the Massachusetts Institute of Technology, and is currently the president of Mars International, a design engineering firm.

Since 2007 he has been a contributor to the leading table-top game podcast Dice Tower, presenting ‘GameTek’, a series on the math, science, and psychology of games. He also hosts Ludology, another weekly podcast on board games.




Comments

    Yes I would buy the movie. I only go to the movie shop if I already know what I am going to buy. Either something terribly cheesy (The Room, Hardcore Henry) or the latest Pixar or Marvel flick. Heck, most of it is even easier online.

      I'd buy the movie anyway, even if I hadn't checked the reviews online. Why? Simple: the short comment from one random stranger is meaningless.

      Firstly, there is no correlation between their tastes and mine, maybe it's a horror movie and they hate horror. That alone invalidates their "review".

      Secondly, there's no detail to that "review". Without going into depth about why they hated the movie there is no way to gauge the quality of their review. Was the acting bad, story bad, effects crap, etc?

      As for the dice roll problem, I think the issue is not that statistically you get more from rolling the die; it's the difference between the min and max of the problem. The range is 0 to 4000. 3000 is closer to 4000 than to 0, so people are willing to take the guaranteed 3000. It's still far greater than 0.

      I think the same logic applies to the negative version of the problem: there is greater gain (or rather lesser loss) if you roll the six and lose nothing. It's the gap between the losses, 0 to 3000 versus 0 to 4000, that triggers the response.

    Can't say that I disagree with anything here, fantastic article.

    I like the review/analysis distinction. This fairly succinctly explains some of the huffing and puffing one sees on this site and others whenever an analytical piece is published.

    The 'I don't want no context in my reviews' crowd would benefit significantly from just accepting that analysis of a game is not synonymous with a review, that analytical criticism can't be translated into some 'objective' score out of 100, and most particularly shouldn't justify dumping hard on the article's author for the temerity to criticise your favorite genre.

    On a largely unrelated note, I always go straight for the negative reviews. I find them infinitely more useful than page after page of paid-off reviewers, fluffy praise and thrice-tired memes.

    edit: Gah, straight to moderation hell without even a typo edit. Was it the reference to paid-off reviewers wot did it?


      Agree with the negative review comment, I always look at those as they're usually honest and straight to the point.

      I mainly look at genre when buying a game. I play a lot of RPGs and almost anything sci-fi, so Mass Effect was a big favourite; I didn't need reviews for that.

      I also like strategy and survival games. Sheltered was a great game, and Planetbase and Surviving Mars are also great.

        Same here. I look at the negative reviews and then determine if I think those particular issues will affect my gameplay at all. If there are a few negative reviews that agree with each other then I pay more attention to them.

      I think negative reviews can be every bit as bad as fake positive reviews. Look at the review bombing that happens regularly. All it takes is a popular streamer to dislike something and its reviews get nuked. Or a game like Diablo Immortal which is going to be negatively reviewed because it's NOT Diablo 4, regardless of how good a game it is based on its own merit.

      It's honestly pretty hard to trust *any reviews*, since even the aggregate of reviews can be affected by review bombing or boosting. Really the only reviews I trust are where I am familiar with the reviewer's tastes and trust their opinion.

        Every review needs to be read for useful information. Self-evidently, review bombs aren't going to be especially useful unless you have a particular beef against Denuvo, or Taiwanese independence, or whatever. Although certainly knowing that a review bomb has happened can be informative as well, suggesting that the score aggregate is artificially low.

        Nonetheless, the advantage of negative reviews is that if the issues raised don't seem very significant to me I feel much safer buying the game. The odds are good that the rest of the game isn't too bad if these (potentially irrelevant) things are all that people are complaining about. (Think people complaining that Dark Souls is ‘too hard’, assuming that I like hard games.)

        All reviews ultimately have to be parsed through your own bullshit filters for sure, but on balance negative reviews longer than a couple of sentences in my experience tend to be more detailed, balanced and informative than positive ones. The particularly useful ones often start with “This would be a great game, but…”


          I think in general longer reviews are more useful whether they're positive or negative. Simply because they give you more information to work with and more of a picture of the reviewer's thinking (and that they are actually thinking).

    Show me a person who says their review is objective and unbiased and I'll show you a liar. Reviews and reviewers are one of my favourite things to dissect (Not literally for the latter! Maybe) because of the many variables and cognitive biases at play which then pass onto you, the reader. I agree that numbers are absolutely useless, it's like trying to answer "Recommend me a movie" to someone you know nothing about.

    Negative reviews can be just as bad as positive ones though depending on what your definition is. In reality, you want a critical analysis that gives equal consideration to the positives and negatives. If all you see is someone focusing on what they hated or loved about something then it definitely will bias your own thinking because of social acceptance tendencies. (It's cool to hate on things)

    You also want to hear from people at various stages through the entire experience because people who play through the whole game tend to suffer from recency bias where they'll let how they last felt taint their entire impression of something.

    Site based reviewers are also notorious for being biased against games that aren't the type of game they like. It's a common and sometimes unavoidable problem with sites where you only have a certain amount of staff, each with their own tastes. It can help show how something might be liked by someone with different tastes but most of the time you're reading the review because you like those types of things and want to hear about it from a similar frame of reference.

    Basically the best way to approach reviews is to get a variety of opinions and make sure that you find out both the good and bad points. Most importantly though, never buy into hype, both positive and negative.

      I had the idea about 15 years ago to set up a sort of meta review site that matched reviewers and consumers based on personal tastes. So you'd go through a sort of questionnaire that established your likes and dislikes. Then that would be used to weight reviews based on the reviewers' responses to those same questions. If you had a strong match, for example, the weighting of the review would be basically 100%. But the weaker the match, the lower the review would be weighted, down to a point where you basically shouldn't trust the review.

        Hell I just had a think about it, and it was probably 20+ years ago because it was pre-Y2K.

        Damn, where have the years gone :(

    Buying a movie before watching it? Oh dear.

    I do take general sentiment into account, but because reviews from all the major publications are so incredibly inconsistent and unreliable I just watch a video review or two on youtube (which generally don't include a score of any kind) and make my decision based on the footage and the information about the game.

    Ever since GTA4 got a 10/10 I've ignored reviews. That and the fact that publishers can potentially force good reviews either through cash or withholding review code.

    Gameplay videos will do me for making the call to buy or not.

      I'm with you. I don't even watch or read reviews at all any more. Not for the decision anyway. I just watch gameplay vids/vods and if i like the setting, mechanics and/or am pulled towards the game for whatever reason, i get it.

        It must be nice to have enough money to be able to purchase games exclusively based on the quality of a publisher's marketing department.

          Ah, i should specify that by gameplay vid/vod, i mean watching a rando play it on youtube or twitch after release.

          Not a gameplay vid released by the publisher.

        I have a twofold problem with that approach. Firstly, a gameplay vid isn't always going to be indicative of your actual experience. Take for example a person playing the game using a controller rather than a mouse/keyboard. You may not realise that's how they're playing and the experience could change drastically. Same if they're a super skilled player with crazy reflexes.

        Secondly, there is the risk of spoilers (depending on the type of game). If it's just a shooter or a racing game it's probably ok. But I'd avoid gameplay vids on a game like Alan Wake or a lot of RPGs because I'd want to experience the moment myself while playing rather than watching a vid.

        And that's before you get to the problem @AngoraFish mentioned: that the video may be fake viral marketing.

        Side note: I hate reviews (for any type of media) that go too deeply into the plot and events. That is straying into outright spoilers. If you feel the need to put them into reviews then for the love of god use spoiler warnings.

          See above for the vid/vod answer. Id prob watch a couple of mins on several dif content creators.

          But spoilers, i generally watch the volume muted and only for a few min on any one stream or vid. I can tell pretty quickly if its a game im interested in.

    i would buy it... but then again, i don't buy movies. games and anime yes, but i only buy Anime that i have already seen digitally. for games you can't try before you buy, so if it sucks meh, i can refund it if it's from Ebgames. Amazon i think you return it. then again, i bought Anthem, it's alright, but i wouldn't trade it/sell it. i make my own decisions. the Warcraft and Assassin's Creed Movies, low scores, but i loved it, would buy in 4K if i had the 4K TV, even in 3D to use on my PSVR.

    More reviews = more material to dissect and form your own opinions. Frankly I find steam reviews as a whole to be way more helpful than those from "professional outlets" simply because they are more often than not pure customer feedback.

    I rarely rely on reviews now, at least for my entertainment $s (still read a review on fridges if I need a fridge, but not on games or movies).
    The last time I actually followed a review / suggestion was when Neverwinter Nights came out. It got bagged for not being like Baldur's Gate and I didn't play it for over a year. Then I tried it and was disappointed that I hadn't played it earlier.

    A recent example of this would have been Mass Effect Andromeda. It was panned, but in all honesty, it's not a bad game.
