I almost didn't write this. It feels a bit like beating a dead horse. But then I drank a cup of coffee and decided maybe some dead horses deserve the beating.
How is it that so many people have reviewed LittleBigPlanet? I'm not asking this because the single-player levels are so insanely hard (they are), but because a good third of the game still isn't really playable.
The servers for online sharing went live just two days ago, ran sporadically and with glitches for a total of about eight hours, and were then taken right back offline by Media Molecule. Currently the servers seem to be back up, but a bit shaky. So how was Metacritic showing 34 reviews for the game and a metascore of 95 yesterday?
I'm not saying LittleBigPlanet doesn't deserve the acclaim (it most assuredly does), but don't gamers deserve to read reviews based on the final product?
The problem, I think, is one of balance. Publications need to weigh the desire to publish timely reviews against the obligation to fully assess a game. That wasn't always so hard to do. Once a game went gold, it was finished. But nowadays that rarely seems to be the case.
Review code sent to reviewers often comes with warnings that the game isn't polished or is missing pieces. Both Ninja Gaiden II and Too Human came with warnings that the official review code wasn't fully polished. Fable II and LittleBigPlanet shipped despite not really being finished; both had major patches land the week the games hit shelves.
What does that mean for reviews? If a review is written and runs the day a game comes out, it can be based on something different from what a person who buys the game will experience. That's not always the case, of course, but it seems to be a growing concern.
I emailed GameSpot, IGN and 1Up yesterday morning to ask about their LittleBigPlanet reviews, specifically about the timing of them and how they dealt with the online play, which was temporarily dead in the water.
Here's what they said on the issue:
"We actually waited until the game was officially released and live before our reviewer finished writing his text (he was at home working on it for the first part of yesterday). In cases where there's a significant enough online component, we'll typically wait until we can test it in real-world situations before giving the game a final grade."
Review Run Date: Oct. 13
"The final game code that we received used the same online servers as the beta test, which we were in for a few weeks before LittleBigPlanet was released. We were given full access to the online component in the game for our review, so we were able to do everything that the retail game allows, including four-player sessions with folks in later levels of the game."
Review Run Date: Oct. 16
"LittleBigPlanet was reviewed using a retail copy of the game that was able to connect to a server set up by SCEE ahead of the full release. This, on top of time spent checking out user-created content in the beta, afforded us ample opportunity to test all of the game's online features. We weren't able to see how the retail servers will perform under the weight of hundreds of thousands of users, of course, but that's true of any review for an online game that comes out prior to or even right around the day of release. If the online portion of the game doesn't perform properly at launch then that's something we'll report on."
I suspect many of the reviews relied on the beta for their online impressions, which doesn't seem like a very good decision to me. A better option might be to hold a review until the reviewer has retail, boxed code in hand; then they know they're writing about the same game people will buy. But even that has issues. If a game has major online elements, those often won't be up and running until the game is released. Sometimes, as with an MMO, even writing a review the week of release is problematic; sometimes the month of release is.
For now the best solution seems to be for reviewers to be very specific about what they could and couldn't play in the game they reviewed, and, when playing online, to mention whether they used the public servers or ones set up specifically for reviews.
At Kotaku we fully disclose in our reviews what we played and how. We also often wait for final, boxed review code rather than using debug code. Now I'm considering changing our review policy to also note which we used in our explainer paragraph.