In 2015, We Might Finally Get Next-Gen Visuals

Polish company Better Reality, who do effects work for advertisers and Hollywood, have developed a piece of technology called Thorskan. It’s designed to scan real environments then recreate them in 3D. And the effects are impressive.

While the tech has been built for – and is being used by – studios like 20th Century Fox, Better Reality say it can also be used for video games.

One studio currently using the tech is fellow Polish outfit The Farm 51 (Necrovision), whose PC/XB1/PS4 game Get Even will use Thorskan to recreate real environments.

There’s no footage of Get Even available per se, just some screenshots (like the one above), but there is some test engine footage, which you can see below.

Easily the most disappointing thing about the PS4 and Xbox One’s launch was that – even though it was to be expected – there were no games that really showed us what these new systems were truly capable of. Exclusive titles were either rushed or relied on a barrage of particle effects, while multi-platform games were hamstrung by last-gen development.

It’s stuff like this that’s going to signal the real beginning of next-gen visuals. If, you know, it actually looks that good in a game. Hence the “might” in the headline.

Get Even has a very tentative release date of 2015.


  • Very impressive. All I can say is that the vision “through” the windows breaks the immersion as for obvious reasons it looks like (and is) a painting more than a window.

    Also somehow I *know* that the pile of rubbish is a solid lump and not something that you could dive into and hide in. I wonder if there’s much point striving for visual realism when there’s a lack of interactive / physics realism – wouldn’t it make it rather jarring, similar to the Uncanny Valley?

    • I wonder if there’s much point striving for visual realism when there’s a lack of interactive / physics realism – wouldn’t it make it rather jarring, similar to the Uncanny Valley?

      This has been an issue for me for a long while now. One of the best examples is Red Faction, where you can blow a hole through almost any game geometry – except office partitions, those are harder than Martian rock.

      I must say GTA5 was quite a pleasant experience – so many things that I expected wouldn’t be there were. I can’t give you an example right now because the details were often so small, but it all adds up to keep you immersed.

      Unfortunately, most games these days are almost on rails. You can look around a little bit, but nothing will happen until you, the only important character in the game universe (who usually doesn’t speak a word when asked questions), finally move onto whatever glowing objective designates the next stage in this interactive novel.

      I think sandboxes are doing well because they’re not just dressing up random corridors, they’ve got to give the world a bit of substance, even if it’s just having a blast destroying the world (Just ‘Cause.)

  • the environment looks great, but it’ll be the characters that will ruin the illusion when they’re put into the picture

  • EDIT: I read the article again and yes, it’s meant to automate the whole process by scanning things. Now that is awesome!

    Looks great, however I wonder how long it takes to create a model just for that one interior. Multiply that by an entire game and it’s not so much that we don’t have tech to achieve this, we just don’t have human time to sit there carefully creating all the environments with that much detail.

    Someone needs to go out, take heaps of photos for textures, model everything, apply the texture, work on the lighting, etc. It takes ages.

    • The technology scans the environment and recreates it. There is no photographing of individual elements, creation of textures or individual modelling. The lighting is baked-in, and would require extra lighting to be added for shadowing purposes. Aside from real-time lighting, the technology does everything, and that’s what makes it so impressive. It makes environs like these achievable in a realistic timeframe. The fact that this is scanning technology should be immediately apparent just by seeing the name.

      Did you even read the article, or are you just thick?

    • that’s the thing that I’m thinking. It used to be that someone coding a game in their bedroom could pump out a game a week in the 80s. Now it takes a team of up to 100 (maybe more) to create a game over 2-3 years.

      As much as we as gamers want to see fully interactive and/or destructible environments more detailed than ever before, it’s going to be logistically impossible for developers to make it happen unless we’re happy with Duke Nukem Forever-style waits between games (although maybe that’s what the wait for Half Life 3’s all about).

      I hope that the Xbone and PS4 somehow embrace the modding community as that’s where we’ll see truly realistic games on PC at least

      • Indeed. I think I read somewhere that it took Rockstar over 1,000 people to make GTAIV. Over the course of many years.

        I was dicking around with the L4D2 tools on the weekend, trying to replicate my own house as a ‘thing to do when bored on the weekend’ and it was hard enough modelling it using the standard library of objects, let alone creating my own models and textures.

    • The point of this system is that it’s a 3D scanning camera setup.

      There is no modeling. There is no texturing. All data is captured by their camera system, then reassembled into the virtual environment they showed. The point of their system is to remove the need for thousands of hours of work creating models, textures, lighting, etc.

    • Imagine when they’re making movies like The Hobbit where they create staggeringly detailed sets. If they could just scan them all and use them in the games that’d be great.

  • That environment would look great in a Last of Us game, but if they used the character models from the first game they wouldn’t fit in well, and if they made new photo-realistic models then they would lose their charm and cross the uncanny valley.

  • I wonder if this is what they’re using for The Vanishing of Ethan Carter. It’s also using photogrammetric technology (i.e. using photos of real-world environments to create a 3D world) to generate its environments. While it looks good as a still, and a little photo-realistic in motion (it still looks flat in parts), the real test is going to be when there is complex lighting and moving objects (e.g. wind, cars, humans, animals, running water).

  • Now I am no expert, but surely this is fairly useless for game design? Everything would have to be unique objects and textures, which would blow out the required storage for the game as you aren’t reusing assets.

    Also the properties and appearance of materials is another big issue. This would just create flat textures. How would dynamic light, reflections or even destruction work? If you want to capture a static environment then OK. I guess you could light the environment when capturing to then use the engine to light it, but once again, what’s the point? It’s surely got to be a lot of work, when it’d probably be easier to make the environment from scratch.

    Also the use would be very limited. How many games do we play that actually take place in real places? Do you want game design limited by where they can go and do scans of places? Surely a hybrid of some scanned locations and created from scratch or altered scans would be inconsistent.

    So yeah, no.

  • Why so many cynics… obviously not every game will use this but in the right circumstances this would look amazing.

    • agreed.

      People tend to forget what the launch-title games on the 360 looked like. Go and fire up Perfect Dark Zero, then load up Battlefield 4 and tell me which shooter looks better.

      Y’all motherfuckers need patience

  • It’s a pity the next gen consoles won’t be powerful enough to render this. Hell, even some of the really high-end PCs probably can’t run this.

  • Yes, textures are the future. Lighting, tesselation, and everything else is unimportant.

    • Yes, this video certainly does propose that lighting, tessellation, and everything else is unimportant, that’s exactly what this tech is all about.

  • So how’s this thing going to do animations?
    Or bump mapping?
    Or reflections?
    Or make anything that isn’t based on a real place?
    I could see this being used for a skateboarding game because those take place in an area with very little in the way of reflective surfaces, particle effects, destructible terrain or semi-solid objects but I couldn’t see it being used for anything else like a fantasy game or an FPS.

  • I just want characters that don’t look like rejects from a dreamworks animated movie!

  • There was an Aussie guy with a similar idea, I believe. Sounded like a pretty smart bloke! Getting stuff to real-time render off basic laptops using what I think was cloud scanning. Pretty interesting take on the whole VR world.

  • there were no games that really showed us what these new systems were truly capable of
    Just get a 2-year-old PC, drop your resolution to 720p, crank it up to max details, and voilà. That is where PS4/XBone will be in a couple of years.
