Neural Network Makes Doom's Graphics Look Like The Boxart

We've had ASCII Doom and battle royale Doom, so why not boxart Doom? What the flaming fireballs does "boxart" mean? It means you tap into a neural network to transform Doom gameplay into, well... it's best if you see for yourself.

If you check out "the other blog" by a user with the handle "kingdomakrillic", you'll see the steady progression of Doom as it's warped by Google's DeepDream.

Take this terrifying snippet.

https://kingdomakrillic.tumblr.com/post/178161642291/95-done-with-deepdream-doom-at-this-point-eta-is

This work culminated in the short animation above, where it looks like you're running around inside the artwork of Doom's game box.

Here's the artwork for comparison.

Image: id Software

Unfortunately, you can't actually play the game like this. Not yet, anyway. The 15-second clip, consisting of 306 frames, took over six hours to render on an NVIDIA GTX 1050 Ti, which works out to more than a minute of rendering time per frame.

But who knows — in five or ten years, hardware might be powerful enough to do this in real-time.


the other blog [Tumblr, via Twitter]


Comments

    Looks nothing like the box art and is basically a blur and cartoonize/paint shader, which can be done in real time today.

    When writing an article, it might be useful to explain what you're talking about. I don't know what a neural network is, and I'm probably not the only one.
    Looks like a pseudo-hippy's bedroom wall minus the Che Guevara print.

      I doubt the author of this article understands what they're talking about, either. Looking at the linked blog, there's the rendered video of "boxart Doom" and there's a Deep Dream mod, and they are two separate things.
      The boxart Doom video is interesting: it uses a neural network (AI) training technique called style transfer to learn a particular "style" from an image (or group of images) and apply it to another image (or, in this case, a sequence of images rendered into a video). Style transfer is really cool; the process behind it is fairly involved and mostly academic (there's a rough sketch of the idea below this thread).
      Google Deep Dream is another AI experiment developed by Google. It was trained on animal images, and I'm not sure what its original purpose was, but it generally transforms images into trippy mashups of eyes and animal body parts. The blog mentions an in-progress mod. Looks trippy.

        Yeah, I dunno if you noticed, but if you look at the second image, at the top of the Cacodemon's head you can see a puppy's face!

      How did you miss the whole neural network / Deep Dream thing? It was pretty much everywhere when it hit a few years ago. As someone who's engaged enough to visit this site and comment, I find it hard to imagine you missed it.

      Explaining what a neural network is in every article that mentions one would be like explaining what swimming is every time a pool comes up. Google is right there, if you have somehow, against all odds, avoided coming across the subject over the past few years.
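
For the curious, the style transfer the commenter above describes usually boils down to an optimisation loop: a pretrained network measures the "style" statistics of a reference image (the box art) and the content of a gameplay frame, then the frame's pixels are nudged until its own statistics match both. Below is a minimal, hypothetical sketch in that spirit, assuming PyTorch and a recent torchvision (for the VGG19_Weights API). It is not kingdomakrillic's actual code or pipeline, and the file names doom_frame.png and doom_boxart.png are made up. Running a few hundred gradient steps like this for each of 306 frames is also why the render took hours on a 1050 Ti.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet statistics expected by the pretrained VGG network.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def load_image(path, size=512):
    """Load an image and convert it to a normalised 1xCxHxW tensor."""
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def gram_matrix(feat):
    """Channel-to-channel feature correlations: the 'style' statistics."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Pretrained VGG19, used only as a fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # early/mid conv layers capture texture
CONTENT_LAYER = 21                  # a deeper layer captures scene layout

def extract(x):
    """Run x through VGG, collecting style Gram matrices and content features."""
    grams, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            grams.append(gram_matrix(x))
        if i == CONTENT_LAYER:
            content = x
        if i == 28:                  # nothing past conv5_1 is needed
            break
    return grams, content

# Hypothetical file names; substitute a real gameplay frame and style image.
content_img = load_image("doom_frame.png")
style_img = load_image("doom_boxart.png")

target_grams, _ = extract(style_img)
_, target_content = extract(content_img)

# Start from the gameplay frame and optimise its pixels directly.
result = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([result], lr=0.02)

for _ in range(300):
    optimizer.zero_grad()
    grams, content = extract(result)
    style_loss = sum(F.mse_loss(g, t) for g, t in zip(grams, target_grams))
    content_loss = F.mse_loss(content, target_content)
    loss = 1e6 * style_loss + content_loss
    loss.backward()
    optimizer.step()

# result now holds the stylised frame (denormalise it before saving as an image).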

    They did it on a 1050 Ti, but "who knows, in 5 or 10 years the hardware might be powerful enough"?
    Umm... just try it now on a 1080 Ti in that case.
