Image: NVIDIA / ARK Survival Evolved
This is a picture of a dragon from ARK: Survival Evolved. It’s not what an ARK dragon normally looks like, but it’s what can happen when you blend in-game screenshots with machine learning.
At its pre-briefing ahead of Gamescom, the GPU manufacturer announced it was developing an update to its Ansel in-game photography tools that would let users blend their screenshots with the style of preferred artists.
Due out in the final quarter of the year, the update will let you import a picture that NVIDIA’s machine learning then analyses. Called AI Style Transfer, the feature applies the “style” of the selected photo atop your in-game screenshot.
It only works within the Ansel overlay for now, which means the feature is restricted to games that support NVIDIA Ansel. Here’s what it looks like from the user’s perspective:
The range of options is pretty limited, and you can see in the photo that the feature still needs a UI pass. But the principle is real weird, and super cool at the same time.
Imagine, for instance, giving Battlegrounds the Jackson Pollock treatment. Or seeing what the styles from the latest Archibald winner would look like applied to Mirror’s Edge: Catalyst. The Witness already has a pretty distinctive style, but it could be fun to revisit with an abstract lens, too.
The only downside here is that the list of games that support Ansel is still pretty short. Still, for the games that it does support, this is a neat way to produce some … unusual shots.
The author travelled to Gamescom 2017 as a guest of NVIDIA.
Comments
2 responses to “Machine Learning Is Here To Make Your Screenshots Real Weird”
I may not know art but it looks like someone ran a screenshot through a bunch of random Photoshop filters.
Assuming it is the same kind of thing as Google’s Deep Dream stuff from a few years back, it is running the image through a number of filters.
It’s basically taking a neural network that has been trained to detect certain objects, then running it backwards with the image as input and repeating the process multiple times. It can end up turning features of the image the neural network half-recognises into more recognisable versions of those objects.
The model Google released for Deep Dream was trained on recognising a bunch of dog breeds, so it tended to find dogs in your images.
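The loop the comment describes can be sketched with a toy example. This is not Google’s actual Deep Dream code (which uses a deep convolutional network like Inception and backpropagation); it’s a minimal stand-in with a single linear “feature detector” with made-up random values, just to show the idea of holding the network fixed and repeatedly nudging the image so the detector fires more strongly:

```python
import numpy as np

# Toy sketch of the idea in the comment above: instead of training the
# network's weights, we hold a fixed "feature detector" constant and do
# gradient ascent on the *image* so the detector responds more strongly.
# Real Deep Dream does this with a deep CNN; "feature" here is a
# hypothetical random filter standing in for, say, a dog detector.
rng = np.random.default_rng(0)

image = rng.normal(size=(8, 8))    # the "screenshot" being modified
feature = rng.normal(size=(8, 8))  # fixed detector (the trained network)

def activation(img):
    # How strongly the detector responds to the image.
    return float(np.sum(img * feature))

before = activation(image)
for _ in range(20):
    # For this linear detector, the gradient of the activation with
    # respect to the image is just the filter itself, so each step
    # blends a faint copy of the feature into the image.
    image += 0.1 * feature
after = activation(image)

# After repeating the process, the detector fires more strongly: the
# image has drifted toward what the "network" recognises.
assert after > before
```

The repeated small updates are why half-recognised features get amplified into recognisable objects: each pass pushes the image a little further toward whatever the network already thinks it sees.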