By now we’ve seen everything from Fear and Loathing in Las Vegas to Donald Trump to popular memes processed by neural networks like Google’s Deep Dream.
They’re like bizarre drug trips, but without the drugs. But it was only recently that someone was curious enough to see what a neural network would make of the human equivalent of Ambien: painter Bob Ross.
Image processing programs like Deep Dream rely on neural networks and artificial intelligence to find and enhance patterns and details in images, often leading to bizarrely over-processed results. But the algorithms can also be tweaked so that the neural network tries to match parts of an image to other images, like a terrifying version of kids trying to recognise objects in random cloud shapes. It’s those results that will haunt your dreams, when paintings of beautiful landscapes become seething canvases of random creatures.
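For the curious: the core trick behind this kind of processing is gradient ascent on the image itself. The real Deep Dream runs a large trained network (Google used an Inception model) and nudges the pixels to amplify whatever a chosen layer already responds to, which is why dog faces and fractals bubble up everywhere. Here is a toy, framework-free sketch of that idea, where a single hand-made edge filter stands in for a "neuron" in the network:

```python
import numpy as np

def conv2d(img, k):
    """Valid 2D cross-correlation: one feature map from one filter."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def dream_step(img, k, lr=0.1):
    """One Deep Dream-style step: change the IMAGE (not the filter) so the
    filter's response gets stronger."""
    fmap = conv2d(img, k)
    # Gradient of sum(fmap**2) w.r.t. the image: scatter 2*fmap back
    # through the kernel (hand-derived backprop for this tiny "network").
    grad = np.zeros_like(img)
    kh, kw = k.shape
    for i in range(fmap.shape[0]):
        for j in range(fmap.shape[1]):
            grad[i:i + kh, j:j + kw] += 2 * fmap[i, j] * k
    # Gradient ASCENT: nudge pixels toward whatever excites the filter.
    return img + lr * grad / (np.abs(grad).max() + 1e-8)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))           # stand-in for a Bob Ross frame
k = np.array([[1.0, 0.0, -1.0]] * 3) / 3.0    # toy vertical-edge "neuron"

before = conv2d(img, k)
dreamed = img
for _ in range(20):
    dreamed = dream_step(dreamed, k)
after = conv2d(dreamed, k)
# The feature-map energy grows: the image now "contains" more of the
# pattern the filter detects, which is the hallucination effect in miniature.
```

The real system adds tricks on top of this loop (running at multiple image scales, jittering the image between steps, and using features from a deep trained model rather than one filter), but the amplify-what-you-already-see mechanism is the same.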
This is all to say: This video is absolutely horrifying, but you should watch it anyway.
This story originally appeared on Gizmodo
Comments
5 responses to “And Here’s Bob Ross, As Seen By A Neural Network”
If this is how AI sees us in the future when they become sentient, well, we are done for!
It seems to have a bias for interpreting everything with fractals and dog faces.
Seems to be down to bad programming more than anything else?
Or really good programming. This may be a closer representation of reality, and what we perceive is a distilled analogue shaped within our own narrow reality tunnels. A little DMT will break those walls down for you.
Hahaha! Computers use vastly simplified models in order to do this sort of thing. They have light years to go before they can break down reality for us 😀
So what would happen if you did this to a Cyriak video…