How Close Science Is To A Real-Life ‘Heads Up Display’

In computer games, you usually know things that your avatar could not, like the locations of enemies on a mini-map.

This can break immersion, so game developers increasingly try to justify it with technology from within the (usually science fiction) setting. For example, the player might be given a helmet or goggles with sensors that provide visual feedback, as in the Detective Mode of Batman: Arkham Asylum.

Batman: Arkham Asylum’s Detective Mode.

The beginnings of this sort of technology are already becoming available to ordinary people with Google Glass. More and more, though, sci-fi games have started to include the notion of neural implants which can feed information directly to the brain. A fantastic recent example is Deus Ex: Human Revolution (DXHR), in which radar, wall-penetrating imaging, social analysis data and so on are projected directly onto lead character Adam Jensen’s retina.

How realistic are these neural implants though?

A recent article on Thwacke discussed the science of neural implantations in DXHR. Most of this technology is well beyond what is currently possible, but there is one sense in which science is ahead of fiction. In DXHR, all the extra-sensory data is converted to visual information and projected onto the retina, but it may be that in the future ‘real’ neural implants will do something much more interesting than this.

Rather than just presenting visual representations of data, implants could insert information directly into the brain. This has two big advantages over a visual display: first, because the information arrives along a parallel, dedicated pathway, the user could respond more quickly, without having to interpret an image and then decode it into the information they actually need. Second, it reduces the cognitive effort required from the user, making it feel effortless.

Deus Ex: Human Revolution: social analysis implant in action.

What makes this possible is the remarkable ‘plasticity’ of the brain: its ability to reconfigure itself based on what task it needs to perform. Folklore has it that blind people develop enhanced hearing to compensate for their loss of vision, and there does seem to be some evidence that this can happen; animal studies suggest that some of the visual processing areas of the brain can be recruited to work on auditory processing (or vice versa in deaf animals). In other words, an area of the brain normally devoted to processing one type of input can adapt to cope with a very different type of input. There are limitations, though: some forms of plasticity only occur in young animals, but this is something that future research might be able to overcome.

Recent research has shown that this sort of brain plasticity can indeed allow neural implants to communicate information directly to the brain, and to read out responses. In an extraordinary series of experiments, Miguel Nicolelis and colleagues used the activity of neurons (brain cells) in a monkey’s brain to control an avatar in a virtual environment. The monkey could not see, hear or touch this virtual environment; the only information it received came from direct stimulation of some of its neurons by electrodes inserted into its brain.

Despite this, after training with positive feedback (rewards for performing tasks), the monkey could meaningfully control its avatar in the virtual environment. This is extraordinary given that it received no information about the environment through its normal senses, only through the activity of the randomly chosen neurons stimulated by the electrodes.

A monkey virtual reality avatar in an experiment by Miguel Nicolelis.
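
To give a flavour of the ‘read-out’ half of such an experiment, here is a toy sketch in Python: it generates synthetic firing rates for a population of made-up neurons and fits a simple linear decoder that maps those rates back onto the avatar’s velocity. Everything here (the neuron count, the tuning model, the decoder) is an illustrative assumption, not the method actually used in the Nicolelis experiments.

```python
import numpy as np

# Toy sketch of the "read-out" half of a brain-machine interface:
# fit a linear decoder from the firing rates of a population of neurons
# to the 2D velocity of an avatar. All data here are synthetic.

rng = np.random.default_rng(0)

n_neurons = 50    # number of recorded neurons (arbitrary choice)
n_samples = 2000  # number of time bins of recorded activity (arbitrary)

# Pretend each neuron has a random "preferred direction", so its firing
# rate depends linearly on the avatar's true velocity, plus noise.
preferred = rng.normal(size=(n_neurons, 2))
true_velocity = rng.normal(size=(n_samples, 2))
rates = true_velocity @ preferred.T + rng.normal(scale=0.5, size=(n_samples, n_neurons))

# Fit a linear decoder by least squares: velocity ~= rates @ W
W, *_ = np.linalg.lstsq(rates, true_velocity, rcond=None)

# Decode velocity from the neural activity and measure the error
decoded = rates @ W
error = np.mean(np.linalg.norm(decoded - true_velocity, axis=1))
print(f"mean decoding error: {error:.3f}")
```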

Humans have long sought to enhance their senses with technology, and some of these enhancements have relied on brain plasticity. The telescope is the most obvious example of an enhanced sense, and it can be used without much training, suggesting that little plasticity is involved. There are, however, some wonderful examples of enhanced hearing that did rely on plasticity, dating from the 19th century and the beginning of the 20th, involving the localisation of sounds. Sailors in foggy conditions had to rely on sound to work out the locations of other objects, but humans are not naturally very good at localising sounds (unlike the owl, which can tell where a mouse is from the sounds it makes, to within a few degrees), so an enhancement was needed. Thus, the “topophone” was invented in 1880:

Professor Mayer’s topophone, 1880. Used for navigation in fog.

One of the ways we can tell the direction a sound is coming from is to use the difference in arrival time of a sound between our two ears. A sound coming from the left arrives at the left ear a few hundred microseconds (yes, that’s millionths of a second) before it arrives at the right ear, and we can infer the direction from the time difference. However, the time differences are very small and we are not very sensitive to them. The idea of the “topophone” above is to increase the distance between our “ears” to increase our sensitivity to direction. The same principle was used by gunners attempting to shoot down bombers in World War 1 before the invention of radar:

Enhanced sound localisation for spotting bombers, 1921.
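
To put rough numbers on those time differences, a common simplification treats the delay as the extra path length divided by the speed of sound: ITD ≈ d·sin(θ)/c, where d is the separation between the two receivers and θ is the angle of the source from straight ahead. The sketch below uses assumed separations (roughly 20 cm for the unaided head, 1 m for a topophone-like device) purely for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at ~20 degrees C

def interaural_time_difference(angle_deg, separation_m):
    """Delay in microseconds between the two receivers for a distant
    source at angle_deg from straight ahead, using the simple
    path-length-difference model: ITD = d * sin(theta) / c."""
    theta = math.radians(angle_deg)
    return separation_m * math.sin(theta) / SPEED_OF_SOUND * 1e6

# Unaided head: ears roughly 20 cm apart (an assumed figure)
print(interaural_time_difference(30, 0.20))  # about 290 microseconds
# Topophone-style device with horns about 1 m apart
print(interaural_time_difference(30, 1.00))  # about 1460 microseconds
```

Widening the spacing multiplies every time difference by the same factor, which is exactly why the topophone and the anti-aircraft listeners made direction easier to judge.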

It takes several weeks for your brain to reconfigure itself to new auditory inputs. Hofman and colleagues demonstrated this (subscription required for the full article), along with their dedication to science, by wearing fake ear molds all the time until they could accurately localise sounds again, which took about a month.

As well as enhancing senses, people have been given new senses. In the feelSpace project, people volunteered to wear a belt that vibrates on whichever side of their body is facing north. After a few weeks of training, they reported ceasing to feel the vibrations at all and instead just knowing which way was north. Alarmingly, after the project ended many of them reported feeling lost and like the world had shrunk around them.
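
One way to picture the belt’s logic: a ring of vibration motors around the waist, with whichever motor currently points north switched on. The sketch below is a guess at that logic for illustration; the number of motors and their layout are assumptions, not details of the actual feelSpace hardware.

```python
def motor_for_heading(heading_deg, n_motors=12):
    """Return the index of the belt motor that currently points north,
    given the wearer's compass heading in degrees.
    Motor 0 sits at the front; indices increase clockwise."""
    # Relative to the wearer's front, north lies at (-heading) degrees.
    relative = (-heading_deg) % 360
    return round(relative / (360 / n_motors)) % n_motors

print(motor_for_heading(0))    # 0: facing north, the front motor vibrates
print(motor_for_heading(90))   # 9: facing east, a motor on the wearer's left vibrates
```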

Going back to computer games: why should we be limited to the idea of neural implants that present information only visually? Partly, the in-game implant is just a metaphor for what current technology can deliver: immersive technology is rapidly improving (think of the Oculus Rift), but we can’t actually implant our gamers, at least not yet. So we have to work with the information we can present to them, and visual information is the easiest to work with.

We shouldn’t be content with that, though; we should demand more creative metaphors. To be fair, some games have experimented with this. It’s fairly common now to see low health indicated not by a bar on the screen, but by blurred, grayed-out vision or blood splatters on the screen.

The Saboteur, showing regions with strong or weak Nazi influence in B&W and colour.

A nice example was The Saboteur, where you play as a member of the Resistance in Paris during WW2. At the beginning of the game, everything is in black and white, indicating the influence of the Nazis, but as the player performs visible acts of resistance, colour returns to the city, indicating some level of liberation in those areas.
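
The underlying visual trick is simple desaturation: blend each colour toward its grey value by an amount that depends on how ‘liberated’ the area is. The snippet below is a minimal sketch of that idea, not The Saboteur’s actual rendering code.

```python
def apply_influence(rgb, liberation):
    """Blend an RGB colour toward greyscale: liberation = 0.0 means fully
    Nazi-controlled (black and white), 1.0 means fully liberated (full colour)."""
    r, g, b = rgb
    # Luminance-weighted grey value (standard Rec. 601 weights)
    grey = 0.299 * r + 0.587 * g + 0.114 * b
    mix = lambda c: grey + (c - grey) * liberation
    return (mix(r), mix(g), mix(b))

print(apply_influence((200, 50, 50), 0.0))  # fully grey
print(apply_influence((200, 50, 50), 1.0))  # original colour
```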

So what about using sound to convey information? Some games already do, to an extent: the proximity of enemies might be indicated by the intensity of the music, for example, but this is usually done to heighten emotion rather than to provide information necessary to play the game. One extraordinary example of using sound to convey information is AudioQuake, a mod of the original Quake for blind or partially sighted people. Using beeps, tones and various other ‘auditory icons’, the game gives information about the geometry of the level and the proximity of different types of enemies.

Amazingly, it turns out that with training, this is enough information to be able to play Quake with no visual information at all! The same idea could be used in any game. The game would provide extra, useful information like the position or type of enemy in subtle aspects of the sound, and because the player is highly motivated to make use of this information, brain plasticity would ensure that they could soon use it almost instinctively (in-game tutorials could help with this). Not only would this be better science, but it might make for a better game too.
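
As a rough illustration of how a game might encode enemy position in sound, here is a sketch that maps an enemy’s bearing to stereo pan and its distance to volume. The particular mapping is an assumption made up for illustration; AudioQuake’s real ‘auditory icons’ are considerably more sophisticated.

```python
import math

def enemy_sound_cue(player_pos, player_facing_deg, enemy_pos, max_range=50.0):
    """Map an enemy's position to audio cue parameters: stereo pan from
    its bearing relative to the player, volume from its distance."""
    dx = enemy_pos[0] - player_pos[0]
    dy = enemy_pos[1] - player_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) - player_facing_deg
    # Pan: -1 = hard left, +1 = hard right
    pan = max(-1.0, min(1.0, math.sin(math.radians(bearing))))
    # Volume falls off linearly with distance; silent beyond max_range
    volume = max(0.0, 1.0 - distance / max_range)
    return {"pan": pan, "volume": volume}

# An enemy 10 m ahead and slightly to the right of a player facing "north"
print(enemy_sound_cue((0, 0), 0, (3, 10)))
```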

Games are getting a lot right with neural implants, but the reality might turn out to be a lot more interesting than the fiction. Developers have an opportunity to convey this not only through the fictional setting, but also through game design itself, by conveying critical game information in creative, multi-sensory ways.


Dan Goodman is a consultant for Thwacke and a postdoctoral fellow in computational neuroscience at Harvard University. For more, follow us on Twitter, Google+ and Facebook.

