A team of scientists led by the Australian National University has developed what they call a “nano device” for gaming consoles, which they claim can speed up rendering in consoles and graphics workstations.
The new invention, according to ANU senior researcher Professor Dragomir Neshev, is a tiny antenna 100 times thinner than a human hair. The professor says that the device can accelerate “the exchange of data between the multiple processors in the console”.
If implemented by the major platforms, it could go some way to resolving one of the biggest issues modern gaming consoles face: underpowered CPUs. The GPU power of consoles has come a long way in recent years, although small form factors and challenges around cooling have meant console CPUs haven’t been able to advance quite as far. It’s a problem Digital Foundry touched on recently when looking forward to the next generation of consoles.
“Our invention can be used to connect these processors with optical wires that will transmit data between processors thousands of times faster than metal wires,” Professor Neshev explained. “This will enable smooth rendering and large-scale parallel computation needed for a good gaming experience.”
It’s basically a door into the world of optical computing, one that computer makers have been staring at for a little while. Advancing beyond copper-bound computers has been held back by three basic issues: power, heat and size. Using optics to transfer information is vastly faster than using electrical signals, but it raises a whole set of problems around transferring that information reliably, which in turn increases the heat generated and the cooling needed.
The invention was developed in collaboration with Germany’s Friedrich-Schiller-Universität Jena, the Leibniz Institute of Photonic Technology, and Technische Universität Darmstadt. The findings were published in the journal Science Advances, and Professor Neshev added that the device could also benefit workstations used for special effects and animation rendering.
Precisely how long it would be before manufacturers could use this new tech, however, is unknown. Given that the processors in the PS4 and Xbox One (along with their higher-powered variants) are designed by AMD, or NVIDIA in the Switch’s case, any implementation would undoubtedly fall on them. And CPU development is often mapped out years in advance: the Xbox One X and PS4 Pro, for instance, still use AMD’s older-generation Jaguar cores rather than the newer Ryzen architecture.
I’ve asked ANU whether the researchers have been in contact with the aforementioned companies, and if there’s anything interesting to report there I’ll let you know. My suspicion is that any advancement is probably well beyond the reach of the next generation. And probably the one after that. But the world of optical computing is a lot closer today than it ever has been – and once we’re there, we might finally be in a place to restart Moore’s Law.
Update: When asked if the researchers had gotten in touch with any of the CPU manufacturers yet, Professor Dragomir Neshev replied with an update:
Under the Australian Research Council Centre of Excellence for Ultrahigh bandwidth Devices for Optical Systems (CUDOS), we have worked with researchers from Intel and have established some of the industry requirements for possible translation of the technology to real world manufacturing. For example, our device is built on a silicon platform which is the system of choice for optical interconnects used by Intel, IBM and other industry leaders. So the technology is applicable for integration with large-scale chip production.
Some further steps still need to be made to achieve complete CMOS (complementary metal-oxide-semiconductor) capability, and we are open for discussions with the industry on the implementation of our nanoantennas in the silicon photonic industry.
Interesting.
Comments
17 responses to “The Australian Device That Could Transform Console Gaming”
This seems like a general purpose computing advancement, not anything specific to consoles. Did I miss something?
The researchers are pitching it as a device for console gaming, which might be their intent for its initial application. But I suspect much the same as you – this is a device that’s helpful for computing more broadly, not consoles specifically, although the fixed nature of consoles might just mean that it’s easier to start there first.
I’ve asked if they’ve reached out to manufacturers, but not heard back, although this is the kind of thing that cooks in the oven for years on end. There will be a development down the road at some stage, especially once AMD and Intel start pushing out 32-core CPUs and the like (and they need a better solution than just dropping the frequency of all those cores to 1GHz or something ridiculous).
It will be interesting to see if this develops into anything noteworthy, especially considering that the CPU is the current bottleneck in console gaming.
I thought the speed of an electron in copper was 99.99% the speed of light.
The advantage of using light is that you can transmit multiple signals on one fibre and have them sent/extracted by different “receivers” at the other end.
Electrons moving through a wire don’t actually move very fast at all [we’re talking less than 1cm/s as a first order approximation]. It’s the signal which propagates near the speed of light, i.e. push an electron in at one end and one pops out the other.
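To put a rough number on that drift speed, here’s a quick back-of-the-envelope sketch in Python. The wire size, current and carrier density are assumed illustrative values (textbook figures for copper), not numbers from the article:

```python
# Rough illustration (assumed values): electron drift velocity in copper
# for a 1 A current through a 1 mm^2 wire. The *signal* travels near
# light speed; the electrons themselves crawl.

E_CHARGE = 1.602e-19   # electron charge, coulombs
N_COPPER = 8.5e28      # free-electron density of copper, per m^3 (textbook value)
AREA = 1e-6            # wire cross-section, m^2 (1 mm^2)
CURRENT = 1.0          # amperes

# I = n * A * v * e  =>  v = I / (n * A * e)
drift_velocity = CURRENT / (N_COPPER * AREA * E_CHARGE)
print(f"drift velocity: {drift_velocity * 100:.4f} cm/s")
```

Run it and the drift velocity comes out on the order of thousandths of a centimetre per second, comfortably inside the “less than 1cm/s” claim above.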
That said, using optical signals rather than electrical increases bandwidth in a number of areas. For example, in processor interconnects, the main way to increase bandwidth is to increase the number of connections. This requires increasing density and decreasing size of each wire. You start to hit big challenges with interference when you get down to a certain scale. But optical pathways are not vulnerable to external interference in the way electrical pathways are, which allows a much greater density.
also heat plays a factor in speed… LN2 cooling makes those processors go extra fast while a warm one is sluggish (relative terms)
optical defeats heat at least for the time being
Re: improving consoles… is bandwidth between the CPU and GPU really the issue, though? Because if I’m understanding this correctly, that’s all this new thing improves. From what I understood, it’s not bandwidth between the CPU and GPU that’s the issue with consoles currently; it’s the fact that CPUs are relatively underpowered versus the GPU. As things currently stand, the gains would be marginal.
It’s more about speed than bandwidth. This will push data faster than it can be sent over current electrical wiring. If the CPU can get the GPU started faster and the GPU can report results back faster it makes the rendering process shorter. I also think this could be applied between the CPU/GPU and the memory controller for faster access to memory.
The gains will be based on the time saved in communication, and how often the CPU and GPU interact.
Bandwidth is (effectively) speed. That’s why it’s measured in units per second.
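The distinction the thread is circling can be sketched with a toy latency-plus-bandwidth model of a link. All the numbers here are assumptions for illustration (the link latency, the roughly PCIe-3.0-x16 bandwidth figure, and the payload sizes), not figures from the article:

```python
# Toy model (illustrative numbers only): time to move a payload over a
# link is latency plus size divided by bandwidth. Small, frequent
# messages like draw-call commands are latency-bound; big assets like
# textures are bandwidth-bound.

def transfer_time(payload_bytes, latency_s, bandwidth_bytes_per_s):
    """Time in seconds to deliver a payload over a simple link."""
    return latency_s + payload_bytes / bandwidth_bytes_per_s

LATENCY = 1e-6     # 1 microsecond link latency (assumed)
BANDWIDTH = 16e9   # 16 GB/s, roughly PCIe 3.0 x16 (assumed)

draw_call = transfer_time(256, LATENCY, BANDWIDTH)          # tiny command
texture = transfer_time(64 * 1024**2, LATENCY, BANDWIDTH)   # 64 MB asset

print(f"256 B draw call: {draw_call * 1e6:.2f} us (latency-bound)")
print(f"64 MB texture:  {texture * 1e3:.2f} ms (bandwidth-bound)")
```

With these numbers the draw call’s delivery time is almost entirely latency, while the texture’s is almost entirely bandwidth, which is why “faster wires” can mean different things depending on the workload.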
My point was that with current console tech, the CPU is the limiting factor more than bandwidth between CPU and GPU. In cases where the GPU is waiting for the CPU, it’s not because the CPU can’t get the data to the GPU fast enough, it’s because it can’t process the data.
From my understanding, the data transfer time as a proportion of rendering time is incredibly small.
Certainly as CPUs catch up, the need for high bandwidth connection between CPU and GPU increases. But we’re a ways off from there.
as the GPU processes, it wants draw calls from the CPU
pretty soon GPUs will outpace CPUs in consoles… if we keep going the way it is now in small form factors
if the GPU draw calls don’t get processed fast by the CPU then the GPU will skip frames
That’s my point. Making data go faster between the CPU and GPU doesn’t make the CPU any faster. If the CPU can’t keep up with the demand of the GPU, getting the data across quicker just means the GPU is idling for longer.
yep
less heat will allow stacked chips, like HBM and SSDs
It’s a problem which is slowing the progression of all computing technology.
https://en.wikipedia.org/wiki/Interconnect_bottleneck
I suppose they are using consoles as an example for a couple of reasons.
1) They are an example of systems which are at the forefront of this problem. While they can take advantage of HSA with the CPU and GPU on the same die, the big limitation is memory bandwidth.
2) Putting it in the context of gaming consoles has the potential for a huge PR boost, allowing comprehension and interest by a large portion of the general population. I don’t mean that in a condescending way, either. This is front-line academic stuff, so making it as accessible as possible is a great way to keep interest in their work.
I guess my point is that (again from what I understood) consoles aren’t bottlenecked by the GPU/CPU link. Memory, to some extent, but not the processors. CPU tech just hasn’t advanced in the same way GPU tech has in the past few years. As evidenced by the fact that several-year-old processors are as competent as newer ones in driving current-gen GPUs.
If the GPU can already process more data than the CPU can spit out, getting data from the CPU to the GPU faster just means the GPU idles longer….
Cooling’s a big problem. When you’ve got a small box and you can’t pop a cooler on the CPU (or APU), and you know you’re building a device that may not be in the most ventilated space, that’s really limiting from a design aspect.
Spot on as well @edenist.
Can it run Crysis?
ohh nice story… optical processors are always a good read…. bookmarked