It goes without saying that Intel is fully on board with Direct3D 12, the next revision of Microsoft’s graphics API, destined for release in the latter half of 2015. Until then, we won’t see benchmarks from the enthusiast sites, so we’ll have to be happy with numbers from Redmond itself or, in this case, from chip maker Intel.
It was once the case that the performance of Intel’s integrated graphics was rubbish, sacrificing grunt for lowered power consumption and heat generation. In the last few years, however, the company has stepped up its game — 3D graphics are now just another part of the user experience, accelerating everything from video decoding to browser engines.
With the release of the Sandy Bridge, Ivy Bridge and Haswell platforms, the speed of Intel’s 3D parts has improved remarkably. With the company’s full weight behind the development of better graphics hardware, it’s actually kind of exciting to imagine where it will sit in the benchmarks a couple of years from now.
And those benchmarks will be helped by getting a head start with Direct3D 12. And, as sure as The Big Bang Theory will be picked up for another 1300 seasons, Intel already has a tech demo showing how much better its hardware performs on the unreleased API. It was on display at this year’s SIGGRAPH in Vancouver, which wrapped up late last week.
Because Intel is mostly interested in efficiency — obtaining the same or better performance with less heat and power — the demo features two modes: frame unlocked and locked. The former allows the demo to run as fast as it can, maxing out the resources of the CPU and GPU, while the latter locks the demo at a desired framerate, allowing the efficiency of each API to be measured properly.
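The locked mode is essentially a frame limiter: render, then idle away whatever remains of the frame’s time budget. Here’s a minimal sketch of that idea in Python (the demo itself is a native D3D app; `TARGET_FPS`, `run` and `render_frame` are illustrative names, not Intel’s code):

```python
import time

TARGET_FPS = 30  # illustrative target; the demo locks to whatever rate is chosen

def run(render_frame, locked=True, frames=10):
    """Run a render loop either unlocked (flat out) or locked,
    sleeping off the remainder of each frame's time budget."""
    frame_budget = 1.0 / TARGET_FPS
    frame_times = []
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()  # stand-in for the actual per-frame work
        elapsed = time.perf_counter() - start
        if locked and elapsed < frame_budget:
            # Idling here is what lets the CPU and GPU drop to lower
            # power states instead of racing ahead.
            time.sleep(frame_budget - elapsed)
        frame_times.append(time.perf_counter() - start)
    return frames / sum(frame_times)  # average fps achieved
```

In unlocked mode the loop simply never sleeps, so the achieved framerate is limited only by the hardware, which turns the same workload from a throughput benchmark (unlocked) into an efficiency benchmark (locked).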
The demo was run on a Surface Pro 3 (this is Intel we’re talking about, remember?) and features 50,000 “fully dynamic and unique asteroids”. Certainly not thrilling, but it gets the point across. The objects themselves have to be managed by the CPU, while the rendering is handled by the GPU, so it’s a good balance between the two components. The Pro 3 has a 4th-generation (Haswell) CPU from Intel, so it’s about as good as it gets for mobile processors.
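The split of labour is easy to picture: each frame, the CPU advances the simulation for every asteroid, then hands the results to the GPU to draw. A toy version of the CPU side might look like this (the structure and numbers here are illustrative, not from Intel’s demo, apart from the asteroid count):

```python
import random

NUM_ASTEROIDS = 50_000  # matches the count Intel quotes for the demo

def make_asteroids(n=NUM_ASTEROIDS):
    # Each asteroid is "fully dynamic and unique": its own state, advanced every frame.
    return [{"pos": random.random(), "vel": 0.1 + random.random()} for _ in range(n)]

def update(asteroids, dt):
    # CPU-side per-frame work; in the real demo the GPU then renders the result.
    for a in asteroids:
        a["pos"] += a["vel"] * dt
```

Tens of thousands of independent updates per frame is exactly the kind of load that keeps the CPU honest while the GPU handles the drawing, which is why the demo exercises both components.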
The first run of the demo was conducted using D3D 11 in unlocked mode, where it hit 19fps while maxing out the CPU and GPU. The API was then switched to D3D 12, where it reached 33fps — a gain of roughly 70 per cent.
The neat part of Intel’s demo is that it shows the power consumption of the CPU and GPU in the bottom right-hand corner. When it kicks over to D3D 12, you can see that the CPU is doing less work, allowing the GPU to throttle up and subsequently boost the framerate.
Something to keep in mind is that Intel’s chips — and indeed any silicon designed for low-power, thermally-constrained environments — tend not to run at top speed all the time. It’s why Apple and Samsung frequently underclock the hardware in their mobile phones, but are more lenient with tablets.
The processors operate within a heat envelope, adjusting their speed and power consumption to stay within certain limits. In Intel’s case, the CPU and GPU share physical space on the processor die, so upping the speed on one naturally makes the other component hotter and running them both at maximum is a recipe for a very hot, battery-draining device.
Intel’s next step is to stay with D3D 12, but lock the framerate at 19fps to match D3D 11. The difference in the power graph is immediately noticeable; whereas D3D 11 required everything going full-ball to hit 19fps, D3D 12 reduced the CPU’s involvement to about a quarter of what it was, while maintaining the same performance.
Here’s how Intel explains the change:
These increases in power efficiency in DirectX 12 are due both to reduced graphics submission overhead and to an increase in multithreaded efficiency. Spreading work across more CPU cores at a lower frequency is significantly more power-efficient than running a single thread at high frequency.
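Intel’s claim about cores and frequency follows from a textbook approximation: dynamic CPU power scales with capacitance times voltage squared times frequency, and supply voltage itself scales roughly with frequency, so power per core goes roughly as the cube of the clock speed. A back-of-the-envelope illustration (the cubic model and the frequencies are assumptions for the sake of the arithmetic, not Intel’s measured data):

```python
def relative_power(cores, freq_ghz):
    # P ~ C * V^2 * f, with V scaling roughly with f, gives
    # P ~ f**3 per core (a rough model, ignoring static leakage).
    return cores * freq_ghz ** 3

# Same total throughput (4 x 0.75 GHz = 1 x 3.0 GHz of "core-GHz"),
# very different power draw under this model:
single_thread = relative_power(1, 3.0)     # 27.0 units
four_threads = relative_power(4, 3.0 / 4)  # 1.6875 units
```

Under those assumptions, spreading the same work over four slower cores draws a sixteenth of the power, which is the effect D3D 12’s better multithreading lets the hardware exploit.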
It’s easy to focus on the raw speed benefits of an updated graphics API, or all the shiny new stuff it allows developers to do, but in the case of D3D 12, we’ll also see cooler, longer-lasting portable gadgets. It also makes it simple to understand why Apple is pushing ahead with its own flavour of graphics API in Metal, much to the chagrin of those comfortable with OpenGL ES.
Intel finishes off by stating it’ll release the demo to the public once Direct3D 12 becomes available, though I’m expecting by then we’ll have plenty of competition when it comes to compatible benchmark suites.
SIGGRAPH 2014: DirectX 12 on Intel [Intel]
Images: Intel
Comments
9 responses to “Your Next Direct3D 12 Tech Demo Is Courtesy Of… Intel?”
Can the XBone run newer version of Direct3D or is it dependent on newer hardware?
The Xbox One will be able to run D3D 12 and Microsoft is planning to bring it across (well, for now at least): http://blogs.msdn.com/b/directx/archive/2014/03/20/directx-12.aspx
If I remember correctly, anything that supports DirectX 11.2 (PS4 and X1) can get 12 through a software update.
The PS4 actually has its own “close-to-the-metal” API, called GNM (and a high-level wrapper for it for those who are less comfortable with that amount of control): http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4
So you could say Sony beat Microsoft and AMD to the punch, though its tech is for a single platform.
interesting???
I didn’t notice that this article had anything to do with PS4. It was simply about performance gains using DX12
Someone asked if Xbox can use it and a reply was yes
Though since we are on the topic
PS4 can also use DX12 but Sony doesn’t much support it. Probably because it wants developers to stay using its own APIs and keep games on one platform for smaller developers.
Though to highlight your point: I don’t believe you work at Sony, and it’s nice to quote some APIs they developed, but we are yet to see any proof of their results from an INDEPENDENT source (yes, Intel is independent)
So please keep your off-tangent posts to yourself and stick to N4G. Only grownups post here
Thank you
I want to reply to your post RJ, but I have absolutely no idea what point you’re trying to make, sorry.
Intel putting its expertise into graphics processing will be massive competition for Nvidia. Look out!
I agree, though the last time it had a crack we got Larrabee: http://en.wikipedia.org/wiki/Larrabee_(microarchitecture). Though it’s certainly done better since then.
True. With Intel putting in some serious R&D this time around, we are going to see some state-of-the-art stuff over the next decade. Will be good to see Intel shake off its old reputation for lacklustre graphics hardware 🙂