Nvidia GTC 2017 Keynote Live Blog: Follow All The News As It Happened


Nvidia’s GTC keynote is about to kick off in San Jose, California. Follow our live blog for all the announcements as they happen.

You know the drill: we’ll be updating this page in real time from the conference hall, so keep refreshing for the latest updates!

All times are in AEDT


We’re being shown a video spruiking the wonders of AI. Impressively, the accompanying music was “written” and performed by computers.


Jensen Huang has taken to the stage and gets straight to the point: Moore’s Law is finished. Adding transistors to chips is no longer cost effective. Processor performance used to grow at 50 per cent a year; now it’s 15 per cent. This is what prompted the launch of CUDA back in 2006: Nvidia’s parallel computing platform and application programming interface, which opened up the GPU to general-purpose parallel workloads. Jensen calls GPU computing “Moore’s law squared”.


We have our first product announcement: a new version of the collaborative VR environment Project Holodeck. This allows people from around the world to interact with each other and photo-realistic models in virtual reality. We were shown an example of a Hypercar prototype which various Nvidia employees played about with in real time (x-ray mode, interior view, etc.). Early access will be available from September 2017.


We’re now talking about the “big bang” of deep learning, which has been greatly assisted by big data and harnessing the power of GPUs. We’re getting a list of awesome deep learning achievements from the past year – self-driving cars, adversarial learning, transfer learning, etc, etc.


Jensen is showing a cool demo of how deep learning and ray tracing can be used to turn distorted, noisy images into perfect photos. The AI software is sophisticated enough to recognise objects – including reflections of trees in car paint – and fill in the missing information itself to produce the finished picture.


We’re now getting a sales pitch on how Nvidia plans to “power the AI revolution”. With lots of GPUs, tech partnerships and deep learning SDKs, basically. Interesting tidbit: over $5 billion was invested in AI startups in 2016 – a growth of 900% in the past four years.



Introducing the Tesla V100: the most advanced deep learning GPU ever built, with an all-new Tensor Core. Check out these specs! 21B xtors | TSMC 12nm FFN | 815mm² | 5120 CUDA cores | 7.5 FP64 TFLOPS | 15 FP32 TFLOPS | 120 Tensor TFLOPS | 20MB SM RF | 16MB cache | 16GB HBM2 @ 900GB/s | 300GB/s NVLink. Again, that’s 120 teraflops of tensor operations.
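As a back-of-the-envelope sanity check, the headline FP32 figure follows from the core count at a boost clock of roughly 1.455GHz. Note the clock speed isn’t in the spec list above, so treat it as our assumption:

```python
# Rough sanity check on the Tesla V100 headline numbers.
# The boost clock (~1.455 GHz) is an assumption on our part;
# the keynote slide gives only core counts and TFLOPS figures.
cuda_cores = 5120
boost_clock_ghz = 1.455  # assumed, not from the keynote slide

# Each CUDA core can retire one fused multiply-add (2 FLOPs) per cycle.
fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(round(fp32_tflops, 1))  # ~14.9, matching the quoted 15 FP32 TFLOPS

# FP64 throughput is half-rate on this part.
fp64_tflops = fp32_tflops / 2
print(round(fp64_tflops, 1))  # ~7.4, a whisker off the quoted 7.5 TFLOPS
```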


The Tesla V100 is an enterprise GPU – but it still does nifty game graphics. We were shown a character model from the Square Enix film Kingsglaive: Final Fantasy XV which boasted some of the most photorealistic fabrics we’ve ever seen. “That’s a nice leather jacket,” Jensen deadpanned. Bless.


We were just shown an example of using deep learning for style transfer. A photo of a beach on a bright sunny day was instantly transformed into a sunset scene by the AI.
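Under the hood, style-transfer techniques of this kind typically work by matching feature statistics – most famously the Gram matrices of convolutional activations (Gatys et al.). Here’s a minimal NumPy sketch of that core statistic; the random arrays are stand-ins for a real network’s feature maps, which we obviously don’t have in a blog post:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation of feature maps.

    features: array of shape (channels, height, width). In real style
    transfer these come from a pretrained CNN; here they're random
    stand-ins purely to illustrate the computation.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)  # shape: (channels, channels)

rng = np.random.default_rng(0)
style_feats = rng.standard_normal((8, 16, 16))      # e.g. the sunset photo
generated_feats = rng.standard_normal((8, 16, 16))  # e.g. the beach photo

# The "style loss" an optimiser drives down is the gap between the two
# Gram matrices; the output image's pixels are updated until they match.
style_loss = np.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)
print(style_loss >= 0)  # True
```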


Bear with us guys, the supplied WiFi is crapping out.


We’re back! Nvidia just announced the next-gen DGX-1: a supercomputer for AI research with eight Tesla V100s packed inside. This is enough power to replace 400 servers. It costs $149,000 and will be available from Q3. Other Tesla V100 products in the pipeline include the personal DGX Station computer for deep learning and HGX-1 for GPU-based cloud computing.

[Image: DGX Station]


We’re now being treated to a chinwag with Jason Zander, Microsoft’s corporate vice president, about Nvidia’s friendly relationship with MS Azure.


Things are getting properly nerdy now: we’re talking the cost effectiveness of accelerated data centres. Tesla V100 allows 500 CPU server nodes to be replaced with 33 GPU-accelerated server nodes.
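Taking Jensen’s figures at face value, that’s roughly a 15-to-1 consolidation. A quick bit of arithmetic (the node counts are from the keynote; everything else is trivial division):

```python
# Jensen's claim: 500 CPU server nodes -> 33 GPU-accelerated nodes.
cpu_nodes = 500
gpu_nodes = 33

ratio = cpu_nodes / gpu_nodes
print(round(ratio, 1))   # ~15.2x consolidation

nodes_saved = cpu_nodes - gpu_nodes
print(nodes_saved)       # 467 fewer boxes to rack, power and cool
```

Whether that pencils out financially depends on per-node pricing, which wasn’t given in the keynote.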


Nvidia GPU Cloud is a GPU-accelerated cloud platform optimised for deep learning. We were shown a quick demo of how to create and upload data sets via a drag-and-drop user interface. The GPU Cloud beta will be available from July.


We’re now talking autonomous transportation. “Everything that moves will some day be augmented by autonomy,” Jensen said. Nvidia’s open AI car platform is everywhere at the moment, with everything from airlines to trucking companies embracing the technology. It’s fair to say that autonomous vehicles are putting a lot of jobs at stake. Oddly, Jensen stressed the disappearance of unattractive car parks as one of the key advantages of an autonomous driving world.


Big announcement: Toyota has selected Nvidia’s Drive PX platform for its autonomous vehicles. Both companies are working together to get the first autonomous Toyota on the road within “the next few years.” The pilot model will use Xavier, an AI supercomputer designed for use in self-driving cars that debuted back in September 2016.


In related news, the Xavier DLA is now open source. Early access in July with a general release from September.



Robots! We’re now talking robotics development with an emphasis on learning to move and act. Nvidia has built a new integrated robot simulator called ISAAC. This allows robots to be “pre-trained” before they are physically built. We were shown an example of a robot’s “brain” learning to play hockey and golf in virtual reality without any prior programming.


And that’s it! We’ll be reporting on all the cool tech from the GTC showroom floor so stay tuned!

Lifehacker attended GTC 2017 as a guest of Nvidia.