NVIDIA Releases TensorRT 7 with Support for More Than 1,000 Calculation Transformations, and More at NVIDIA GTC 2019

This year, the NVIDIA GPU Technology Conference 2019 (GTC 19) was held in Suzhou, China.

About NVIDIA GTC

For those who do not know what NVIDIA GTC is, it is a top AI event, a series of global conferences that bring you relevant training and insights on the hottest topics in computing.

During this conference, you get the chance to communicate and interact with top experts from NVIDIA and many industry-leading technical experts. In addition, you can also pick up practical skills through hands-on training on cloud GPU platforms.

To learn more about NVIDIA GTC, check out their page here!


What happened this year at NVIDIA GTC 2019?

In 2019, NVIDIA shifted its focus to games, machine learning and autonomous driving. Furthermore, it explored two new application areas: 5G telecommunications and genomic sequencing.

Here are the highlights of what was covered during the conference this year, which we will go through point by point:

  • Released the new TensorRT 7, which supports more than 1,000 calculation transformations
  • Introduced RTX into 7 games and began cooperating with Tencent to bring the START cloud gaming service
  • Released open-source autonomous driving deep neural networks and the DRIVE AGX Orin platform
  • Entered the biology field and released a genomic analysis toolkit with Parabricks
  • Released the NVIDIA Isaac Robot SDK

New TensorRT 7 released

Background Information

For those who don’t know, TensorRT is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications.

With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and finally deploy to hyperscale data centers, embedded, or automotive product platforms.
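To make that workflow concrete, here is a minimal sketch of the typical TensorRT Python flow: parse a trained model exported to ONNX, ask for lower precision, and build a deployable engine. The file names (`model.onnx`, `model.plan`) are hypothetical, and this is an illustrative sketch of the TensorRT 7 API rather than a complete deployment script:

```python
import tensorrt as trt

# Assumptions: TensorRT 7 Python bindings installed, and "model.onnx" exported
# from a training framework (PyTorch, TensorFlow, etc.).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30    # 1 GB scratch space for optimization tactics
config.set_flag(trt.BuilderFlag.FP16)  # run at lower precision where supported

engine = builder.build_engine(network, config)
with open("model.plan", "wb") as f:
    f.write(engine.serialize())         # serialized engine ready for deployment
```

The serialized engine can then be loaded by the TensorRT runtime on the target platform, whether that is a data center GPU or an embedded device.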

Back to the topic…

With TensorRT 5 released at GTC China last year, this year NVIDIA released the new TensorRT 7. Back then, TensorRT 5 only supported CNNs, but now that most speech models require RNNs, TensorRT 5 no longer makes the cut.

With the new TensorRT 7, NVIDIA supports various types of RNNs, Transformers and CNNs. It features new inference speedups for automatic speech recognition, natural language processing and text-to-speech. With this breakthrough, NVIDIA has accelerated the development and deployment of the entire conversational AI pipeline.

TensorRT 7 is also reported to speed up both Transformer and recurrent network components, including popular networks like DeepMind’s WaveRNN, Google’s Tacotron 2 and BERT, by more than 10 times compared with CPU-based approaches, while driving latency below the 300-millisecond threshold considered necessary for real-time interactions. That is partly thanks to optimizations targeting recurrent loop structures, which are used to make predictions on time-series sequence data such as text and voice recordings.
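As a rough illustration of what that latency budget means in practice, here is a small sketch (plain Python, with a placeholder `infer()` standing in for whatever engine you actually deploy) that checks end-to-end inference time against the 300 ms real-time threshold:

```python
import time

REAL_TIME_BUDGET_MS = 300  # threshold cited for real-time conversational AI

def infer(request):
    """Placeholder for a deployed model call (e.g. a TensorRT engine)."""
    time.sleep(0.05)  # simulate ~50 ms of work
    return "response"

def timed_infer(request):
    start = time.perf_counter()
    result = infer(request)
    latency_ms = (time.perf_counter() - start) * 1000
    print(f"latency: {latency_ms:.1f} ms "
          f"(within real-time budget: {latency_ms < REAL_TIME_BUDGET_MS})")
    return result

timed_infer("hello")
```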

Furthermore, NVIDIA has also announced that it will be cooperating with many large brands, including Alibaba, American Express, Baidu, Pinterest, Snap, Tencent and Twitter, which will be using TensorRT for tasks such as image classification, fraud detection, segmentation and object detection.


Introduces RTX into 7 games

NVIDIA has always been heavily involved in the world of gaming.

Earlier this year, NVIDIA announced real-time ray tracing (RTX) in gaming at CES 2019 and demonstrated several stunning demos. Built on the GPU and AI capabilities behind computer imaging, it uses algorithms to simulate the physical characteristics of light in the real world. Unlike traditional ray tracing, RTX can support real-time, movie-quality rendering.

Here is one of the example games that NVIDIA introduced RTX to, called “Anthem”.

Anthem Official CES 2019 Trailer

The approach received immediate industry support from Microsoft, from the companies that offer the most widely used game engines, and from some of the world’s biggest game developers, who have been working on it ever since.

After a year of development, NVIDIA has introduced RTX into more games. One of them is one of the world’s best-selling video games, “Minecraft”. Together, NVIDIA and Microsoft managed to bring RTX into “Minecraft”, achieving real-time global illumination and various light reflection effects. Here is an example of how the gameplay looks:

Other than Minecraft, NVIDIA also announced 6 other RTX-enabled games: Shadow Torch, Project X, Infinite Law, Xuanyuan Sword Seven, Lily of the Valley Project and Borderland.

With RTX, not only will your gaming experience be elevated, but game production will also become easier. Normally, a large number of artists have to hand-craft lighting and shadow effects during rendering. With RTX, this process becomes far more automated and can be handled by just one person.

Beyond game production, NVIDIA CEO Jensen Huang also announced a Max-Q gaming laptop integrated with the GeForce RTX 2080 and capable of supporting NVIDIA ray tracing technology. Best of all, it weighs only 2.5 kg.


Releases open-source autonomous driving deep neural networks and NVIDIA DRIVE AGX Orin™

During NVIDIA GTC this year, the company introduced NVIDIA DRIVE AGX Orin™, a highly advanced software-defined platform for autonomous vehicles and robots.

A major feature is the use of DRIVE transfer learning, which allows pre-trained models to be adjusted to suit an original equipment manufacturer’s specific cars, sensors and regions. NVIDIA partners can retrain these models and use TensorRT to optimize them.
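As a rough sketch of the general idea behind transfer learning (not NVIDIA’s actual DRIVE toolchain), here is how a pre-trained vision model might be fine-tuned on an OEM’s own data in PyTorch and then exported to ONNX so TensorRT can optimize it. The dataset path, class count and file names are hypothetical:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical OEM dataset: images from the OEM's own cameras,
# organised into one folder per class under "oem_data/".
NUM_CLASSES = 5
tfms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("oem_data/", transform=tfms)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

# Start from a network pre-trained on a large generic dataset and
# replace only the classifier head for the OEM-specific task.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune the head only
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Export to ONNX so the retrained model can then be optimized with TensorRT.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model.eval(), dummy, "oem_model.onnx", opset_version=11)
```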

This platform is powered by a new system-on-a-chip (SoC) called Orin, which consists of 17 billion transistors. The Orin SoC integrates NVIDIA’s next-generation GPU architecture with Arm Hercules CPU cores, which improve power and area efficiency by 10% compared with the previous generation.

In addition, it is complemented by new deep learning and computer vision accelerators that, in aggregate, deliver 200 trillion operations per second (TOPS), nearly 7x the roughly 30 TOPS of NVIDIA’s previous-generation Xavier SoC. Moreover, Orin can handle over 200 Gbps of data while consuming only 60 W to 70 W of power at 200 TOPS.

Similar to Pegasus, Orin is suitable for levels from L2 (cars capable of controlling either steering or speed, but not both) to L5 (cars fully capable of self-driving without supervision), enabling OEMs to develop large-scale and complex families of software products. In addition, since both Orin and Xavier are programmable through open CUDA and TensorRT APIs and libraries, developers can leverage their investments across multiple product generations.

NVIDIA DRIVE AGX Orin is expected to come to production applications in 2022.


NVIDIA enters biological field with Parabricks

With the AI revolution spreading across industries everywhere, NVIDIA is also turning to personalized healthcare.

Advances in sequencing technology have led to an explosion of genomic data: the total amount of sequence data is doubling roughly every seven months. With so much data, it would take all the CPUs in every cloud, and more than 200 days, to run the genome analysis.
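To put that doubling rate in perspective, here is a tiny back-of-the-envelope calculation (plain Python, with the seven-month figure taken from the talk) of how quickly the data volume compounds:

```python
# Data volume doubles every 7 months; how much does it grow per year / 5 years?
DOUBLING_PERIOD_MONTHS = 7

def growth_factor(months):
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(f"after 1 year:  x{growth_factor(12):.1f}")   # ~3.3x
print(f"after 5 years: x{growth_factor(60):.0f}")   # ~380x
```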

The genomics community continues to extract new insights from DNA, but genomic analysis has traditionally been a computational bottleneck in the sequencing pipeline, one that can be surmounted using GPU acceleration.

To deliver a roadmap of continuing GPU acceleration for key genomic analysis pipelines, Parabricks (an Ann Arbor, Michigan-based developer of GPU software for genomics) has officially joined the NVIDIA Healthcare team to continue advancing GPU-accelerated genomic analysis.

“Next-generation sequencing is in high demand from national sequencing programs and clinical genomics, and is driving personalized medicine. We are excited to have the Parabricks team join NVIDIA and double down on providing fast and accurate genome analysis tools to take genomics to the next level.”

– Kimberly Powell, NVIDIA Vice President of Healthcare

BGI is already using NVIDIA V100 GPUs and software from Parabricks to build the highest-throughput genome sequencer yet, which can potentially drive down the cost of genomics-based personalized medicine.

Huang also announced the NVIDIA Parabricks Genome Analysis Toolkit, a CUDA-accelerated genome processing toolkit for deep learning, machine learning and high-performance computing.
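For a sense of how such a toolkit is typically driven, here is a heavily hedged sketch of wrapping a GPU-accelerated alignment step from Python. The `pbrun fq2bam` command shape, its flags and the file names are assumptions based on Parabricks’ documented command-line interface and may differ between versions; consult the official documentation before running anything like this:

```python
import subprocess

# Assumption: Parabricks exposes a "pbrun" CLI with an fq2bam alignment tool.
# Paths and flag names below are illustrative only.
cmd = [
    "pbrun", "fq2bam",
    "--ref", "reference.fasta",                              # reference genome
    "--in-fq", "sample_R1.fastq.gz", "sample_R2.fastq.gz",   # paired-end reads
    "--out-bam", "sample.bam",                               # aligned output
]
subprocess.run(cmd, check=True)
```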

In addition, Huang added that NVIDIA is working with Ericsson to apply CUDA to 5G base stations to identify data transmission modes and optimize data interaction.


Releases NVIDIA Isaac Robot SDK

Simulation of Robot performing Navigation Tasks

Lastly, at the end of GTC, NVIDIA announced the availability of Isaac SDK 2019.3 with new simulation capabilities, new DNNs and much more.

The NVIDIA Isaac Software Development Kit (SDK) is the industry’s first robotic AI development platform with simulation, navigation, and manipulation. The SDK includes the Isaac Engine, high-performance robotics algorithm packages (GEMs), hardware reference applications, and Isaac Sim for Navigation, a powerful simulation platform.

The SDK accelerates robot development for manufacturers, researchers, and startups by making it easier to add Artificial Intelligence (AI) for perception and navigation into modern-day robots.

NVIDIA CEO Jensen Huang also used the newly developed LEONARDO to give a live demo at the end. It is a manipulation robot that is able to perform tasks such as stacking blocks and exchanging physical objects with the user through a virtual agent.

LEONARDO is trained in simulation and the real world together. In the Isaac Gym simulation training space, it learns the operations expected by engineers through extensive training, and can then carry out these operations in the real world.

Summary

That’s all for this year’s NVIDIA GTC 2019 in Suzhou, China. Perhaps you still do not know what NVIDIA GTC is all about, but you have probably seen various news about the NVIDIA Jetson Nano, which was released earlier this year at NVIDIA GTC 2019!

What is the NVIDIA Jetson Nano?

The NVIDIA® Jetson Nano™ Developer Kit delivers the compute performance to run modern AI workloads at unprecedented size, power, and cost. Developers, learners, and makers can now run AI frameworks and models for applications like image classification, object detection, segmentation, and speech processing.
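As a quick taste of what that looks like on the Jetson Nano, here is a minimal sketch using the open-source jetson-inference library, assuming it is installed on the device and that a `photo.jpg` file exists (both are assumptions for illustration); it classifies a single image with a pre-trained GoogleNet model:

```python
import jetson.inference
import jetson.utils

# Load a pre-trained image classification network (downloaded on first use).
net = jetson.inference.imageNet("googlenet")

# Load an image from disk into GPU memory and classify it.
img = jetson.utils.loadImage("photo.jpg")
class_id, confidence = net.Classify(img)

print(f"recognized '{net.GetClassDesc(class_id)}' "
      f"with {confidence * 100:.1f}% confidence")
```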

Interested? Well, do not miss out on this good deal, as we have just lowered the price of the NVIDIA Jetson Nano to only $89! Get the NVIDIA Jetson Nano to get started on your Machine Learning and AI journey now!

Interested in more NVIDIA products? You can check out all of our NVIDIA products here!
