Nvidia says AI, 5G ingredients for success

By Lebone Mano, junior journalist
Johannesburg, 15 Apr 2021
Jensen Huang.

Nvidia has unveiled a raft of new products at its annual GPU Technology Conference in areas such as autonomous vehicles, digital twins and data centres.

CEO Jensen Huang, delivering the keynote from his kitchen at home, said there had been incredible advances in AI recently, and that computers are now writing software like no human could.

“AI and 5G are the ingredients needed to kick-start the fourth industrial revolution where robotics and automation can be deployed to the far edges of the world.”

He said, however, that there was one missing element, what he termed the ‘metaverse’, or a virtual world "that’s the digital twin of ours". He unveiled the company’s simulation tool, Omniverse, for enterprise licensing. He said Omniverse could be used to create virtual worlds and simulate conditions, or, as he put it: “It’s where robots learn to be robots.”

He said digital twins in Omniverse can connect to a company’s ERP system, creating more realistic simulations, such as factory throughput or a new plant layout.
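
To give a sense of what such a simulation computes, here is a minimal sketch of a factory-throughput model; the station names, cycle times and failure rates are entirely hypothetical stand-ins for data an ERP system might supply, not anything from Omniverse itself.

```python
import random

# Hypothetical three-station production line, standing in for data an ERP
# system might hold: (station, mean cycle time in seconds, failure rate).
stations = [("press", 42.0, 0.01), ("weld", 55.0, 0.03), ("paint", 48.0, 0.02)]

def simulate_shift(seconds: float = 8 * 3600) -> int:
    """Count units finished in one shift; the slowest station gates the line."""
    finished, clock = 0, 0.0
    while clock < seconds:
        cycle = 0.0
        for _, mean_time, p_fail in stations:
            t = random.gauss(mean_time, mean_time * 0.1)  # noisy cycle time
            if random.random() < p_fail:
                t += 300.0  # assume a five-minute stoppage on failure
            cycle = max(cycle, t)  # throughput is gated by the bottleneck
        clock += cycle
        finished += 1
    return finished

print(f"Simulated shift output: {simulate_shift()} units")
```

A real Omniverse digital twin would use physically accurate 3D simulation rather than toy statistics, but the relationship is the same idea: ERP-style parameters in, throughput estimates out.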

Huang noted Bentley and BMW were already using Omniverse to simulate production lines and factories.

Data centres

Huang said cloud and AI are driving fundamental changes in the architecture of data centres, and announced Nvidia’s first data centre infrastructure SDK: DOCA 1.0, or Datacentre-on-a-Chip Architecture.

He said enterprise data centres had traditionally run on monolithic software packages, and that virtualisation started the trend towards software-defined data centres.

“With virtualisation, the compute, networking, storage and security functions are emulated in software running on the CPU. Though easier to manage, the added CPU load reduced data centres’ ability to run applications, which is their primary purpose,” he said.

Deep learning is compute-intensive and has driven the adoption of GPUs, he added.

“Almost overnight, consumer AI services became the biggest users of GPU supercomputing technologies. And now, adding zero trust security initiatives makes infrastructure software processing one of the largest workloads in the data centre.”

He said a new chip, the BlueField-2 DPU, had been designed specifically for data centre infrastructure processing, and cited gaming in the cloud as one use case.

Huang said about a third of the roughly 30 million data centre servers shipped each year are solely running the software-defined data centre stack.

“This workload is increasing much faster than Moore's Law predicted, so unless we offload and accelerate this workload, data centres will have fewer and fewer CPUs to run applications.”
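
Back-of-the-envelope, Huang’s figures work out as follows; the per-server core count and the fraction of infrastructure work a DPU could absorb are illustrative assumptions, not numbers from the keynote.

```python
# Figures cited in the keynote: ~30 million servers shipped per year,
# about a third of which only run the software-defined data centre stack.
servers_per_year = 30_000_000
infra_only_share = 1 / 3

# Illustrative assumptions, not from the keynote:
cores_per_server = 32        # a typical dual-socket server
offloadable_fraction = 0.9   # share of infra work a DPU could absorb

infra_servers = servers_per_year * infra_only_share
cores_freed = infra_servers * cores_per_server * offloadable_fraction
print(f"Servers consumed by infrastructure alone: {infra_servers:,.0f} per year")
print(f"CPU cores potentially freed for applications: {cores_freed:,.0f} per year")
```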

Operating AI at the edge

No enterprise has enough data to train its AI models from scratch, Huang noted, which led Nvidia to release pre-trained models that customers can adapt to their own applications. The framework is called TAO, for train, adapt, optimise.

He said TAO enables multiple parties to collaborate on training a shared model while protecting data privacy. For example, medical researchers at different institutions can collaborate on one AI model while keeping their data separate to protect patient privacy.
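
The ‘adapt’ step is, in spirit, ordinary transfer learning: freeze a pre-trained backbone and retrain only a small task-specific head on local data. The sketch below shows that pattern with a generic torchvision model; it is not TAO’s actual API, and the five-class task and dummy tensors are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# A generic pre-trained backbone stands in for an Nvidia pre-trained model;
# TAO's real workflow and APIs differ.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone so only the new head learns from local data.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a hypothetical five-class task.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy tensors standing in for private,
# on-premises data that never leaves the site.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"Adaptation step loss: {loss.item():.3f}")
```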

In this regard, Nvidia announced Fleet Command, a cloud-native platform purpose-built for operating AI at the edge. Fleet Command allows secure operations and can orchestrate AI across a distributed fleet of computers.

Teaching machines how people speak

Nvidia also unveiled Jarvis, a pre-trained, deep learning AI for speech recognition, translation and natural language understanding.

Huang said Nvidia had trained Jarvis for several million GPU hours on over a billion pages of text and over 60 000 hours of speech in different languages and accents. He said ‘out of the box’, it was accurate 90% of the time, and could be refined with a customer’s own data. It currently supports English, Japanese, German, Spanish, French and Russian and can also be customised for domain jargon.
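
The article does not say how that 90% figure was measured; one common accuracy yardstick for speech recognisers is word error rate (WER), sketched here on toy strings purely for illustration.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words: (subs + ins + dels) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j
    # hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

# Toy example: one substitution in ten words -> 10% WER, i.e. "90% accurate".
ref = "the quick brown fox jumps over the lazy sleeping dog"
hyp = "the quick brown fox jumps over the hazy sleeping dog"
print(f"WER: {word_error_rate(ref, hyp):.0%}")
```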

“Jarvis now speaks with emotion and expression that you can control, no more mechanical speech,” he said. German telecoms company T-Mobile was one of the first adopters, using it for speech recognition.

Nvidia has partnered with Mozilla Common Voice, a crowdsourced, open database for speech recognition whose dataset covers 150 000 speakers in 65 languages. Huang said he sees it as an initiative to help teach machines how real people speak, and encouraged attendees to contribute through Common Voice’s page to help make universal translation possible.

‘TOPS the new horsepower’

Nvidia Drive is the company’s open development platform for autonomous vehicles (AVs), spanning AV chips, computers, sensor architecture, data processing and mapping. Huang added that Nvidia had built an AV service in partnership with Mercedes-Benz.

“The more devs learn about AV, the more advanced the algorithm becomes. We’ve seen how more computing capacity gives teams faster iteration and quicker time to market, leading some to call TOPS (tera operations per second) the new horsepower.”

He said Nvidia was also introducing Orin, a ‘central AV computer’, which is expected to go into production in 2022.

“The future of AV is one central computer with four virtualised and isolated domains, built to be functional, safe, software-defined and upgradable for the life of the car."

Huang also announced the launch of the eighth-generation Hyperion, a full-stack AV developer kit for Level 2+ autonomy, and of Nvidia’s Drive Atlan system-on-chip, which will deliver more than 1 000 TOPS on a single chip. “To achieve higher autonomy in more conditions, sensor resolutions will continue to increase. AI models will get more sophisticated, there'll be more redundancy and safety functionality… We're going to need all the computing we can get.”
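
To put 1 000 TOPS in perspective, a rough sketch of the arithmetic; the camera count, frame rate and per-frame cost are illustrative assumptions, not Nvidia figures.

```python
# 1 TOPS = 10**12 operations per second; Atlan's headline figure is 1 000 TOPS.
available_ops = 1_000 * 10**12

# Illustrative perception workload, assumed rather than taken from the keynote:
cameras = 8                 # surround-view camera count
fps = 30                    # frames per second per camera
ops_per_frame = 2 * 10**12  # ~2 trillion ops per frame for a large model

required_ops = cameras * fps * ops_per_frame
print(f"Required: {required_ops:.2e} ops/s of {available_ops:.2e} available")
print(f"Headroom: {available_ops / required_ops:.1f}x")
```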

Huang said Atlan is a ‘technical marvel’, fusing Nvidia’s work in AI, autonomy, robotics and BlueField secure data centre technology.

“AV and software must be planned as multigenerational investments.”
