NVIDIA CEO Jensen Huang: The metaverse creates virtual worlds as an extension of the real world and is also useful for developing self-driving cars

NVIDIA evolves from an AI company into a company that provides computing platforms across every domain

"Toy Jensen", unveiled in the keynote: an AI avatar realized by combining multiple NVIDIA technologies, including AI, graphics, and Omniverse for the metaverse

To begin, I would like to ask Mr. Huang to look back on the GTC keynote.

Huang: We made many announcements at GTC, but here I would like to highlight seven of them.

 The first is "accelerated computing" (computing with accelerators, meaning GPU-based computing in areas such as HPC). As you know, we have been developing this field for a long time. This initiative is a challenge of "full-stack computing" (covering every layer of computing, from top to bottom). The way computers have conventionally been built is to compile software with a C compiler and run it only on the CPU.

 With accelerated computing, you must understand the application, the application area (the domain), the complex calculations and algorithms involved, and the machinery the computer uses to run them at high speed. For this reason, we announced 65 new or updated libraries.
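
 To make that contrast concrete, here is a minimal sketch in Python using the Numba library (Numba is not one of the announcements above, just a convenient illustration) of the same data-parallel loop written once for the CPU and once as a GPU kernel.

```python
# A minimal illustration of CPU-only vs. accelerated computing:
# the same SAXPY loop as a sequential CPU function and as a GPU kernel.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy_kernel(a, x, y, out):
    # Each GPU thread handles one element of the arrays.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = a * x[i] + y[i]

def saxpy_cpu(a, x, y):
    # The conventional, compile-and-run-on-the-CPU version: one sequential loop.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = a * x[i] + y[i]
    return out

if __name__ == "__main__":
    n = 100_000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    saxpy_kernel[blocks, threads](2.0, x, y, out)  # Numba copies the arrays to and from the GPU

    assert np.allclose(out, saxpy_cpu(2.0, x, y))
```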

 I would like to introduce a few that are particularly interesting. "ReOpt" is a tool for optimizing supply chains. Combinatorial optimization is usually thought of as a job for quantum computers, but many of these workloads run on today's computers, so we use GPUs. By optimizing in parallel, performance has improved by more than 100 times.
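
 As an illustration of why route optimization is a combinatorial problem (this is a toy brute-force example, not the ReOpt API), every permutation of delivery stops is a candidate route, and each candidate can be scored independently, which is exactly the kind of work a GPU can evaluate in parallel.

```python
# Toy illustration: exhaustive route search over a handful of stops.
# The number of candidates grows factorially, and each candidate's cost
# can be computed independently of the others.
from itertools import permutations
from math import dist, factorial

stops = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0)]  # toy depot and delivery points

def route_length(route):
    # Total travel distance of one candidate route, returning to the start.
    path = list(route) + [route[0]]
    return sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))

best = min(permutations(stops), key=route_length)
print("candidates evaluated:", factorial(len(stops)))
print("best route length:", round(route_length(best), 2))
```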

 "cuNumeric" scales the world's most popular numerical library, NumPy, so that its calculations can run across a large data center (scaling up from a single server to whole racks and beyond). At the same time, we began providing "cuQuantum", a high-speed library that accelerates the development of quantum computing.
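
 A minimal sketch of the drop-in pattern cuNumeric is built around, assuming its NumPy-compatible API and the `legate` launcher (e.g. `legate this_script.py`) to spread the work across GPUs and nodes:

```python
# The NumPy code stays the same; only the import changes.
import cunumeric as np  # instead of `import numpy as np`

a = np.ones((2048, 2048))
b = np.ones((2048, 2048))

# Ordinary NumPy expressions; cuNumeric decides how to partition and
# accelerate them on the available GPUs.
c = a @ b
print(float(c.sum()))
```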

 "Modulus" is a neural network model that understands physical laws and is a very important new feature.Modulus is a neural network model that understands physical laws, especially in the field of life science and digital biology.What I announced in the keynote speech was to predict the future of the climate in cooperation with climate science.

 The second point is the major release of our AI inference platform, "Triton Inference Server". Triton handles "inference", that is, running a trained AI model inside an application. NVIDIA AI has now been adopted by 25,000 companies around the world and has grown very rapidly, and we announced this major release of Triton.
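
 As a rough sketch of what inference serving looks like from the application side, the snippet below queries a running Triton server over HTTP with the `tritonclient` Python package; the model name and tensor names ("my_model", "INPUT0", "OUTPUT0") are placeholders that must match the deployed model's configuration, and the server is assumed to listen on localhost:8000.

```python
# Send one inference request to a Triton Inference Server and read the result.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)
out = httpclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0").shape)
```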

 The third is speech AI. Speech AI will become a very important field of development, because most people do not interact with computers in a structured way. When we use a keyboard or a PC, information is structured and easy to access; when it is not structured, speech AI becomes the means that connects us to the computer. NVIDIA has therefore launched a speech AI system that ranks among the world's best in both speech recognition and speech synthesis. Its performance is good enough to run in real time, and companies can use it in any cloud or embedded system.
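
 The loop described here can be pictured as three stages. The sketch below is purely illustrative, with hypothetical stand-in functions rather than NVIDIA's actual speech SDK; a real system would back each stage with recognition, understanding, and synthesis models.

```python
# Illustrative speech-AI turn: speech recognition -> understanding -> synthesis.
# All three helpers are hypothetical placeholders, not a real SDK.
import numpy as np

def transcribe(audio: np.ndarray) -> str:
    # Hypothetical ASR stage: audio samples in, text out.
    return "what is the weather tomorrow"

def understand(text: str) -> str:
    # Hypothetical language-understanding stage: text in, reply text out.
    return f"You asked: '{text}'. Tomorrow looks sunny."

def synthesize(text: str) -> np.ndarray:
    # Hypothetical TTS stage: reply text in, audio samples out.
    return np.zeros(16000, dtype=np.float32)  # one second of silence

def voice_turn(audio: np.ndarray) -> np.ndarray:
    # One conversational turn: the path unstructured speech takes
    # to become a spoken answer.
    return synthesize(understand(transcribe(audio)))

reply_audio = voice_turn(np.zeros(16000, dtype=np.float32))
print(reply_audio.shape)
```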

 The fourth announcement concerns large language models. "NeMo Megatron" is a platform for training such huge language models.

 About a week ago, we announced that we had developed the world's largest language model in collaboration with Microsoft. Megatron 530B has 530 billion parameters, roughly three times as many as OpenAI's GPT-3. These language models matter because they let computers understand, interpret, summarize, and answer questions about language. Around NeMo Megatron we announced three things: the NeMo Megatron training system, the pre-trained Megatron 530B model, and the ability to serve these huge models in real time on multi-GPU, multi-node Triton.
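
 A back-of-the-envelope calculation shows why serving a model of this size requires multi-GPU, multi-node inference. The figures below count only FP16 weights (no activations or caches) and assume an 80 GB data-center GPU, purely as an illustration.

```python
# Rough memory arithmetic for a 530B-parameter model stored in FP16.
params = 530e9                 # parameter count
bytes_per_param = 2            # FP16 storage
weight_bytes = params * bytes_per_param

gpu_memory_bytes = 80e9        # e.g. an 80 GB data-center GPU

print(f"weights alone: {weight_bytes / 1e12:.2f} TB")
print(f"minimum GPUs just to hold the weights: {weight_bytes / gpu_memory_bytes:.1f}")
```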

 The fifth is "Omniverse". Omniverse can be used to build virtual-world simulation systems and virtual worlds as realistic as the one shown in the picture.
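
 Omniverse exchanges 3D scenes through Pixar's Universal Scene Description (USD). The sketch below uses the open-source `pxr` Python bindings to author the kind of stage an Omniverse application could open; the file name and prim paths are arbitrary examples.

```python
# Author a tiny USD stage: a root transform and a placeholder sphere.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_twin.usda")      # new USD scene file
world = UsdGeom.Xform.Define(stage, "/World")         # root transform
robot = UsdGeom.Sphere.Define(stage, "/World/Robot")  # placeholder geometry
robot.GetRadiusAttr().Set(0.5)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```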

 The nearest-term application is the digital twin. At this GTC we introduced several examples: digital twins of molecules, a digital twin of a BMW factory, a digital twin of a heat-recovery boiler system at a plant, and a city-scale digital twin in which Ericsson simulated 5G coverage.

 Omniverse also gained "Omniverse Avatar", a feature for synthesizing digital avatars. It combines speech recognition, speech AI, and natural-language understanding, so it understands our words and talks with us. I mentioned Megatron earlier; with it, the avatar can understand our intent without any additional training. It can make recommendations to people using NVIDIA's "Merlin", and Omniverse animates its face to match what it says and its gestures.

 Sixth, we announced and launched a robotics computer called "Jetson AGX Orin". It is the world's most advanced single-chip robotics computer, designed for autonomous machines and robots. We also announced a new robotics platform for sensing instruments, aimed mainly at medical devices: all kinds of medical equipment that can benefit from robotics technology, such as ultrasound machines and CT scanners.

 And the seventh and last is data center infrastructure: a next-generation, end-to-end infrastructure built around Quantum-2 InfiniBand switch networking.
