NVIDIA Introduces New AI Supercomputing Cloud Service
Nvidia announced earlier this week that it is launching a new AI supercomputing cloud service. Jensen Huang, the company's founder and CEO, unveiled the new artificial intelligence service at the GTC event.
To put it simply, the well-known graphics card maker is launching a supercomputing-based cloud service that lets customers rent the power of supercomputers, the same kind of powerful systems that were used to build ChatGPT and other artificial intelligence technologies. In addition, Nvidia announced the impending arrival of its DGX AI supercomputing system, which packs eight of its flagship A100 or H100 chips.
For those who don’t know, the A800 and H800 are versions of the Ampere (A100) and Hopper (H100) chips made for sale in China, and many Chinese developers use them to build language models. Businesses will rent access to the service from the tech giant for roughly $37,000 per month. Thanks to this offering, we can expect the cycle of improving artificial intelligence to move faster.
Jensen Huang also said, “We will work with cloud service providers in Europe and the U.S. to deliver NVIDIA’s DGX AI supercomputer capabilities. We make the Ampere and Hopper chips to order for China, where cloud service providers such as Tencent, Alibaba Group Holding Limited, and Baidu Inc. will give Chinese start-ups the chance to work on their big language models, and I’m sure they’ll be able to offer top-notch system services.”