The hottest thing in technology right now is a plain-looking sliver of silicon closely related to the chips that drive video game graphics. These AI chips were created with the express purpose of running ChatGPT and other AI systems faster and more cheaply.
These chips have suddenly become the focus of what some experts see as an AI revolution with the potential to transform not just the technology industry but perhaps the entire world. Shares of Nvidia, the leading designer of AI chips, soared about 25% last Thursday after the company forecast a significant jump in revenue that analysts said reflected surging sales of its products. On Tuesday, the company's value briefly rose above $1 trillion.
So What Are AI Chips?
Answering that question is difficult. “There really isn’t a completely agreed-upon definition of AI chips,” said Hannah Dohmen, a research analyst at the Center for Security and Emerging Technology.
However, the term generally refers to computing hardware specialized for AI workloads, for example by "training" AI systems to tackle complex problems that would overwhelm conventional computers.
Video Game Origins
In 1993, three businessmen founded Nvidia with the goal of advancing computer graphics. Within a few years, the company had developed a new kind of chip known as a graphics processing unit, or GPU. By performing numerous complicated graphics calculations simultaneously, the GPU greatly accelerated both the creation and play of video games.
This technique, formally known as parallel processing, would prove key to the development of both video games and artificial intelligence (AI). In 2012, two graduate students from the University of Toronto used a GPU-based neural network to win the prestigious ImageNet AI competition, with error rates far lower than their rivals'.
The victory sparked interest in AI-related parallel computing, creating a new market for Nvidia and its competitors while giving academics effective tools for advancing the field of artificial intelligence.
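The core idea of parallel processing is simple: apply the same operation to many data elements at once instead of one at a time. Here is a minimal sketch in Python using NumPy's vectorized operations to illustrate the pattern on a CPU (the function names and the pixel-brightening task are illustrative examples, not anything from Nvidia's toolkit; a GPU applies the same idea across thousands of hardware cores).

```python
import numpy as np

# Sequential approach: process one pixel value at a time in a loop.
def brighten_sequential(pixels, amount):
    out = []
    for p in pixels:
        out.append(min(p + amount, 255))
    return out

# Data-parallel approach: one vectorized operation over the whole array,
# the style of computation GPUs are built to accelerate.
def brighten_parallel(pixels, amount):
    return np.minimum(np.asarray(pixels) + amount, 255)

pixels = [10, 100, 250]
print(brighten_sequential(pixels, 20))         # [30, 120, 255]
print(brighten_parallel(pixels, 20).tolist())  # [30, 120, 255]
```

Both functions produce the same result; the difference is that the second expresses the work as a single bulk operation, which hardware can split across many processing units at once.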
Modern AI Chips
Eleven years later, Nvidia remains the dominant designer of chips for building and updating AI systems. One of its most recent flagship products, the H100 GPU, packs 80 billion transistors, around 13 billion more than Apple's latest high-end processor for the MacBook Pro laptop. Unsurprisingly, the technology isn't cheap; at one online retailer, the H100 lists for $30,000.
Manufacturing these intricate GPU chips itself would require Nvidia to make huge investments in new factories. Instead, it relies on Asian chipmakers such as Taiwan Semiconductor Manufacturing Co. and South Korea's Samsung Electronics.
Cloud computing services like those run by Amazon and Microsoft are among the biggest consumers of AI chips. By renting out their AI computing capacity, these services let smaller businesses and organizations that could not afford to build their own AI systems from scratch use cloud-based tools for tasks ranging from drug discovery to customer management.
Other Uses and Competition
Parallel processing has uses outside of AI as well. A few years ago, for instance, Nvidia graphics cards were in short supply because cryptocurrency miners, who set up banks of computers to solve difficult mathematical puzzles in exchange for crypto payments, had snapped up most of them. That problem eased when the cryptocurrency market crashed in early 2022.
According to analysts, Nvidia will almost certainly face tougher competition. One potential rival is Advanced Micro Devices, which already competes with Nvidia in the market for computer graphics chips. AMD has recently taken steps to bolster its own lineup of AI chips.
Nvidia is based in Santa Clara, California. Co-founder Jensen Huang remains the company's president and CEO.