GPUs Fuel the AI Arms Race

By: Adam Sharp
The HIVE Newsletter
HIVE Digital Technologies
Nasdaq: HIVE | TSX.V: HIVE

We are only about eight months into the AI innovation boom.

OpenAI kicked things off when it released ChatGPT back in late November of 2022. ChatGPT was an instant blockbuster and set off an AI arms race.

The technology behind ChatGPT is known as a large language model (LLM). LLMs can write code, create content, analyze and summarize text, translate languages, and much more.

As you can imagine, LLMs are now being developed at a furious pace. Google’s Bard and Anthropic’s Claude are the other big names, but there are hundreds (thousands?) more being built.

Meta just released Llama 2, a powerful open-source LLM. And Apple is rumored to be working on its own secretive LLM project, which it could release soon.
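
For readers who want to try it firsthand, here is a minimal sketch of running Llama 2 locally via the Hugging Face transformers library. It assumes the transformers, accelerate, and torch packages are installed, that access to the model has been granted on Hugging Face, and that a GPU with roughly 14 GB of free memory is available for the 7B chat variant in half precision.

    # Minimal sketch: generate text with the open-source Llama 2 chat model.
    # Assumes transformers + accelerate + torch are installed, and license
    # access to meta-llama/Llama-2-7b-chat-hf has been granted on Hugging Face.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-2-7b-chat-hf",
        torch_dtype=torch.float16,  # half precision: ~14 GB of GPU memory
        device_map="auto",          # place model layers on available GPUs
    )

    result = generator("Explain in one sentence why GPUs matter for AI.",
                       max_new_tokens=64)
    print(result[0]["generated_text"])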

Over the next few years we’re going to see some incredible innovations. LLMs are set to automate just about any computer task you can think of, across every industry.

The applications are essentially endless: customer support, data analysis, content creation, legal, accounting, and more.

Sequoia Capital, considered by some to be the GOAT venture capital firm, says that nearly all of its portfolio companies are currently working on integrating LLMs into their products.

Now that there are hundreds of LLM projects, many of which are open source software (OSS), the tech will advance at a stunning pace.

We’re witnessing one of those rare technological breakthroughs that will change the way the world works over a short period. The productivity gains will be a powerful (and disruptive) force.

The GPU Cloud Opportunity

Large language models require vast numbers of GPUs, both for training (creating the model) and for inference (running it).

OpenAI CEO Sam Altman revealed that it cost over $100 million to train GPT-4, using tens of thousands of Nvidia GPUs in powerful servers. And that’s for a single model, which may need many training runs to get the desired result; major updates can require re-training from scratch.
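
To see why the bill runs so high, consider a back-of-the-envelope sketch using the common ~6 x N x D FLOPs rule of thumb for training compute (N parameters, D training tokens). The inputs below are illustrative assumptions, not disclosed GPT-4 figures:

    # Back-of-the-envelope training compute via the ~6 * N * D FLOPs rule.
    # All inputs are illustrative assumptions, not disclosed GPT-4 figures.
    params = 175e9   # assume a GPT-3-scale model: 175 billion parameters
    tokens = 1e12    # assume ~1 trillion training tokens
    flops_needed = 6 * params * tokens  # ~1.05e24 FLOPs

    # Nvidia A100: ~312 TFLOPS peak FP16; assume ~40% real-world utilization.
    effective_flops_per_gpu = 312e12 * 0.40

    gpu_days = flops_needed / effective_flops_per_gpu / 86_400
    print(f"~{gpu_days:,.0f} A100-days")  # ~97,000: about 10 days on 10,000 GPUs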

It’s also worth noting that in May, Mr. Altman reportedly told a group of developers that OpenAI can’t get enough GPUs, which is slowing the development of new models.

Elon Musk recently announced that Tesla would spend $1 billion to build its own supercomputer because it can’t get enough Nvidia chips.

The sheer number of LLMs and other AI systems being developed around the world is stunning. Each project requires a large number of powerful GPUs for initial training and ongoing inference.

We see a significant opportunity for HIVE, which operates a fleet of 38,000 Nvidia data-center GPUs. Here is the breakdown by model, as detailed on our new GPU Operations page:

  • 4,000+ Nvidia A40 w/ 48 GB memory
  • 400+ Nvidia RTX A6000 w/ 48 GB memory
  • 12,000+ Nvidia RTX A5000 w/ 24 GB memory
  • 20,000+ Nvidia RTX A4000 w/ 16 GB memory

These GPUs, particularly the A40s and A6000s, are well-suited to handle modern AI and HPC workloads.
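
For a rough sense of aggregate capacity, the short sketch below tallies the fleet using the minimum (“+”) counts from the list above; the headline figure of 38,000 reflects totals above those minimums.

    # Rough tally of the fleet's aggregate GPU memory, using the minimum
    # counts from the list above (the "+" means actual totals are higher).
    fleet = {
        # model: (count, memory per GPU in GB)
        "A40":       (4_000, 48),
        "RTX A6000": (400, 48),
        "RTX A5000": (12_000, 24),
        "RTX A4000": (20_000, 16),
    }

    total_gpus = sum(count for count, _ in fleet.values())
    total_mem_tb = sum(count * mem for count, mem in fleet.values()) / 1_000
    print(f"{total_gpus:,}+ GPUs, ~{total_mem_tb:,.0f} TB of GPU memory")
    # -> 36,400+ GPUs, ~819 TB of GPU memory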

HIVE is currently building out the software and hardware infrastructure required to get the most out of these powerful graphics cards. One example: our new Supermicro servers, each packing 10 Nvidia RTX A6000 GPUs with 48 GB of memory apiece.
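
On a server like that, a few lines of PyTorch are enough to confirm the hardware. A minimal sketch, assuming a CUDA-enabled PyTorch install:

    # Minimal sketch: enumerate the GPUs visible on a multi-GPU server.
    import torch

    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
    # On the server described above, this would print ten lines naming the
    # RTX A6000 (reported memory is slightly under the nominal 48 GB).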

The company expects its GPU cloud service, HIVE Cloud, to be ready for public release in Q4 of 2023. Until then, we’re renting out GPU servers on marketplaces. 

Investors can learn more about HIVE’s GPU cloud plans in our recent news release, where we debuted our new name, HIVE Digital Technologies, and revealed plans to capitalize on AI-related opportunities.

I view GPU cloud as a foundational “picks and shovels” play on AI and HPC. And I suspect that’ll be a nice spot to be in for the next decade-plus.

Cheers,

Adam Sharp
Author, The HIVE Newsletter