StreamLineCrypto.com
AI’s GPU obsession blinds us to a cheaper, smarter solution

May 9, 2025 (updated May 9, 2025)

Opinion by: Naman Kabra, co-founder and CEO of NodeOps Network

Graphics Processing Units (GPUs) have become the default hardware for many AI workloads, especially when training large models. That thinking is everywhere. While it makes sense in some contexts, it has also created a blind spot that is holding us back.

GPUs have earned their reputation. They are incredible at crunching huge numbers in parallel, which makes them perfect for training large language models or running high-speed AI inference. That is why companies like OpenAI, Google, and Meta spend enormous sums building GPU clusters.

While GPUs may be the preferred hardware for running AI, we cannot forget about Central Processing Units (CPUs), which are still very capable. Forgetting this could be costing us time, money, and opportunity.

CPUs aren't outdated. More people need to realize they can be used for AI tasks. They're sitting idle in millions of machines worldwide, capable of running a wide range of AI workloads efficiently and affordably, if only we'd give them a chance.

Where CPUs shine in AI

It's easy to see how we got here. GPUs are built for parallelism. They can handle massive amounts of data simultaneously, which is excellent for tasks like image recognition or training a chatbot with billions of parameters. CPUs can't compete in those jobs.

AI isn't just model training, and it isn't just high-speed matrix math. Today, AI includes tasks like running smaller models, interpreting data, managing logic chains, making decisions, fetching documents, and responding to questions. These aren't just "dumb math" problems. They require flexible thinking. They require logic. They require CPUs.

While GPUs get all the headlines, CPUs are quietly handling the backbone of many AI workflows, especially when you zoom in on how AI systems actually run in the real world.


CPUs are impressive at what they were designed for: flexible, logic-based operations. They're built to handle one or a few tasks at a time, really well. That might not sound impressive next to the massive parallelism of GPUs, but many AI tasks don't need that kind of firepower.

Consider autonomous agents, those tools that use AI to complete tasks like searching the web, writing code, or planning a project. Sure, the agent might call a large language model that runs on a GPU, but everything around that (the logic, the planning, the decision-making) runs just fine on a CPU.
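This division of labor can be sketched in a few lines. The snippet below is a toy illustration, not any real agent framework: `call_llm` is a hypothetical stand-in for the one GPU-bound (or remotely hosted) model call, while the parsing and dispatch around it are plain CPU work.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for the single GPU-bound step (in
    # practice, a hosted model API). Everything else runs on a CPU.
    return "search: cpu inference benchmarks"

def agent_step(goal: str, history: list[str]) -> str:
    prompt = f"Goal: {goal}\nSo far: {history}"
    action = call_llm(prompt)            # model call: GPU or remote
    if action.startswith("search:"):     # parsing and dispatch: CPU
        query = action.split(":", 1)[1].strip()
        return f"searched for '{query}'"
    return "done"

print(agent_step("compare CPU vs GPU costs", []))
```

Only one line in that loop touches a model; the rest is ordinary control flow that any idle CPU can execute.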

Even inference (AI-speak for actually using the model after it's trained) can be done on CPUs, especially if the models are smaller, optimized, or running in situations where ultra-low latency isn't essential.
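To make that concrete, here is a minimal sketch of small-model inference as nothing more than a couple of matrix multiplies, the kind of workload a modern CPU handles comfortably. The random weights are illustrative; a real deployment would load a trained, CPU-optimized model (for example, a quantized ONNX file).

```python
import numpy as np

# Illustrative weights for a tiny two-layer classifier (16 -> 32 -> 4).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 32)), np.zeros(32)
W2, b2 = rng.standard_normal((32, 4)), np.zeros(4)

def infer(x: np.ndarray) -> int:
    """Forward pass of a small model: cheap enough for any CPU."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))      # predicted class index

print(infer(rng.standard_normal(16)))
```

At this scale, the entire forward pass is microseconds of work; the GPU's parallelism only pays off when models grow orders of magnitude larger.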

CPUs can handle a huge range of AI tasks just fine. We're so focused on GPU performance, however, that we're not using what we already have right in front of us.

We don't need to keep building expensive new data centers packed with GPUs to meet the growing demand for AI. We just need to use what's already out there efficiently.

That's where things get interesting, because now we have a way to actually do that.

How decentralized compute networks change the game

DePINs, or decentralized physical infrastructure networks, are a viable solution. It's a mouthful, but the idea is simple: people contribute their unused computing power (like idle CPUs), which gets pooled into a global network that others can tap into.

Instead of renting time on a centralized cloud provider's GPU cluster, you can run AI workloads across a decentralized network of CPUs anywhere in the world. These platforms create a kind of peer-to-peer computing layer where jobs can be distributed, executed, and verified securely.
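The fan-out pattern is easy to simulate locally. In this sketch each "peer" is just a local thread standing in for a remote machine contributing idle CPU time; a real DePIN would add job scheduling across the network and cryptographic verification of results, which this toy omits.

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(job_id: int) -> tuple[int, int]:
    # Stand-in for a CPU-friendly AI task, e.g. embedding one
    # document chunk; here it is just a deterministic checksum.
    checksum = sum(i * i for i in range(10_000))
    return job_id, checksum

# Four simulated peers pull eight jobs from the shared pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_job, range(8)))

print(len(results))  # all eight jobs completed across the pool
```

The caller never cares which peer ran which job, only that every job comes back with a verifiable result; that indirection is what lets the pool grow as more machines join.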

This model has several clear benefits. First, it's much cheaper: you don't need to pay premium prices to rent a scarce GPU when a CPU will do the job just fine. Second, it scales naturally: the available compute grows as more people plug their machines into the network. Third, it brings computing closer to the edge: tasks can run on machines near where the data lives, reducing latency and improving privacy.

Think of it like Airbnb for compute. Instead of building more hotels (data centers), we're making better use of all the empty rooms (idle CPUs) people already have.

By shifting our thinking and using decentralized networks to route AI workloads to the right processor type, GPU when needed and CPU when possible, we unlock scale, efficiency, and resilience.
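A router like that can start as a simple heuristic. The function below is a toy sketch: the task fields (`training`, `params_b`, `latency_ms`) and the thresholds are illustrative assumptions, not any production scheduler's logic.

```python
def route_workload(task: dict) -> str:
    """Toy heuristic: send heavy or latency-critical work to GPUs,
    everything else to the far larger pool of idle CPUs."""
    if task.get("training") or task.get("params_b", 0) > 7:
        return "gpu"   # training or large-model inference
    if task.get("latency_ms", 1000) < 50:
        return "gpu"   # ultra-low-latency serving
    return "cpu"       # agent logic, small models, retrieval

print(route_workload({"training": False, "params_b": 1, "latency_ms": 500}))
```

Even this crude rule captures the article's point: the default answer should be the cheap, abundant processor, with GPUs reserved for the jobs that genuinely need them.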

The bottom line

It's time to stop treating CPUs like second-class citizens in the AI world. Yes, GPUs are essential; no one is denying that. But CPUs are everywhere. They're underused yet still perfectly capable of powering many of the AI tasks we care about.

Instead of throwing more money at the GPU shortage, let's ask a smarter question: are we even using the computing we already have?

With decentralized compute platforms stepping up to connect idle CPUs to the AI economy, we have a huge opportunity to rethink how we scale AI infrastructure. The real constraint isn't just GPU availability; it's a mindset shift. We're so conditioned to chase high-end hardware that we overlook the untapped potential sitting idle across the network.

Opinion by: Naman Kabra, co-founder and CEO of NodeOps Network.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.