From Shortages to Scale, io.net’s Approach to Rewriting AI Compute Access | AIM

The global AI boom has collided with a severe compute crunch. Demand for GPUs has outpaced the ability of hyperscalers to procure, integrate, and price capacity, stalling research timelines and raising costs for startups and enterprises alike. A new approach is gaining momentum: a decentralised GPU marketplace that aggregates idle or underutilised hardware and exposes it through an open, flexible, and cost-transparent network.

A Marketplace for the AI Compute Era

In a conversation with AIM, io.net CEO Gaurav Sharma argued that decentralised GPU networks may be the only architecture capable of scaling at “internet speed.” Rather than waiting months for new data centres to come online, io.net pools GPUs from individual contributors, data centres, and global partners, then routes workloads based on price, stability, and availability.
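Routing "based on price, stability, and availability" can be pictured as a simple scoring function over candidate nodes. The sketch below is illustrative only; the `GpuNode` fields, weights, and scoring formula are assumptions for exposition, not io.net's actual scheduler.

```python
from dataclasses import dataclass

@dataclass
class GpuNode:
    node_id: str
    price_per_hour: float   # supplier's quoted USD per GPU-hour
    uptime: float           # observed fraction of time online, 0.0-1.0
    available_gpus: int     # GPUs currently free on this node

def route(nodes, gpus_needed, w_price=0.5, w_stability=0.5):
    """Pick the node with the best weighted blend of cheapness and stability
    among nodes that can satisfy the request; None if none can."""
    eligible = [n for n in nodes if n.available_gpus >= gpus_needed]
    if not eligible:
        return None
    max_price = max(n.price_per_hour for n in eligible)

    def score(n):
        # Normalise price so the cheapest eligible node scores highest.
        cheapness = 1.0 - n.price_per_hour / max_price if max_price else 1.0
        return w_price * cheapness + w_stability * n.uptime

    return max(eligible, key=score)
```

With weights like these, a cheap node with decent uptime can beat a pricier, marginally more reliable one; shifting the weights encodes a different policy, which is the essence of marketplace-style routing.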

Sharma’s own path spans Linux kernel engineering and leadership roles at AWS, Agoda, eBay, and Binance. He says the mounting AI compute bottleneck directly blocks builders, and that developers want three things above all: control, affordability, and transparency. That’s the focus—not ideology. Many customers, he noted, don’t care whether the network is decentralised; they care whether it’s reliable and cost-effective. Think of it as a travel-style aggregator for GPUs: bringing together many suppliers and matching them with demand in real time.

Why Now—and Why India Matters

Traditional clouds face incentive and capacity constraints. Building data centres and securing high-end GPUs takes time and billions in capital—timelines that don’t align with the current surge in AI workloads. A decentralised marketplace can tap existing, fragmented supply far faster.

Sharma believes India will become a central node in this model, even though it’s not a decentralisation-first market today. He highlighted three advantages for teams training models with speed and cost in mind:

  • Abundant technical talent with hands-on GPU configuration skills—scarcer and more expensive in Western markets.
  • Cost-efficient operations due to favourable labour and energy economics, lowering per-GPU running costs.
  • Faster scaling made possible by a large engineering pool that can quickly provision and manage global AI workloads.

Scaling via Web3: Speed Over Steel

To avoid the slow, capex-heavy path of traditional infrastructure expansion, io.net leaned on a Web3-first strategy. The company raised $40 million through a Web3 round and used tokenomics to kickstart network supply, overcome the cold start problem, and activate a community of contributors. Airdrops and continuous product testing helped harden the network with comparatively low capital outlay, enabling rapid expansion while building early trust among developers.

Two Hard Problems: Quality Assurance and Trust

Rapid scale introduces classic marketplace challenges. First, quality and data accuracy must be maintained continuously. Any large, evolving network risks inventory drift—where claimed resources and real performance diverge over time. That means constant verification, telemetry, and curation to ensure that GPU nodes meet the promised specs and reliability targets.
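The verification loop described above, comparing claimed specs against live telemetry to catch inventory drift, might look like the following minimal sketch. The field names, tolerance, and dict-based interface are hypothetical, chosen only to illustrate the check.

```python
def verify_node(claimed, measured, tol=0.10):
    """Return the spec fields where measured telemetry falls more than
    `tol` (as a fraction) below the claimed value -- a drift signal.

    claimed/measured: dicts mapping spec name -> numeric value,
    e.g. {"vram_gb": 80, "tflops": 312, "net_gbps": 100}.
    """
    drift = {}
    for key, claimed_val in claimed.items():
        measured_val = measured.get(key, 0.0)
        if measured_val < claimed_val * (1 - tol):
            drift[key] = (claimed_val, measured_val)
    return drift
```

A node whose benchmarked throughput has slipped well below its listed spec would be flagged for curation, repricing, or removal, keeping the marketplace's inventory honest over time.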

Second, winning over cautious customers takes time. Sharma noted that even with ample inventory, newcomers typically start small—perhaps 10–15 GPUs over a couple of months—before expanding into larger clusters. This trial phase naturally tempers growth but is essential to building a durable flywheel. As peer networks and established companies share successful runs, he expects this friction to lessen.

Business Model and Early Traction

io.net earns through a platform fee and revenue-sharing arrangements with data centres. Because the platform is designed to scale horizontally, its operating costs grow more slowly than those of traditional, infrastructure-heavy models.

The customer mix spans universities, media, and robotics: IIT Bombay, UC Berkeley, Eros Now, and Frodobots are among those tapping io.net for training and inference. Use cases range from audio generation and image models to voice-to-song systems. Through partnerships with Antler and Y Combinator, roughly 15–20 early-stage startups are already building on the network. One highlight is Wondera.ai, which uses io.net for an LLM that can generate songs in the style of specified artists.

In just six months of monetisation, io.net surpassed $25 million in revenue, with larger enterprise contracts in the pipeline. The company expects India to contribute materially across supply, engineering, and demand as its marketplace matures.

Decentralisation Without the Dogma

Sharma emphasises that the value proposition is pragmatic. For most customers, decentralisation is a means, not an end. They want lower costs, immediate access to GPUs, and predictable performance. By intelligently routing workloads across a diverse, verifiable pool of hardware, io.net aims to deliver exactly that—while insulating customers from GPU scarcity and long hyperscaler queues.

The Road Ahead

The AI wave shows no signs of slowing, and hyperscalers can’t erect new facilities quickly enough to match it. If decentralised GPU marketplaces can maintain quality, reduce friction for first-time users, and keep pricing transparent, they stand to become a core layer in the AI stack. Sharma’s bet is clear: the future of compute will be aggregated, flexible, and global—scaling through networks rather than new walls of steel and concrete.

In a world where access to GPUs dictates who gets to build, a marketplace approach could move the industry from scarcity to scale. For developers, researchers, and companies racing to ship AI products, that shift can’t come soon enough.
