IndiaAI Mission Compute Capacity To Be Underused, Says Report

The Takshashila Institution’s recent State of AI Governance Report has stirred conversations about underutilisation of compute capacity under the IndiaAI Mission. Although the Indian Government intends to stimulate startups and foster indigenous AI models by building up robust computational resources, the report predicts that a significant portion of this capacity may remain untapped. It cites two principal reasons: too few projects are likely to meet the criteria for Government subsidies, and intricate bureaucratic workflows could hinder effective access to the resources.

In March 2024, the Union Cabinet greenlit the IndiaAI Mission, earmarking Rs 4563.36 crore to enhance compute capacity. One of the Mission’s key developments is the IndiaAI Compute Portal, designed to give AI projects access to a range of services including compute, network, storage, platform, and cloud facilities. Approved projects and startups will benefit from a 40% subsidy on AI compute services hosted on the cloud.

The State of AI Governance report spans the regulatory priorities across four major regions: the United States, the European Union, India, and China, assessing them on various parameters. These include aspects such as transparency and accountability, geopolitical strategies, societal wellbeing, and the encouragement of innovation, all rated on a scale from one to six. Notably, the report identifies India’s strongest emphasis as the promotion of innovation, although it warns of potential operational inefficiencies in executing these initiatives.

Similarly, the United States is shown striving towards innovation-centric regulations. The report examines the implications of U.S. policies, including an executive order signed by President Donald Trump in January this year aimed at dismantling barriers to global AI leadership. This order reversed an October 2023 executive order by former President Joe Biden, which focused more on the safety and reliability of AI models. The U.S. strategy also gives significant weight to geopolitics, specifically seeking to regulate AI chip exports through its AI diffusion framework.

As part of its export control measures, the U.S. has listed China as an adversary, complicating the procurement of American AI chips for Chinese firms. While India doesn’t feature on the adversary list, it also isn’t regarded as a close ally under these regulations. The report raises concerns that these controls may hinder India’s access to advanced AI chips and computing technologies.

Nonetheless, the report suggests that the present U.S. restrictions on AI chip exports may not intensify. It attributes this to the rise of the Chinese AI firm DeepSeek, whose progress has lessened the dependence on proprietary chips for technological advancement. A similar view was voiced in India by Nandan Nilekani in a speech at the Global Technology Summit in Delhi in April this year.

Contrastingly, the European Union prioritises transparency and accountability, categorising AI models into risk bands – unacceptable, high, limited, and minimal. These categories come with their set of compliance requirements, and the EU implements extensive measures to shore up societal wellbeing through governance mechanisms such as performance evaluations, mandatory disclosures, licenses, and certifications.

The report warns that, owing to these stringent restrictions, companies might opt not to release AI models or features in the EU at all, which could in turn pressure authorities into diluting the rigour of AI regulations.

In China, equal weight is given to AI governance capabilities and innovation promotion. The report underscores China’s formidable regulatory command along with the state’s ability to finance and manage extensive computing networks dedicated to AI development. By fostering public-private research alliances and promoting large-scale infrastructure projects, China aims to ensure equitable distribution of regional computing power and supports AI infrastructure investments through tax incentives.

The Takshashila Institution proposes that open-source and open-weight models are likely to proliferate from China and the EU, presenting a “pathway to strategic autonomy and technological leadership.” It contends that AI models from DeepSeek and the French AI startup Mistral are expected to remain open source.

Finally, the report casts doubt on the relevance of using the computing power consumed during AI system training as a trigger for regulatory requirements. It cites Biden’s October 2023 executive order as an example of this approach: AI models trained with computing power exceeding 10²⁶ floating point operations (FLOPs) must comply with additional reporting mandates.

The EU AI Act takes a similar approach, treating any general-purpose AI model whose training exceeds 10²⁵ FLOPs as presenting systemic risk. The Takshashila Institution argues that these benchmarks for regulating AI may soon become obsolete as inference capabilities evolve and smaller, more efficient models emerge.
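To see why these thresholds sit where they do, one can roughly estimate a model’s training compute with the widely used rule of thumb of about 6 FLOPs per parameter per training token (this approximation, and the illustrative model sizes below, are assumptions for the sketch, not figures from the report or either regulation):

```python
# Sketch: comparing estimated training compute against the two regulatory
# thresholds discussed above. The 6 * params * tokens formula is a common
# rule of thumb; the model sizes and token counts are illustrative only.

US_EO_THRESHOLD = 1e26      # Biden executive order reporting threshold (FLOPs)
EU_AI_ACT_THRESHOLD = 1e25  # EU AI Act systemic-risk threshold (FLOPs)

def training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate of total training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

examples = [
    ("mid-size open model", 7e9, 2e12),    # 7B params, 2T tokens (illustrative)
    ("frontier-scale model", 1e12, 15e12), # 1T params, 15T tokens (illustrative)
]

for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    print(f"{name}: {flops:.1e} FLOPs | "
          f"crosses EU threshold: {flops > EU_AI_ACT_THRESHOLD} | "
          f"crosses US threshold: {flops > US_EO_THRESHOLD}")
```

Under these assumed numbers, the mid-size model falls well below both thresholds, while the frontier-scale model crosses the EU’s 10²⁵ line but not the U.S. 10²⁶ line, illustrating both the order-of-magnitude gap between the two regimes and the report’s point that efficiency gains can push capable models under such fixed compute triggers.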
