IndiaAI Selects 5 Projects To Advance Safe And Trusted AI In Country

New Delhi: The government on Tuesday announced the selection of five projects under the second round of its Expression of Interest for the “Safe and Trusted AI” pillar of the IndiaAI programme, a business division under the Ministry of Electronics and Information Technology (MeitY).

Chosen from over 400 submissions by academic institutions, startups, research organisations, and civil society groups, the projects were evaluated by a multi-stakeholder technical committee. According to an official statement, the selected initiatives aim to ensure AI systems deployed in India are reliable, secure, and inclusive—translating the “Safe and Trusted AI” mandate into practical tools, tests, and audits.

What the selected projects target

  • Real-time voice deepfake detection: Building capabilities to identify synthetic or manipulated speech on the fly, helping platforms and authorities counter impersonation and fraud.
  • Advanced forensic analysis for forgeries: Developing techniques to detect and analyse audio-visual tampering and signature forgeries, strengthening digital evidence and investigation workflows.
  • Bias evaluation in domain-specific models: Assessing gender bias in agricultural large language models to support more equitable outcomes for diverse user groups and regions.
  • Penetration-testing tools for LLMs and GenAI: Creating red-teaming and resilience-testing toolkits that probe models for vulnerabilities, jailbreaks, and unsafe behaviours before deployment.
  • Robust evaluation frameworks for generative AI: Establishing standardized safety, reliability, and performance benchmarks to guide responsible development and rollout.

Why it matters

The rapid rise of generative AI has amplified concerns around deepfakes, misinformation, data integrity, and biased decision-making. By backing targeted research and engineering efforts, IndiaAI’s Safe and Trusted AI pillar is moving beyond policy intent to deployable solutions—combining resilience testing, forensic capabilities, and bias audits to improve confidence in AI systems across sectors.

Real-time detection and forensic tools can help digital platforms, media houses, and law enforcement respond faster to harmful content and fraudulent activity. Bias evaluation in domain-specific models—such as those used in agriculture—can surface skews that may otherwise disadvantage certain communities. Meanwhile, structured red-teaming and standardized evaluation frameworks give developers and regulators clearer ways to measure and improve safety before systems reach end users.

The bigger picture

As AI adoption expands across governance, agriculture, finance, health, and education, safeguards must keep pace. The five selected projects signal a coordinated approach: building the technical infrastructure and testing regimes that underpin safe AI at scale. IndiaAI’s multi-stakeholder selection process—drawing on academia, startups, research labs, and civil society—also reflects the need for diverse expertise to address complex risks.

While the specific teams behind each project have not been detailed here, the focus areas underscore the government’s priorities: curbing synthetic media harms, strengthening digital forensics, mitigating bias in real-world applications, and institutionalizing rigorous evaluation for generative models. Together, these efforts are intended to support trustworthy AI deployments that align with India’s social and regulatory needs.

As the projects progress, their outputs—tools, benchmarks, and methodologies—are expected to inform both public and private deployments, helping stakeholders adopt AI systems that are safer by design.

Leave a Reply

Your email address will not be published. Required fields are marked *

You May Also Like

Unlock Your Escape: Mastering Asylum Life Codes for Roblox Adventures

Asylum Life Codes (May 2025) As a tech journalist and someone who…

Challenging AI Boundaries: Yann LeCun on Limitations and Potentials of Large Language Models

Exploring the Boundaries of AI: Yann LeCun’s Perspective on the Limitations of…

Unveiling Oracle’s AI Enhancements: A Leap Forward in Logistics and Database Management

Oracle Unveils Cutting-Edge AI Enhancements at Oracle Cloud World Mumbai In an…

Charting New Terrain: Physical Reservoir Computing and the Future of AI

Beyond Electricity: Exploring AI through Physical Reservoir Computing In an era where…