The Government of India has selected five projects under the second round of its expression of interest for the Safe and Trusted AI pillar of the IndiaAI programme, aiming to accelerate work on deepfake detection, AI forensics, bias mitigation, and robust evaluation tools for generative AI. According to an official statement, the initiatives are intended to ensure AI systems deployed in the country are reliable, secure, and inclusive.
IndiaAI—a division under the Ministry of Electronics and Information Technology (MeitY) and the implementation agency for the IndiaAI Mission—said the selected projects were chosen from more than 400 proposals submitted by academic institutions, startups, research organisations, and civil society groups. A multi-stakeholder technical committee reviewed the submissions before recommending the final cohort for governmental support.
What the projects aim to deliver
Collectively, the five projects translate the Safe and Trusted AI agenda into practice by combining resilience testing, bias audits, and forensic capabilities to support responsible AI development and deployment. The selected workstreams include:
- Real-time voice deepfake detection to counter impersonation and synthetic audio misuse.
- Advanced analysis for audio-visual deepfakes and signature forgery detection to strengthen digital and physical evidence verification.
- Evaluation of gender bias in agricultural large language models to ensure domain-specific inclusivity and fairness.
- Development of penetration-testing tools for large language models and generative AI to assess model robustness and security.
Project highlights and collaborators
- A multi-agent retrieval-augmented generation (RAG) framework for deepfake detection and governance will be led by the Indian Institute of Technology (IIT) Jodhpur in collaboration with IIT Madras.
- IIT Mandi, working with the Directorate of Forensic Services in Himachal Pradesh, will develop AI Vishleshak to improve audio-visual deepfake identification and signature forgery detection.
Why it matters
As AI systems become more pervasive, the risks associated with synthetic media, biased outputs, and model vulnerabilities are rising. The selected projects aim to tackle these challenges head-on by building capabilities that can:
- Detect and flag deepfakes in real time, reducing the spread of misinformation and fraud.
- Enhance forensic tools to support investigations involving manipulated content and forged signatures.
- Surface and address gender bias in specialised AI models, particularly in sectors like agriculture where equitable access and representation are crucial.
- Stress-test generative AI systems to uncover security gaps and improve resilience before deployment.
According to the official statement, the initiative underscores a practical pathway for Safe and Trusted AI by integrating rigorous testing and accountability measures into the AI lifecycle.
About IndiaAI and the mission
IndiaAI, a division of MeitY, serves as the implementation agency for the IndiaAI Mission. The mission focuses on democratising the benefits of AI, bolstering India’s leadership in the field, promoting technological self-reliance, and ensuring the ethical and responsible use of AI. The newly selected projects align with these priorities by prioritising safety, inclusivity, and transparency in AI systems.
With this second-round selection, the government signals continued momentum in building a national ecosystem that can responsibly harness AI’s potential while safeguarding citizens and institutions from emerging risks.