Nervous Indian fintechs push Anthropic for access to Mythos – The Economic Times

A trio of leading Indian fintechs is pressing Anthropic for controlled access to its new AI model, Mythos, aiming to probe their systems for weaknesses before bad actors do.

One97 Communications, Razorpay Software and Pine Labs have asked San Francisco–based Anthropic for permission to test Mythos directly on their own infrastructure and applications. The companies want to run security evaluations and red-team exercises to spot potential vulnerabilities, according to people familiar with the outreach. Their requests followed Anthropic’s decision to limit the initial roll-out of Mythos, a next-generation large language model the company has flagged as too risky for a broad public release.

Why Indian fintechs want in

Fintech platforms are prime targets for fraud and cyberattacks, and AI advances can cut both ways—strengthening defenses while also giving attackers sharper tools. Access to Mythos, even in a sandboxed form, would let payments and lending players:

  • Stress-test customer-facing bots and support workflows against prompt injection, data leakage and social-engineering scenarios.
  • Probe internal risk, KYC and fraud-detection pipelines for model-induced blind spots or avenues for policy evasion.
  • Assess how an advanced model might generate or deflect attacks, from phishing and deepfakes to automated abuse of API endpoints.
  • Validate compliance controls—such as auditability, rate limiting and data minimization—before any wider integration.

For firms operating at national scale, even a narrow weakness can translate into large financial exposure. Proactive testing with the most capable models available has become a standard playbook for CISOs seeking to get ahead of emergent threats.

Anthropic’s cautious roll-out

Anthropic has characterized Mythos as a powerful system with elevated misuse potential, warranting a phased release. In practice, this typically means strict gating: access for vetted partners, enforced safety filters, rate limits, usage monitoring and clear red lines on dangerous capabilities. The company’s stance reflects a broader industry trend of shipping frontier models incrementally while expanding safety evaluations and external red-teaming.

The tension is familiar: enterprises want early access to understand real-world risk, while model providers want to limit exposure until they’ve mapped and mitigated the sharp edges. The resulting negotiation often centers on controls, accountability and verifiable safeguards.

What “safe access” could look like

If Anthropic entertains the fintechs’ requests, access is likely to arrive with enterprise-grade guardrails, such as:

  • Dedicated, sandboxed environments or time-bound API keys with strict scopes.
  • Comprehensive logging, audit trails and disclosure of model/system prompts and safety policies.
  • Content and capability filters to block generation of high-risk outputs.
  • Data-handling guarantees—no training on client data, optional zero-retention modes and clear data locality.
  • Co-developed red-team protocols defining permitted tests, escalation paths and remediation timelines.

These controls allow security teams to run meaningful evaluations while containing the blast radius of any discovered exploit or unsafe behavior.

Signals for the broader ecosystem

The push from One97 Communications, Razorpay Software and Pine Labs underscores how fast AI risk management is professionalizing in financial services. Payments processors, lenders and merchants increasingly expect transparency around model behavior, incident response and third-party attestations. In turn, model providers are being asked to offer more than an API—namely, testing frameworks, safety documentation and explicit commitments on data governance.

How Anthropic handles enterprise testing for Mythos could set an informal standard for frontier-model access in regulated industries, balancing innovation with duty of care.

Bottom line

India’s fintech leaders want to put Mythos through its paces on their own turf, not to deploy it immediately, but to harden their defenses against what such a powerful model could enable. Anthropic, wary of the risks, is moving cautiously. Finding a middle ground—tightly controlled test access with measurable safeguards—may be the fastest path to both safer products and greater trust on all sides.

Leave a Reply

Your email address will not be published. Required fields are marked *

You May Also Like

Unlock Your Escape: Mastering Asylum Life Codes for Roblox Adventures

Asylum Life Codes (May 2025) As a tech journalist and someone who…

Challenging AI Boundaries: Yann LeCun on Limitations and Potentials of Large Language Models

Exploring the Boundaries of AI: Yann LeCun’s Perspective on the Limitations of…

Unveiling Oracle’s AI Enhancements: A Leap Forward in Logistics and Database Management

Oracle Unveils Cutting-Edge AI Enhancements at Oracle Cloud World Mumbai In an…

Charting New Terrain: Physical Reservoir Computing and the Future of AI

Beyond Electricity: Exploring AI through Physical Reservoir Computing In an era where…