Infopercept Releases Threat Predictions Report for 2026: Attacks on AI and Attacks Using AI
Ahmedabad, Gujarat, India — November 25: Infopercept, a platform-led managed security services provider with a global footprint, has published its 2026 Threat Predictions Report, mapping how artificial intelligence is poised to reshape the cyber risk landscape in the coming year.
The report, titled “Infopercept 2026 Threat Predictions: Attacks on AI & Attacks Using AI,” draws a sharp line between two accelerating fronts: AI as a target and AI as a tool. It examines how generative and agentic AI will influence both offensive operations and defensive postures worldwide.
“Never in the history of cybersecurity have attackers and defenders shared equal access to the same source of power,” said Satyakam Acharya, director of exposure management at Infopercept. “GenAI has erased traditional skill gaps. Attacks that once required high levels of expertise can now be executed by almost anyone. Our 2026 predictions show how AI will accelerate attacks, amplify adversaries, and blur the line between human intent and autonomous action.”
Major Attack Predictions for 2026
A. Attacks on AI
Threats that directly target models, agents, data pipelines, and the broader AI stack:
- GenAI-driven development expands the attack surface: As non-developers ship code with AI assistance, adversaries may seed poisoned datasets, malicious prompt templates, and trojanized plug-ins to compromise software supply chains.
- Manipulation of the Model Context Protocol (MCP): Attackers could tamper with or reroute context sources, mislead AI systems, trigger recursive agent loops, or abuse overly permissive connectors.
- Multi-LLM setups enable gateway bypass: Split prompts, covert connectors, and unvetted AI endpoints may let attackers sidestep LLM gateways much like historical firewall bypass tactics.
- SOC automation becomes a poisoning target: Autonomous security agents inside SOCs may be manipulated to disable sensors, erase evidence, or mask intrusions.
- Identity-layer agents create new privilege risks: Stolen agent tokens and impersonated automation identities could enable token forgery, lateral movement, and privilege chaining (a minimal scope-check sketch follows this list).
- Compromised AI testing undermines the software development lifecycle (SDLC): Poisoned AI-based testing tools might miss critical flaws or generate insecure “auto-fixes,” elevating systemic risk.
- On-prem and air-gapped AI erode isolation: Data bridges for model updates can introduce novel infiltration paths into highly restricted or critical environments.
- Shadow AI fuels hidden backdoors: Unsanctioned LLMs and departmental AI tools may bypass security controls and leak confidential data.
- Agentic malware and ransomware emerge: AI-driven code could start making autonomous choices—selecting victims, adapting to defenses in real time, negotiating ransoms, and self-propagating.
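To make the identity-layer risk above concrete, here is a minimal, hypothetical sketch of the least-privilege pattern such predictions point toward: automation identities are authorized deny-by-default against a server-side scope policy, so a stolen agent token cannot be chained into privileges its owner was never granted. The names (`AgentToken`, `SCOPE_POLICY`, `authorize`) are illustrative assumptions, not part of Infopercept’s report or the Invinsense platform.

```python
# Hypothetical illustration: least-privilege scope checks for agent tokens.
# All identifiers here are assumptions for the sketch, not a vendor API.
from dataclasses import dataclass, field

# Each automation identity gets the narrowest scope set that covers its job.
SCOPE_POLICY = {
    "ticket-triage-agent": {"tickets:read", "tickets:comment"},
    "patch-rollout-agent": {"hosts:read", "patches:apply"},
}

@dataclass(frozen=True)
class AgentToken:
    agent_id: str
    scopes: frozenset = field(default_factory=frozenset)

def authorize(token: AgentToken, requested_scope: str) -> bool:
    """Deny by default: the action must be in both the token and the policy."""
    allowed = SCOPE_POLICY.get(token.agent_id, set())
    return requested_scope in allowed and requested_scope in token.scopes

# A stolen or over-minted token still cannot chain into new privileges,
# because the server-side policy caps what each agent identity may do.
token = AgentToken("ticket-triage-agent", frozenset({"tickets:read", "hosts:read"}))
assert authorize(token, "tickets:read")
assert not authorize(token, "hosts:read")      # not in this agent's policy
assert not authorize(token, "patches:apply")   # not in the token's scopes
```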
B. Attacks Using AI
How adversaries wield AI to supercharge offense:
- Scalable generative deception: Deepfakes, synthetic personas, and voice clones transition into turnkey kits for fraud, social engineering, and precision phishing.
- Autonomous exploit discovery: AI agents hunt for and weaponize vulnerabilities in minutes, squeezing defenders’ response windows.
- Polymorphic, AI-generated malware: Constantly mutating payloads evade signature- and behavior-based detections.
- Cognitive overload against SOCs: AI-crafted floods of realistic alerts overwhelm analysts, creating cover for real intrusions.
- Dual-layer decision hijacking: Adversaries aim to influence both human operators and AI-driven systems in tandem to steer outcomes.
Infopercept’s analysis underscores a pivotal shift: defenders must treat AI components—models, agents, connectors, and context sources—as first-class assets with their own attack surfaces. The firm argues that governance for prompts, datasets, identity tokens, and agent actions should be embedded across the SDLC and SOC workflows, alongside traditional controls like EDR, SIEM, and SOAR. Multi-LLM environments and AI-enabled identity layers will require rigorous validation, isolation, and least-privilege designs to prevent lateral exposure and privilege chaining.
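As a companion sketch of treating connectors and context sources as first-class assets, the snippet below shows a deny-by-default allow list that an LLM gateway could apply before any agent context fetch. All connector names and hostnames are hypothetical; the report does not prescribe this specific mechanism.

```python
# Hypothetical sketch: an LLM gateway refuses any connector or context URL
# that is not on an explicit, signed-off allow list.
from urllib.parse import urlparse

APPROVED_CONNECTORS = {"jira-internal", "confluence-internal"}
APPROVED_CONTEXT_HOSTS = {"docs.example.internal", "wiki.example.internal"}

def vet_context_request(connector: str, context_url: str) -> bool:
    """Deny-by-default gate for agent context fetches."""
    if connector not in APPROVED_CONNECTORS:
        return False  # unvetted connector: shadow AI or a rerouted source
    host = urlparse(context_url).hostname or ""
    return host in APPROVED_CONTEXT_HOSTS

print(vet_context_request("jira-internal", "https://docs.example.internal/runbook"))  # True
print(vet_context_request("jira-internal", "https://attacker.example.com/poisoned"))  # False
print(vet_context_request("pastebin-plugin", "https://docs.example.internal/x"))      # False
```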
About Infopercept’s Threat Research Team
Infopercept’s Threat Research Team blends offensive, defensive, and AI security disciplines. Working across red teaming, threat intelligence, and platform engineering, the group mines insights from the company’s Invinsense platform to anticipate adversarial tradecraft and produce forward-looking guidance for security practitioners.
About Infopercept
Infopercept is among India’s fastest-growing platform-led managed security services companies, supporting global customers across defensive, offensive, detection and response, and compliance mandates. Its cybersecurity platform, Invinsense, unifies SIEM, SOAR, EDR, deception, offensive security, and compliance capabilities, while its MDR services are delivered by round-the-clock experts. Learn more at www.infopercept.com.
Note: This article is based on materials provided via press release.