AI Adoption Surges While Governance Lags — Report Warns of Growing Shadow Identity Risk
Baltimore, MD — December 2, 2025 | CyberNewsWire
AI is now embedded in everyday enterprise workflows, but oversight is trailing far behind. That’s the central finding of the 2025 State of AI Data Security Report, which paints a stark picture: while 83% of organizations say they use AI in daily operations, only 13% claim strong visibility into how these systems handle sensitive data.
Produced by Cybersecurity Insiders with research support from Cyera Research Labs, the study aggregates responses from 921 cybersecurity and IT professionals across industries and company sizes. The results point to a mounting governance gap as AI systems expand access, move faster than human controls, and operate continuously, often without the guardrails that govern human users.
Key findings at a glance
- 83% of organizations report day-to-day AI use, but just 13% have strong visibility into AI data handling.
- Two-thirds say they’ve observed AI tools over-accessing sensitive information.
- 23% admit they have no controls for prompts or outputs.
- 76% identify autonomous AI agents as the hardest systems to secure.
- 57% lack the capability to block risky AI actions in real time.
- Visibility is thin overall: nearly half report no insight into AI usage at all, and another third say they see only a fraction of activity.
AI as an ungoverned identity
The report frames AI not just as a tool but as an emerging “shadow identity.” Unlike human users with discrete roles and working hours, AI systems—especially autonomous agents—read faster, access more systems and data, and operate non-stop. Traditional, human-centric identity and access models struggle to keep pace, making it harder to enforce least-privilege access, monitor behavior, or trace actions back to accountable owners.
This mismatch shows up in real-world incidents. Two-thirds of respondents have already caught AI tools pulling more data than they should. With nearly a quarter lacking controls on prompts and outputs, organizations risk both sensitive data leakage and decisions influenced by uncontrolled inputs. The result: expanding data exposure and compliance blind spots, especially when AI crosses boundaries between SaaS apps, internal systems, and data lakes.
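To make the identity framing concrete, here is a minimal sketch, in Python, of what registering an AI agent as a first-class identity could look like: an accountable human owner, an explicit role, and narrowly scoped, time-boxed access. The structure and field names are illustrative assumptions, not taken from the report or any particular IAM product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative record for an AI agent treated as a distinct identity:
# an accountable owner, an explicit role, least-privilege scopes, and
# short-lived access instead of a standing service account.
@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                  # human accountable for the agent's actions
    role: str                   # e.g. "support-summarizer"
    scopes: set[str] = field(default_factory=set)  # datasets it may read
    expires: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=1)
    )

    def can_access(self, dataset: str) -> bool:
        """Least-privilege check: the scope must be granted AND unexpired."""
        return dataset in self.scopes and datetime.now(timezone.utc) < self.expires

agent = AgentIdentity(
    agent_id="agent-0042",
    owner="alice@example.com",
    role="support-summarizer",
    scopes={"support_tickets"},
)
print(agent.can_access("support_tickets"))  # True while the grant is unexpired
print(agent.can_access("payroll_db"))       # False: never granted
```

The design choice mirrors the report's framing: every action traces back to a named owner, and access expires by default rather than accumulating.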
Autonomous agents: the riskiest frontier
Autonomous AI agents are flagged as the most exposed area of modern AI adoption. According to 76% of respondents, these systems are the toughest to secure, in part because they can chain tasks, call external tools or APIs, and make decisions at machine speed. Compounding the problem, 57% of organizations lack the ability to halt dangerous AI actions in real time—meaning that once an agent starts a risky sequence, there may be no practical kill switch.
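What a real-time kill switch could look like in practice: the sketch below assumes a hypothetical agent loop in which every proposed action passes through a policy gate before it executes. The Action structure, blocked patterns, and scope names are illustrative assumptions, not from the report or any specific agent framework.

```python
import re

# Deny-by-pattern and deny-by-scope checks applied before an agent
# action runs; all names here are illustrative assumptions.
BLOCKED_PATTERNS = [
    re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),  # destructive SQL
    re.compile(r"\brm\s+-rf\b"),                     # destructive shell command
]
OFF_LIMITS = {"payroll_db", "customer_pii"}          # data the agent may never touch

class Action:
    def __init__(self, tool: str, target: str, payload: str):
        self.tool = tool        # e.g. "sql", "shell", "http"
        self.target = target    # system or dataset the action touches
        self.payload = payload  # the command or request body

def allow(action: Action) -> bool:
    """Return False to block: the caller must refuse to execute the action."""
    if action.target in OFF_LIMITS:
        print(f"BLOCKED: {action.tool} -> {action.target} (out of scope)")
        return False
    if any(p.search(action.payload) for p in BLOCKED_PATTERNS):
        print(f"BLOCKED: dangerous payload in {action.tool} call")
        return False
    return True

# The agent loop checks allow() before every step; this interception
# point is the real-time control most respondents say they lack.
proposed = Action("sql", "payroll_db", "SELECT * FROM salaries")
if allow(proposed):
    print("executing:", proposed.payload)
```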
Visibility gaps drive uncertainty
Effective governance starts with knowing where AI is in use and what data it touches. Yet the report finds that nearly half of organizations have no visibility into AI usage whatsoever, and another third only see a sliver of activity. That leaves most enterprises unsure which models or agents are running, what they’re accessing, and whether their actions align with policy and regulation.
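Closing that gap starts with an inventory. As one possible starting point, the sketch below scans a network egress log for calls to known AI API endpoints and ties them back to users. The log format (a CSV with timestamp, user, and destination columns) and the domain list are assumptions for illustration, not a prescribed method from the report.

```python
import csv

# Hypothetical first pass at an AI usage inventory: flag outbound
# calls to well-known AI API hosts and attribute them to users.
AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Map each user to the AI endpoints they called, so activity can
    be tied back to an accountable owner and checked against policy."""
    usage: dict[str, set[str]] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination"] in AI_API_DOMAINS:
                usage.setdefault(row["user"], set()).add(row["destination"])
    return usage

# Usage: for user, endpoints in find_ai_usage("egress.csv").items(): ...
```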
What security teams need next
While the report focuses on current risks, its implications point to an emerging set of best practices:
- Elevate AI to first-class identity: Treat AI systems and agents as distinct identities with explicit roles, attributes, and enforceable least-privilege access.
- Strengthen data-centric controls: Map sensitive data, monitor access paths, and apply policy where data lives—not just at app boundaries.
- Instrument real-time guardrails: Build the ability to observe, intercept, and block risky AI actions as they happen.
- Govern prompts and outputs: Standardize prompt policies, redact or mask sensitive inputs, and inspect outputs for data leakage (a minimal sketch follows this list).
- Close the visibility gap: Inventory AI usage across the environment—models, agents, integrations—and link activity back to owners and policies.
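On the prompt-and-output point, a minimal redaction pass might look like the following. The patterns and placeholder tokens are illustrative assumptions; a production system would pair them with data classification, logging, and output inspection.

```python
import re

# Illustrative redaction applied to prompts before they reach a model
# and to outputs before they leave it; the patterns are assumptions.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # card-like digit runs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),  # email addresses
]

def redact(text: str) -> tuple[str, int]:
    """Mask sensitive patterns; return cleaned text plus a hit count
    so each redaction can be logged as a policy event."""
    hits = 0
    for pattern, token in REDACTIONS:
        text, n = pattern.subn(token, text)
        hits += n
    return text, hits

prompt = "Summarize the dispute for jane.doe@example.com, SSN 123-45-6789."
clean, hits = redact(prompt)
print(clean)  # Summarize the dispute for [EMAIL], SSN [SSN].
print(hits)   # 2
```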
The 2025 State of AI Data Security Report underscores a turning point. AI is no longer experimental; it’s mainstream. But without identity-aware governance, granular data controls, and real-time enforcement, AI risks becoming the fastest, most privileged “user” in the enterprise—one few teams can see, and even fewer can control.