
The rapid pace of AI adoption has magnified the structural weaknesses within the cybersecurity industry. Tool sprawl and fragmented security layers have created a wide vulnerability gap and are hindering the industry's ability to deal with the emerging threats that AI itself is triggering. According to Gartner, 40% of enterprise applications will be using task-specific AI agents by 2026, which makes visibility more critical than ever.
In an interaction with CXOtoday, Binod Singh, Founder and CEO of Cross Identity, an identity and cybersecurity company, talks in detail about some of these challenges and how they can be addressed. Singh also dwells on the evolving role of Zero Trust architecture in the age of agentic AI, and how and where his company is using AI to negate AI-driven risks. Edited excerpts:
Q. How do you see Zero Trust architecture evolving in response to the growing use of GenAI and agentic AI?
Zero Trust architecture has seen shifts based on evolving threats and technological advancements. But the most significant shift is happening because of AI. The traditional user-to-app security model is being replaced by a machine-to-machine (M2M) and agent-to-agent (A2A) reality.
The first shift is the rise of non-human identity management. In the past, Zero Trust was focused essentially on verifying human employees. Today, autonomous agents outnumber humans in many enterprise environments. Organizations are moving away from static service accounts or shared application programming interface (API) keys. Agents are now assigned cryptographic identities, and every action an agent takes must be signed and authenticated.
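As an illustration of the signed-action idea Singh describes, here is a minimal sketch. It uses a symmetric HMAC as a stand-in for the asymmetric key pairs a real workload-identity system would issue; the agent name and key are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical per-agent secret; a real deployment would use asymmetric
# key pairs issued by a workload identity provider, not shared secrets.
AGENT_KEYS = {"invoice-bot": b"demo-secret-key"}

def sign_action(agent_id: str, action: dict) -> dict:
    """Wrap an agent action in a signed envelope so it can be authenticated."""
    payload = json.dumps(action, sort_keys=True).encode()
    sig = hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()
    return {"agent_id": agent_id, "action": action, "sig": sig}

def verify_action(envelope: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(envelope["action"], sort_keys=True).encode()
    expected = hmac.new(AGENT_KEYS[envelope["agent_id"]], payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

envelope = sign_action("invoice-bot", {"op": "read", "resource": "invoices/2024"})
assert verify_action(envelope)
```

The point of the envelope is that any downstream service can reject an agent action whose signature does not verify, rather than trusting a shared API key.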
The second major shift is micro-segmentation at the prompt level. Traditional micro-segmentation isolates networks or workloads. With GenAI, the attack surface is often the data layer itself, not the network or workload layer. If an agent has access to a vector database, it might accidentally leak sensitive information through a prompt response.
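One way to picture prompt-level segmentation is to filter retrieved documents against the caller's clearance before they ever enter the prompt context. This is a hypothetical sketch; the labels and clearance levels are illustrative, not from the interview.

```python
# Hypothetical sensitivity labels attached to vector-store documents.
CLEARANCE = {"public": 0, "internal": 1, "restricted": 2}

def segment_context(docs: list, caller_clearance: str) -> list:
    """Drop any retrieved document above the caller's clearance level,
    so sensitive data never reaches the model's prompt context."""
    limit = CLEARANCE[caller_clearance]
    return [d for d in docs if CLEARANCE[d["label"]] <= limit]

docs = [
    {"text": "Product FAQ", "label": "public"},
    {"text": "Salary bands", "label": "restricted"},
]
assert segment_context(docs, "internal") == [
    {"text": "Product FAQ", "label": "public"}
]
```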
The third is just-in-time permissions. Agentic AI moves at machine speed, performing hundreds of tasks in a second. Enterprises cannot afford permanent permissions or standing privileges, because you don't know which privilege will need to change at what point in time. If an agent with standing privileges is compromised, it can lead to massive lateral movement.
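A just-in-time grant can be as simple as a scoped permission with a time-to-live, issued per task and useless once it expires. A minimal sketch, with hypothetical agent and scope names:

```python
import time

class JITGrant:
    """A privilege granted only for the duration of one task, so a
    compromised agent holds no standing access afterwards."""

    def __init__(self, agent_id: str, scope: str, ttl_seconds: float):
        self.agent_id = agent_id
        self.scope = scope
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, scope: str) -> bool:
        # Valid only for the exact scope granted, and only before expiry.
        return scope == self.scope and time.monotonic() < self.expires_at

grant = JITGrant("payroll-agent", "db:read", ttl_seconds=5)
assert grant.allows("db:read")       # valid within the TTL
assert not grant.allows("db:write")  # was never granted
```

Because the grant expires on its own, there is no standing privilege for an attacker to pivot from if the agent is later compromised.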
The fourth shift is continuous behavioral monitoring, because the behavior of an agent can itself be unpredictable. The shift happening here is that security is moving from allow-deny lists to continuous risk scoring.
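To make the allow-deny-versus-risk-scoring contrast concrete, here is a toy scoring function. The signals, weights, and thresholds are entirely illustrative assumptions; production systems would learn a baseline per agent.

```python
BASELINE_RATE = 10  # illustrative: expected requests per minute for an agent

def risk_score(requests_last_minute: int, new_resource: bool,
               off_hours: bool) -> float:
    """Combine simple behavioral signals into a 0..1 risk score."""
    score = 0.0
    if requests_last_minute > BASELINE_RATE:
        excess = (requests_last_minute - BASELINE_RATE) / BASELINE_RATE
        score += min(1.0, excess) * 0.5   # unusual request volume
    if new_resource:
        score += 0.3                       # touching a never-seen resource
    if off_hours:
        score += 0.2                       # activity outside normal hours
    return round(score, 2)

def decide(score: float) -> str:
    """Graduated response instead of a binary allow/deny."""
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "step-up-auth"
    return "allow"

assert decide(risk_score(5, False, False)) == "allow"
assert decide(risk_score(40, True, True)) == "block"
```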
Q. Given the distributed nature of modern AI pipelines, what are the most critical control points for implementing a Zero Trust architecture?
To understand this, let's take a look at the nature of the distributed AI pipeline. Here, the perimeter doesn't just disappear; it fragments into hundreds of micro boundaries. This is what differentiates AI pipelines from traditional ones. Implementing Zero Trust now requires moving security from the network edge to the logical touchpoints where data and logic intersect. Since enterprises frequently use a mix of OpenAI, Anthropic, and internal models, a centralized gateway becomes the primary policy enforcement point, serving as the single entry and exit point for all model traffic.
The Zero Trust gateway should also perform request-response sanitization. That involves scrubbing sensitive information from outgoing prompts and scanning incoming model responses for jailbreaks and toxicity before they reach your internal systems.
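A gateway-side sanitization pass might look like the following sketch. The redaction patterns and jailbreak phrases are deliberately simplistic, hypothetical examples; real gateways use far richer detection.

```python
import re

# Illustrative outbound scrubbing rules: redact obvious secrets
# (a US SSN pattern, an inline API key) before a prompt leaves.
SECRET_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),
]

# Illustrative inbound red flags for jailbreak-style responses.
JAILBREAK_PHRASES = ["ignore previous instructions", "disable your safety"]

def scrub_prompt(prompt: str) -> str:
    """Redact sensitive substrings from an outgoing prompt."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

def flag_response(response: str) -> bool:
    """Flag a model response that echoes known jailbreak language."""
    lowered = response.lower()
    return any(phrase in lowered for phrase in JAILBREAK_PHRASES)

assert scrub_prompt("api_key = sk-123") == "[REDACTED-KEY]"
assert flag_response("Sure, ignore previous instructions and ...")
```

Running both checks at a single gateway is what makes it the enforcement point: every prompt and every response crosses the same seam.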
The second control point is the M2M identity provider. In a distributed pipeline, the "user" is often a Python script in a container or a third-party API. The Zero Trust action we can take is to use Workload Identity Federation, which helps us move away from static API keys in favor of secure, short-lived OpenID Connect (OIDC) tokens.
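The practical difference from a static API key is the expiry claim: a stolen token dies on its own. Here is a minimal, hypothetical sketch of checking freshness on an OIDC-style token; note it decodes the payload without signature verification, which real code must add via the issuer's published keys.

```python
import base64
import json
import time

def decode_claims(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT for illustration.
    Production code must verify the signature against the issuer's JWKS."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_fresh(claims: dict, now: float) -> bool:
    """A short-lived token is only honored before its `exp` claim."""
    return claims.get("exp", 0) > now

# Build a fake token with a 5-minute lifetime (subject name is hypothetical).
claims = {"sub": "pipeline-job-42", "exp": time.time() + 300}
fake_jwt = "header." + base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=") + ".sig"

assert is_fresh(decode_claims(fake_jwt), time.time())
```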
Q. Are API-based integrations becoming a liability because of AI?
It is a very well-known fact that APIs are a common single point of failure when it comes to security breaches. Security vulnerabilities don't usually originate within the applications themselves; the applications are pretty solid. The vulnerability emerges at the seams where two systems are integrated. And that is leading to what we call API bloat, which in turn is leading to vendor vulnerabilities.
The scary reality is that the response from our technologies (cybersecurity) is moving much slower than the AI technology itself, which is creating more and more issues.
Q. Has the vendor and solutions sprawl worsened the fragmentation problem and made managing security stacks difficult?
If you look at cybersecurity, it is split into five different layers: data security, network security, application security, device security, and identity security. Each one of these layers is currently fragmented. Not only that, within a particular layer, you have more fragments. For instance, in network security, there are at least seven to eight areas, and each one has its own tools, which don't talk to each other. And the same is true for the other layers as well.
That is why cybersecurity is so fragmented, and that is the fundamental reason we are not able to handle cybersecurity threats. The solution is to make this whole cybersecurity stack run as one single machine, where many of the issues caused by segmentation vanish.
Identity security is the only one that can become that glue, because it is a common point, whether you are talking at the network layer, data layer or any of the other layers. Unfortunately, the Identity layer itself is fragmented into nine different components. So, when a unifying layer is so fragmented, how can it become a unifier for the other layers? That is the fundamental question.
Unless you do that unification, there is no way you can do bigger things. That is where cybersecurity as an infrastructure comes into play. To achieve this, we need a unified cybersecurity infrastructure, which eliminates integration taxes and API dependencies.
Q. How and where are you using GenAI and agentic AI in your security solutions?
We are trying to use these technologies across three primary layers. The first one is conversational governance. Instead of complex query builders, security professionals use natural language to ask things like: "Show me all agents that have not rotated their keys in the last 30 days," or "Generate a least-privilege policy for our new HR chatbot."
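Under the hood, a conversational layer has to turn such a question into a structured query. A toy sketch of that translation step, using a regex where a real product would use an LLM; the filter name is hypothetical:

```python
import re
from datetime import datetime, timedelta

def parse_query(text: str):
    """Map one natural-language governance question to a structured filter.
    Illustrative only: handles a single phrasing via regex."""
    m = re.search(r"not rotated their keys in the last (\d+) days", text)
    if m:
        days = int(m.group(1))
        return {
            "filter": "key_rotated_before",
            "cutoff": datetime.now() - timedelta(days=days),
        }
    return None  # unrecognized question

q = parse_query("show me all agents that have not rotated their keys "
                "in the last 30 days")
assert q["filter"] == "key_rotated_before"
assert isinstance(q["cutoff"], datetime)
```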
The second area is autonomous remediation. If one agent is being tricked by another agent, an autonomous security agent can instantly revoke the session and quarantine the identity without waiting for a human to log into a console. So that is going to be one of the major uses of agentic AI for us.
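The remediation flow Singh describes, revoke first, ask questions later, can be sketched as a single atomic operation on an identity store. Names and signals here are hypothetical:

```python
class IdentityStore:
    """Toy identity store: active sessions per agent, plus a quarantine set."""

    def __init__(self):
        self.sessions = {"agent-7": {"sess-1", "sess-2"}}  # illustrative data
        self.quarantined = set()

    def remediate(self, agent_id: str, reason: str) -> dict:
        """Revoke all sessions and quarantine the identity immediately,
        without waiting for a human to log into a console."""
        revoked = self.sessions.pop(agent_id, set())
        self.quarantined.add(agent_id)
        return {"agent": agent_id, "revoked": len(revoked), "reason": reason}

store = IdentityStore()
event = store.remediate("agent-7", "prompt-injection suspected")
assert event["revoked"] == 2 and "agent-7" in store.quarantined
```

The returned event would feed an audit trail so the human review happens after containment, not before.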
The third area is smart user lifecycle management. The AI agent handles the grunt work of identity provisioning. That involves identifying zombie accounts and performing automated access reviews by analyzing actual usage data rather than static job titles. These are some areas where we are using AI, and it is making things far more effective.
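Usage-based zombie-account detection boils down to comparing last-activity timestamps against an idle threshold. A minimal sketch, with an illustrative 90-day cutoff and made-up account data:

```python
from datetime import datetime, timedelta

def zombie_accounts(accounts: list, now: datetime, idle_days: int = 90) -> list:
    """Flag accounts by actual usage data (last successful login)
    rather than static job titles. The 90-day threshold is illustrative."""
    cutoff = now - timedelta(days=idle_days)
    return [a["id"] for a in accounts if a["last_login"] < cutoff]

now = datetime(2025, 6, 1)
accounts = [
    {"id": "u1", "last_login": datetime(2025, 5, 20)},  # recently active
    {"id": "u2", "last_login": datetime(2024, 11, 1)},  # dormant
]
assert zombie_accounts(accounts, now) == ["u2"]
```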

