The rise of agentic AI is no longer a distant possibility; it is already underway. A recent Gartner study predicts that by 2028, 33% of enterprise software applications will incorporate agentic AI, up from less than 1% in 2024.
AI agents, or agentic AI, are systems designed to perform tasks or make decisions on behalf of users. They interpret their surroundings, process information and pursue specific objectives, continuously learning and improving through advanced algorithms and machine learning. This makes them crucial for driving productivity and efficiency, and an invaluable asset for businesses.
As these autonomous AI agents take on more responsibilities across businesses, their impact is set to be transformational. But with this evolution comes new and unexpected security challenges, some of which many businesses are not fully prepared for.
While AI agents are not yet widely used in the enterprise, adoption is accelerating quickly as organisations realise the huge benefits they bring. Identities at every level of the business – from business users to IT professionals and developers, and even the devices and applications they use – will soon interact with resources and services through AI-powered agents. These agents will be embedded into operating systems, browsers and platforms, as well as everyday tools like Microsoft Teams. Companies will even start to develop their own agents or use agents-as-a-service offerings from SaaS providers.
As employees learn to work alongside a combination of AI-driven agents, their productivity has the potential to skyrocket. These AI-driven agents will not only streamline workflows but also redefine how tasks are delegated and executed, effectively transforming users into managers of their own virtual teams. This shift will fundamentally reshape traditional roles, making work more dynamic, efficient and strategically focused.
So, given their ability to integrate seamlessly into business operations and simplify multiple workloads, organisations will find it impossible to avoid adopting agentic AI. The key challenge, then, is understanding and mitigating the security implications.
The unseen autonomy of shadow AI agents
One of the biggest security challenges is the rise of shadow AI agents: AI-powered tools deployed without the knowledge of IT and security teams. These agents can be introduced by individual employees, often bypassing standard security processes.
Because these agents can function independently and without supervision, they can introduce risks in unforeseen areas, creating blind spots for security teams. Without proper oversight, they can become a significant security vulnerability with the potential to expose sensitive company data or provide entry points for malicious actors.
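The starting point for oversight is simply knowing which agents are operating. As a rough illustration, the sketch below reconciles agent identities observed in activity logs against a registry of approved agents; the registry, agent names and log fields are hypothetical assumptions, and a real environment would draw on identity providers and network telemetry rather than hard-coded lists.

```python
# Minimal sketch: surfacing potential shadow AI agents by reconciling observed
# agent identities against an approved registry. All names and fields here are
# illustrative assumptions, not a specific product's API.

APPROVED_AGENTS = {"finance-summariser", "it-helpdesk-bot"}  # registry owned by IT/security

observed_activity = [
    {"agent_id": "finance-summariser", "resource": "erp-reports"},
    {"agent_id": "personal-gpt-plugin", "resource": "crm-exports"},  # unsanctioned
]

def find_shadow_agents(activity, approved):
    """Return agent identities seen in activity logs but absent from the registry."""
    return sorted({event["agent_id"] for event in activity} - approved)

for agent in find_shadow_agents(observed_activity, APPROVED_AGENTS):
    print(f"Unregistered agent detected: {agent} — review before granting further access")
```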
Developers as the new R&D and operations experts
The role of developers is also evolving. No longer just coders, they are now key players in research, development, and operations. Generative AI has already enhanced developer productivity, but now this is going one step further. Developers will soon manage the entire application lifecycle, from coding and integration to QA, deployment and troubleshooting – all autonomously.
With this increased autonomy comes increased privilege. If a developer’s identity is compromised, the risk escalates dramatically, making it one of the most valuable, but also most vulnerable, identities in the enterprise. Securing these identities must therefore be a top priority to prevent attackers from exploiting AI-powered development environments.
The risks and impacts of having humans in the loop
As organisations integrate agentic AI, humans will continue to play a critical role in oversight and governance. These ‘human-in-the-loop’ processes are essential for ensuring that AI agents operate as intended, validating their actions, and approving exceptions and requests from agents. Human input will also shape the future behaviour of these self-learning AI agents.
However, malicious actors may target these individuals to infiltrate the architecture, escalate privileges and gain unauthorised access to systems and data. Balancing the necessity of human oversight with strong security measures will therefore be essential to minimising risk.
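In practice, a human-in-the-loop control often takes the form of an approval gate: routine agent actions proceed automatically, while anything outside an allow-list is held until a named person signs off, and every decision is recorded. The sketch below is a minimal illustration of that pattern; the action names and logging fields are assumptions, not a reference design.

```python
# Minimal sketch of a human-in-the-loop approval gate: low-risk agent actions
# run automatically, exceptions are blocked until a human approves them, and
# every decision is logged for audit. Action names are illustrative assumptions.

from datetime import datetime, timezone

AUTO_APPROVED_ACTIONS = {"read_report", "draft_email"}
audit_log = []

def execute_agent_action(agent_id, action, approver=None):
    """Run an action only if it is pre-approved or a named human has signed off."""
    needs_human = action not in AUTO_APPROVED_ACTIONS
    decision = "blocked_pending_approval" if needs_human and approver is None else "executed"
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "approver": approver,
        "decision": decision,
    })
    return decision

print(execute_agent_action("finance-summariser", "draft_email"))                       # executed
print(execute_agent_action("finance-summariser", "delete_records"))                    # blocked_pending_approval
print(execute_agent_action("finance-summariser", "delete_records", approver="j.doe"))  # executed
```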
Managing millions of AI agents
One of the biggest hurdles for enterprises is the sheer scale of AI agent deployment. Machine identities already outnumber human identities by up to 45-to-1, and 76% of security leaders anticipate the number of machine identities in their organisation will increase by as much as 150% over the next year. Meanwhile, NVIDIA’s Jensen Huang predicts 50,000 humans could manage 100 million AI agents per department, a ratio of 2,000-to-1.
To maintain security, best practice will involve dividing tasks among multiple specialised AI agents, each with defined roles and responsibilities to help mitigate risk and maximise efficiency.
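One way to express that division of labour is to give each agent an explicit role with a narrow set of permitted actions, so that a compromised or misbehaving agent cannot stray beyond its remit. The sketch below shows the pattern in a few lines; the role names and permissions are purely illustrative assumptions.

```python
# Minimal sketch of dividing work among specialised agents, each scoped to a
# narrow set of permitted actions. Role names and permissions are illustrative
# assumptions used to show the pattern, not a reference design.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRole:
    name: str
    allowed_actions: frozenset

ROLES = {
    "report-generator": AgentRole("report-generator", frozenset({"read_sales_data", "write_report"})),
    "ticket-triager": AgentRole("ticket-triager", frozenset({"read_tickets", "assign_ticket"})),
}

def is_permitted(role: AgentRole, action: str) -> bool:
    """An agent may only perform actions explicitly granted to its role."""
    return action in role.allowed_actions

print(is_permitted(ROLES["report-generator"], "write_report"))   # True
print(is_permitted(ROLES["report-generator"], "assign_ticket"))  # False: outside this agent's remit
```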
Advancing security for agentic AI
For the successful deployment of agentic AI at scale, organisations must place a strong emphasis on safety, regulatory compliance and building trust in their systems. Key requirements include full visibility into agent activity, strong authentication mechanisms, least privilege access and just-in-time access controls, as well as comprehensive session auditing to trace actions back to their identities. Doing this is vital for securing both human and machine identities as agentic AI adoption accelerates.
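To make a couple of those requirements concrete, the sketch below models a just-in-time, least-privilege grant: an agent identity receives access to a single resource for a short window, and every access attempt is written to an audit trail. Token format, lifetimes and resource names are illustrative assumptions rather than any particular vendor’s implementation.

```python
# Minimal sketch of just-in-time, least-privilege access for an agent identity:
# a short-lived grant scoped to one resource, with every access attempt logged.
# Grant structure, lifetimes and resource names are illustrative assumptions.

import time
import uuid

audit_trail = []

def issue_jit_grant(agent_id: str, resource: str, ttl_seconds: int = 300) -> dict:
    """Grant access to a single resource for a short, fixed window."""
    return {
        "grant_id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "resource": resource,
        "expires_at": time.time() + ttl_seconds,
    }

def access_resource(grant: dict, resource: str) -> bool:
    """Allow access only to the granted resource before expiry; log every attempt."""
    allowed = grant["resource"] == resource and time.time() < grant["expires_at"]
    audit_trail.append({
        "grant_id": grant["grant_id"],
        "agent_id": grant["agent_id"],
        "requested": resource,
        "allowed": allowed,
        "at": time.time(),
    })
    return allowed

grant = issue_jit_grant("report-generator", "sales-db")
print(access_resource(grant, "sales-db"))  # True within the time window
print(access_resource(grant, "hr-db"))     # False: outside the granted scope
```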
It’s important to note that given the rapid pace of advancements in this field, we could not have anticipated many of the challenges discussed here just a few months ago. While not exhaustive, the examples highlighted above reveal the dramatic shifts and potential risks associated with the widespread adoption of agentic AI.
As agentic AI continues to reshape enterprise operations, organisations must stay vigilant. New challenges will keep emerging, but one thing we know for sure is that agentic AI is here to stay, and the question isn’t whether enterprises will adopt it, but how they’ll secure it.