AI coding assistants are revolutionising software development – but can security keep up?

By Eran Kinsbruner, VP of Product Marketing, Checkmarx.

AI coding assistants are transforming how software is built, with a growing list of tools super-charging development lifecycles and democratising coding through ‘low-code’ and ‘no-code’ options.

Whilst tools like GitHub Copilot have been available for several years, the high-profile launch of new tools such as Claude Code has put AI-powered coding at the centre of discussions on the future of software development.

But these benefits come at a cost. As AI writes more of the world’s code, security review processes are struggling to keep up. The faster software is produced, the harder it becomes to ensure it’s safe. Organisations must ensure that security can scale quickly enough to match the speed of AI-assisted development.

The AI development productivity paradox

With most developers facing increased pressure for shorter production times, the promised combination of efficiency and accessibility makes AI support extremely appealing. Our own research has found that almost two-thirds (63%) of organisations already sanction the use of AI tools for code generation, and over half of all code written now involves AI to some degree. Almost one in ten (9%) respondents even went as far as to confirm that between 80-100% of their code is AI-generated, showing just how quickly developers have come to rely on these assistants.

However, this uptake comes with increased risk. While developers are writing code faster than ever, the productivity boom is not being matched by the governance within teams needed to ensure that the right security guardrails are in place.

Why is governance falling behind?

Despite growing awareness of AI-related risks, most organisations still lack clear rules for how these tools should be used, and there is often a dangerously laissez-faire approach. We found that only 18% of organisations have approved lists of AI coding assistants in place. This lack of visibility creates significant challenges for security teams, as vulnerabilities introduced by AI-generated code can go undetected.

It also suggests that developers are moving too quickly for traditional review and approval cycles to keep up, and may assume AI-generated code is inherently secure. Meanwhile, tool fatigue continues to erode visibility; fewer than half of organisations use mature AppSec capabilities such as Dynamic Application Security Testing (DAST) or Infrastructure-as-Code scanning. The result is that practices have evolved faster than policies, leaving business leaders struggling to reassert control over their software supply chains.

The threat of shadow AI

One issue resulting from this lack of governance is the rapid increase in ‘shadow AI’ – the use of tools without proper authorisation or oversight. One in five respondents in our research said that although their organisation did not allow AI code generation tools, they either knew or assumed the tools were being used anyway.

Using unapproved AI tools introduces code provenance blind spots, making it impossible to verify where critical code came from or whether it is secure.

Shadow IT is a longstanding problem, but now it moves at developer speed and reaches directly into production environments. Vulnerabilities introduced by AI-generated code can spread quickly across repositories, while attackers learn to exploit this expanded attack surface through tactics such as prompt injection and model manipulation. As AI tools scale, so do the vulnerabilities they can propagate. Without visibility, audit trails, and clear policy enforcement, CISOs risk losing control of AppSec in the AI era.

How CISOs can close the gap

AI-assisted development doesn’t have to come at the cost of security. Closing the gap between innovation and risk now demands a new kind of AppSec strategy – one that scales at AI speed. CISOs must evolve from enforcers of compliance to architects of secure AI ecosystems, combining governance, automation, and culture into a single, adaptive framework.

The first step is visibility. Organisations cannot secure what they cannot see, yet most still lack formal policies defining which AI tools are approved, how they can be used, and how AI-generated code is reviewed.

Dealing with this effectively demands full visibility; every line of AI-generated code should be traceable, with clear provenance, audit trails, and mandatory security scanning. At the same time, organisations need the ability to analyse and fix issues in ways that are developer-friendly in the AI era. Traditional automation is no longer enough, and organisations are beginning to adopt agentic AI security assistants capable of detecting and remediating vulnerabilities in real time within the SDLC.
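To illustrate what “traceable, with mandatory security scanning” might look like in practice, below is a minimal, hypothetical sketch of a pre-merge gate written in Python. The ‘AI-Assisted’ and ‘AI-Tool’ commit trailers and the scan_results.json file are illustrative assumptions for this sketch, not a standard convention or any specific vendor’s format.

```python
"""
Illustrative pre-merge gate: block changes that are marked as AI-assisted
but carry no provenance trailer, or that have unresolved high-severity
security findings. All trailer names and file paths are hypothetical.
"""
import json
import subprocess
import sys
from pathlib import Path

SCAN_RESULTS = Path("scan_results.json")  # assumed output of your SAST/IaC scanner


def commit_message(ref: str = "HEAD") -> str:
    """Return the full commit message for the given ref."""
    return subprocess.run(
        ["git", "log", "-1", "--format=%B", ref],
        capture_output=True, text=True, check=True,
    ).stdout


def main() -> int:
    msg = commit_message()
    ai_assisted = "AI-Assisted: true" in msg   # hypothetical provenance trailer
    has_tool_trailer = "AI-Tool:" in msg       # which assistant produced the code

    if ai_assisted and not has_tool_trailer:
        print("Blocked: AI-assisted change has no 'AI-Tool:' provenance trailer.")
        return 1

    if not SCAN_RESULTS.exists():
        print("Blocked: no security scan results found for this change.")
        return 1

    findings = json.loads(SCAN_RESULTS.read_text())
    high_severity = [f for f in findings if f.get("severity") in ("high", "critical")]
    if high_severity:
        print(f"Blocked: {len(high_severity)} high/critical finding(s) unresolved.")
        return 1

    print("Provenance and scan checks passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A check like this would typically run as a required step in the CI pipeline, so that provenance and scan evidence is enforced automatically rather than left to manual review.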

Finally, security must still scale through people. As developers shift from coders to curators, CISOs should prioritise skill retention and ongoing training to mitigate the risk of “AI skill erosion.” This is particularly important because developer experience provides the checks and balances on code quality; our own analysis has shown that it is possible to trick the security review function in an AI coding assistant, so developers need to get the security basics right in any environment where they run this type of assistant.

The future of secure coding in the AI era

AI isn’t just changing how we write code; it’s redefining how we need to secure it. As AI-driven development accelerates, policies and processes must match its speed. CISOs who embed real-time security into developer workflows, enforce clear governance, and empower their teams to take ownership will turn AI’s velocity into a competitive edge.

The future of secure software depends on this balance. A mature and measured approach to applying the latest AI innovations will make the difference between organisations that thrive and those that expose themselves and their customers to critical risk.
