Although more than half of developers acknowledge that generative AI tools frequently produce insecure code, 96% of development teams use these tools, and more than half use them routinely, according to a report released Tuesday by Snyk, maker of a developer security platform.
The survey of 537 software engineering and security team members and leaders found that 79.9% of respondents said developers bypass security policies in order to use AI tools.
Simon Maple, principal developer advocate at Snyk, said he was surprised by just how widely developers flout security policies to use AI. “I knew developers were avoiding policy to make use of generative AI tooling, but what was really surprising was to see that 80% of respondents bypass the security policies of their organization to use AI either all of the time, most of the time or some of the time,” he said.
The report warned that without rigorous testing, AI risks introducing vulnerabilities into production. While companies have adopted AI rapidly, few have automated security procedures to keep pace, leaving critical gaps in code protection: only 9.7% of respondents said their teams automated 75% or more of their security scans.
Maple cautioned that because generative AI speeds up code development and delivery, inadequate testing makes it more likely that vulnerabilities reach production. “Generative AI is an accelerator. It can increase the speed at which we write code and deliver that code into production. If we’re not testing, the risk of getting vulnerabilities into production increases,” he said.
Maple also noted that a significant share of respondents had increased their security scans after adopting AI tools, a sign that organizations increasingly recognize the need to strengthen security measures alongside AI adoption.