Coding with generative AI: a machine for generating security vulnerabilities?


More than 9 out of 10 respondents admit to having discovered security flaws in the output of code generation assistants. (Photo: Michael Krause/Pixabay)

Already widely used, code generation tools tend to introduce security vulnerabilities. Companies, however, have not yet adapted their processes to this new situation.

A widely adopted technology whose risks remain underestimated: this is the assessment from Snyk, a publisher of cybersecurity tools, regarding the use of generative AI in development. Based on the responses of 537 development and cybersecurity engineers, the study highlights how widespread generative AI tools have become: 96% of the respondents' teams use the technology to write code, and in more than one case out of two, these applications are part of the teams' daily routine. AI-assisted coding tools seem polished and compelling, Snyk notes. Unfortunately, this polish and ease of use have created a false sense of confidence and a herd mentality that leads developers to see these tools as safe.

However, they are not. In reality, these tools continue to produce insecure code, the report's authors write. More than 9 out of 10 survey respondents admit to having discovered security flaws in the assistants' output, at least from time to time. About one in five experts even considers these security problems to be common. Despite these findings, the surveyed developers tend to overestimate the security of code produced by generative AI, with 85% of them rating it as good or excellent. A form of cognitive dissonance, according to Snyk.
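The flaws in question are rarely exotic. A classic example (purely illustrative, not drawn from the Snyk study) is an assistant suggesting a SQL query built by string interpolation rather than a parameterized query, opening the door to SQL injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Pattern often seen in generated code: interpolating user input
    # directly into the SQL string leaves the query injectable.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input dumps every row through the unsafe version...
print(find_user_unsafe("' OR '1'='1"))   # [('admin',)]
# ...while the parameterized version matches nothing.
print(find_user_safe("' OR '1'='1"))     # []
```

The two functions look almost identical, which is precisely why such suggestions pass a quick visual review, and why automated code review matters.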

Open Source: risk of a vicious circle

The consequence? 55% of respondents admit that developers in their organization bypass internal security rules most of the time, or systematically, in order to use AI-based code generation tools. This is all the more dangerous given that many companies have automated only part of their code review: in more than one organization out of two, less than half of reviews are automated.

Beyond these results, already observed in a Stanford University study, the report highlights the implications of generative AI for open source. Among organizations that contribute to open source code, more than 8 in 10 use generative AI to do so. This introduces errors, of course, but also creates the risk of a vicious circle, the authors indicate: because AI code generation systems use reinforcement learning to refine their results, when users feed insecure code back in through their contributions, the AI systems tend to treat these components as safe, even when they are not.

In short, a majority of organizations (55%) have integrated generative AI tools into their development processes without adapting those processes to the inherent characteristics of the technology, particularly with regard to security. Even though, on average, each organization has introduced one change to its security practices to account for the phenomenon, the overall impact of AI code generation tools on security practices remains relatively weak, the report's authors note.
