Large Language Models (LLMs) such as GPT-3.5 can generate compilable C programs, a large fraction of which contain vulnerabilities; formal verification techniques can detect these flaws effectively.
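To make the verification angle concrete, here is a minimal sketch (ours, not taken from the cited work) that feeds a small C program with an off-by-one buffer write to a bounded model checker. It assumes an `esbmc` binary on PATH and the "VERIFICATION FAILED" output string that ESBMC and similar checkers print when a counterexample is found; both are assumptions about the checker's CLI, and any comparable tool (e.g., CBMC) could be substituted.

```python
import os
import subprocess
import tempfile

# Illustrative LLM-style C snippet with a classic out-of-bounds write:
# the loop condition uses <= and walks one element past the buffer.
C_SOURCE = """
#include <stdio.h>

int main(void) {
    int buf[8];
    for (int i = 0; i <= 8; i++) {  /* off-by-one: writes buf[8] */
        buf[i] = i;
    }
    printf("%d\\n", buf[0]);
    return 0;
}
"""

def flags_violation(c_code: str) -> bool:
    """Run a bounded model checker over the C code and report whether a
    property violation (e.g., an out-of-bounds access) was found.
    Assumption: the `esbmc` binary is installed and on PATH."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(c_code)
        path = f.name
    try:
        result = subprocess.run(["esbmc", path],
                                capture_output=True, text=True)
        # Assumption: the checker reports "VERIFICATION FAILED" when it
        # finds a concrete trace that triggers the bug.
        return "VERIFICATION FAILED" in result.stdout
    finally:
        os.unlink(path)

if __name__ == "__main__":
    print("vulnerable:", flags_violation(C_SOURCE))
```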
DeVAIC is a tool that applies a set of regular-expression-based detection rules to identify vulnerabilities in Python code generated by AI models, sidestepping a key limitation of conventional static analysis tools, which typically require complete, parseable programs.
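As a hedged illustration of how such rules can operate, the sketch below scans code line by line with plain regexes, so it tolerates truncated snippets that would not parse as a full module. The three patterns and their CWE mappings are our own examples, not DeVAIC's actual ruleset.

```python
import re

# Illustrative regex rules in the spirit of a regex-based detector;
# these patterns are examples of ours, not DeVAIC's ruleset.
RULES = [
    (re.compile(r"\beval\s*\("),
     "CWE-94: eval() on dynamic input"),
    (re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
     "CWE-78: shell=True enables command injection"),
    (re.compile(r"(password|secret|api_key)\s*=\s*[\"'][^\"']+[\"']",
                re.IGNORECASE),
     "CWE-798: hard-coded credential"),
]

def scan(snippet: str):
    """Match every rule against every line. No parsing is needed, so
    incomplete AI-generated fragments can still be scanned."""
    findings = []
    for lineno, line in enumerate(snippet.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

# Example: an AI-generated fragment, truncated mid-call on purpose to
# show that the scan still works where an AST-based tool would fail.
generated = '''
password = "hunter2"
def run(cmd):
    import subprocess
    subprocess.call(cmd, shell=True
'''
for lineno, message in scan(generated):
    print(f"line {lineno}: {message}")
```

Because the rules operate on raw text rather than an AST, this style of scanner trades precision (regexes can raise false positives) for the ability to flag issues in code fragments that no conventional analyzer could even load.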