
GitHub Enhanced Copilot with New AI Model and Security-Oriented Capabilities


GitHub Copilot has adopted a new AI model which, according to GitHub, is both faster and more accurate than the previous one. Additionally, GitHub has started using AI to detect vulnerabilities in Copilot suggestions, blocking insecure coding patterns in real time.

GitHub has brought three major technical improvements to Copilot, starting with the adoption of a new OpenAI Codex model that is able to synthesize better code, according to the company.

Besides the new AI model, Copilot is now able to better understand context using a technique called Fill-In-the-Middle (FIM):

Instead of only considering the prefix of the code, it also leverages known code suffixes and leaves a gap in the middle for GitHub Copilot to fill. This way, it now has more context about your intended code and how it should align with the rest of your program.

GitHub says that FIM is able to produce better results in a consistent way and with no added latency.
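The idea behind FIM can be sketched in a few lines: the editor splits the buffer at the cursor and sends both the code before it (the prefix) and the code after it (the suffix) to the model, which generates the missing middle. The sentinel token names below are illustrative assumptions; the actual tokens used by Copilot's model are not public.

```python
# Illustrative sketch of Fill-In-the-Middle (FIM) prompt construction.
# The <fim_*> sentinel tokens are assumptions for illustration only.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code before and after the cursor so the model fills the gap."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The editor splits the buffer at the cursor position (marked "|" here):
buffer = "def add(a, b):\n    |\n\nprint(add(1, 2))"
prefix, suffix = buffer.split("|")
prompt = build_fim_prompt(prefix, suffix)
# The model's completion for the middle slot (e.g. "return a + b")
# would then be inserted at the cursor.
```

Because the suffix is part of the prompt, the completion can be shaped to align with code that already exists below the cursor, not just above it.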

Finally, GitHub has improved the Copilot extension for Visual Studio Code to reduce the frequency of unwanted suggestions, which can disrupt a developer's flow. To that end, Copilot now takes into account information about the user's context, such as whether the last suggestion was accepted or not. Based on GitHub's own metrics, this new approach reduced unwanted suggestions by 4.5%.
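One way to picture this mechanism is a gate that tracks recent accept/reject signals and suppresses suggestions when the recent acceptance rate drops too low. The class, window size, and threshold below are hypothetical; GitHub has not published how its model actually weighs these signals.

```python
# Hypothetical sketch of context-aware suggestion throttling.
# The sliding-window size and threshold are illustrative assumptions.

class SuggestionGate:
    """Track recent accept/reject signals to suppress unwanted suggestions."""

    def __init__(self, threshold: float = 0.15, window: int = 20):
        self.threshold = threshold
        self.window = window
        self.recent: list[bool] = []  # True = suggestion was accepted

    def record(self, accepted: bool) -> None:
        # Keep only the most recent signals.
        self.recent = (self.recent + [accepted])[-self.window:]

    def should_show(self) -> bool:
        if not self.recent:
            return True  # no signal yet: show the suggestion
        acceptance_rate = sum(self.recent) / len(self.recent)
        return acceptance_rate >= self.threshold
```

A real implementation would likely combine many more context features than the last-acceptance signal, but the shape of the decision is the same: use recent user behavior to decide whether surfacing a completion helps or interrupts.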

The cumulative effect of all these changes as well as of others released previously has been a net increase in the overall acceptance rate for Copilot code suggestions, which grew from 27% in June 2022 to 35% in December 2022.

As mentioned, another front where GitHub has started to apply AI is vulnerability prevention in code generated by Copilot. This is achieved by identifying insecure coding patterns, such as hardcoded credentials, SQL injection, and path injection. Once an insecure pattern is identified, it is blocked and a new suggestion is generated.
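GitHub's actual vulnerability filter is LLM-based, but a simple pattern-based sketch conveys the flow: scan a candidate suggestion for insecure patterns and, if one matches, block it and request a new completion. The regexes below are rough illustrations of the pattern classes the article names, not the rules GitHub uses.

```python
import re

# Hypothetical pattern-based filter for insecure Copilot-style suggestions.
# These regexes only illustrate two of the pattern classes mentioned:
# hardcoded credentials and SQL built via string interpolation.
INSECURE_PATTERNS = [
    re.compile(r"""(password|secret|api_key)\s*=\s*["'][^"']+["']""", re.I),
    re.compile(r"""execute\(\s*["'].*%s.*["']\s*%"""),
]

def is_insecure(suggestion: str) -> bool:
    """Return True if the suggestion matches a known insecure pattern."""
    return any(p.search(suggestion) for p in INSECURE_PATTERNS)

def filter_suggestion(suggestion: str, regenerate) -> str:
    """Block insecure suggestions, asking the model (regenerate) for new ones."""
    while is_insecure(suggestion):
        suggestion = regenerate()
    return suggestion
```

In practice an LLM-based classifier can catch vulnerable code that no fixed regex anticipates, which is presumably why GitHub frames this as an AI system rather than a static rule set.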

GitHub says it will further expand its LLM's ability to identify vulnerable code and distinguish it from secure code, with the aim of radically changing how developers avoid introducing vulnerabilities into their code.

The new AI model and vulnerability filtering system are available in both GitHub Copilot for Individuals and Copilot for Business.
