Efficient DevSecOps Workflows with a Little Help from AI: Q&A with Michael Friedrich

At QCon London, Michael Friedrich, senior developer advocate at GitLab, discussed how AI can help in DevSecOps workflows. His session was part of the Cloud-Native Engineering track on the first day of the conference.

In the landscape of software development, particularly within the DevSecOps pipeline, artificial intelligence (AI) can help address inefficiencies and streamline workflows. Among the most time-consuming tasks in this arena are code creation, test generation, and the review process. AI technologies, such as code generators and AI-driven test creation tools, tackle these areas head-on, enhancing productivity and quality. For instance, AI can automate boilerplate code generation, offer real-time code suggestions, and facilitate the creation of comprehensive tests, including regression and unit tests. These capabilities speed up the development process and significantly reduce the potential for human error.
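
As a concrete illustration of AI-assisted test generation, the following is a minimal Python sketch (not taken from the talk) that asks a locally hosted model, served through Ollama's REST API, to propose unit tests for a small function. The model name, prompt wording, and function under test are assumptions for illustration only.

    import requests

    # Illustrative only: ask a local LLM (served by Ollama) to draft pytest tests.
    # The model name ("mistral") and the prompt wording are assumptions.
    FUNCTION_UNDER_TEST = '''
    def parse_version(tag: str) -> tuple[int, int, int]:
        major, minor, patch = tag.lstrip("v").split(".")
        return int(major), int(minor), int(patch)
    '''

    prompt = (
        "Write pytest unit tests, including edge cases and a regression test, "
        "for this Python function:\n" + FUNCTION_UNDER_TEST
    )

    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    print(response.json()["response"])  # Review the suggested tests before committing them.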

In the realm of operations, AI's role is equally pivotal. CI/CD pipelines, a critical component of modern software development practices, benefit from AI through automated debugging, root cause analysis using machine learning algorithms, and observability improvements. Tools like k8sgpt, together with a locally hosted Mistral LLM served through Ollama, analyze deployment logs and summarize critical data, allowing for quicker and more accurate decision-making. Furthermore, AI's application in resource analysis and sustainability, exemplified by tools like Kepler, underscores the technology's ability to optimize operations for efficiency and environmental impact.
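
A log-summarization step of this kind could look roughly like the sketch below, which sends the tail of a failed deployment log to a local Mistral model through Ollama's chat API and asks for a probable root cause. The log path, model name, and prompt are illustrative assumptions, not a description of any specific tool.

    from pathlib import Path
    import requests

    # Illustrative sketch: summarize the end of a failed deployment log with a local LLM.
    log_tail = Path("deploy.log").read_text(encoding="utf-8")[-4000:]  # keep the prompt small

    messages = [{
        "role": "user",
        "content": "Summarize the most likely root cause of this failed deployment "
                   "and suggest the next debugging step:\n" + log_tail,
    }]

    reply = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "mistral", "messages": messages, "stream": False},
        timeout=120,
    )
    print(reply.json()["message"]["content"])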

Lastly, security within DevSecOps benefits greatly from AI, with innovations such as AI guardrails and vulnerability management systems. AI can explain security vulnerabilities clearly and recommend or implement resolutions, safeguarding applications against potential threats. Moreover, through features like controlled access to AI models and prompt validation, AI's contribution to privacy and data security enhances the overall security posture. Transparency in AI usage and adherence to ethical principles in product development further build trust in these technologies.
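
A simplified illustration of such guardrails, which assumes nothing about any particular vendor's implementation, is a validation layer that rejects prompts containing likely secrets and restricts which teams may use which AI features:

    import re

    # Illustrative guardrail: block prompts that appear to contain secrets and
    # enforce per-team access to AI features. Patterns and team names are examples.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID
        re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        re.compile(r"(?i)password\s*[:=]\s*\S+"),
    ]

    ALLOWED_FEATURES = {
        "dev-team": {"code_suggestions", "test_generation"},
        "ops-team": {"root_cause_analysis"},
    }

    def validate_prompt(team: str, feature: str, prompt: str) -> str:
        if feature not in ALLOWED_FEATURES.get(team, set()):
            raise PermissionError(f"{team} is not allowed to use {feature}")
        for pattern in SECRET_PATTERNS:
            if pattern.search(prompt):
                raise ValueError("Prompt rejected: possible secret detected")
        return prompt  # safe to forward to the model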

After the session, InfoQ interviewed Michael Friedrich about how AI can help with DevSecOps.

InfoQ: Given your emphasis on AI's role in streamlining DevSecOps workflows and improving efficiency, how do you suggest organizations balance the drive for rapid innovation and deployment with the imperative to maintain robust security practices?

Michael Friedrich: Think of the following steps in your AI adoption journey into DevSecOps: 

  1. Start with an assessment of your workflows and their importance for efficiency
  2. Establish guardrails for AI, including data security, validation metrics, etc.
  3. Require impact analysis beyond developer productivity. How will AI accelerate and motivate all teams and workflows? 

Existing DevSecOps workflows are required to verify AI-generated code, including security scanning, compliance frameworks, code quality, test coverage, performance observability, and more. 

I’m referencing an article from the GitLab blog in my talk. The discussions with our customers and everyone involved at GitLab inspired me to think beyond workflows and encourage users to plan their AI adoption strategically.  

InfoQ: Specifically, could you share your thoughts on integrating AI tools without compromising security standards, especially when dealing with sensitive data and complex infrastructure?

Friedrich: A common concern is how sensitive data is being used with AI tools. Users need transparent information on data security, privacy, and how the data is used. For example, a friend works in the automotive industry with highly sophisticated and complex algorithms for car lighting. This code must never leave their network, which brings new challenges with AI adoption and SaaS models. Additionally, code must not be used to train public models and potentially be leaked into someone else’s code base. The demand for local LLMs and custom-trained models increased in 2024, and I believe that vendors are working hard to address these customer concerns.

Another example is prompts that could expose sensitive infrastructure data (FQDNs, path names, etc.) in infrastructure and cloud-native deployment logs. Specific filters and policies must be installed, and refined controls on how users adopt AI must be added to their workflows. Root cause analysis in failed CI/CD pipelines is helpful for developers but could require filtered logs for AI-assisted analysis. 
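
A minimal sketch of such a filter, purely illustrative rather than a feature of any specific product, could mask FQDNs and filesystem paths before a log excerpt is handed to an AI-assisted analysis step:

    import re

    # Illustrative log redaction before AI-assisted root cause analysis.
    FQDN = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+){2,}\b", re.IGNORECASE)
    UNIX_PATH = re.compile(r"(?<!\w)/(?:[\w.-]+/)+[\w.-]+")

    def redact(line: str) -> str:
        line = FQDN.sub("<fqdn>", line)
        return UNIX_PATH.sub("<path>", line)

    print(redact("ERROR deploy failed on db01.prod.example.com: /srv/app/releases/42/config.yml missing"))
    # ERROR deploy failed on <fqdn>: <path> missing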

I recommend asking AI vendors about AI guardrails and continuing the conversation when information remains unclear. Encourage them to create an AI Transparency Center and follow the example at https://about.gitlab.com/ai-transparency-center/. Lastly, transparency on guardrails is a requirement when evaluating AI tools and platforms. 

InfoQ: You highlighted several pain points within DevSecOps workflows, including maintaining legacy code and analyzing the impact of security vulnerabilities. How do you envision AI contributing to managing or reducing technical debt, particularly in legacy systems that might not have been designed with modern DevOps practices in mind?

Friedrich: Companies that have not yet migrated to cloud-native technologies or refactored their code base to modern frameworks will need assistance. In earlier days, this was achieved through automation or rewriting everything from scratch. However, this is a time-consuming process that requires a lot of research, especially when source code, infrastructure, and workflows are not well documented.

The challenges are multi-faceted: once you understand the source code, algorithms, frameworks, and dependencies, how do you ensure that nothing breaks when you make changes? Tests can be generated with the help of AI, creating a safety net for more extensive refactoring activities, which also helps verify AI-generated code. Refactoring code can add new bugs and security vulnerabilities, requiring existing DevSecOps platforms with quality and security scanning. The challenges don’t stop there: CI/CD pipelines might fail, cloud deployments run into resource and cost explosions, and the feedback loop in DevSecOps starts anew with new features and migration plans.

My advice is to adopt AI-powered workflows in iterations. Identify the most pressing or lightweight approach for your teams and ensure that guardrails and impact analysis are in place.

For example, start with code suggestions, add code explanations and vulnerability explanations as helpful knowledge assistance, continue with chat prompts, and use Retrieval Augmented Generation (RAG) to enrich answers with custom knowledge base data (e.g., from documentation in a Git repository, using the Markdown format). 
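
A minimal RAG sketch along these lines, with the docs directory, chunking, and ranking chosen purely for illustration, retrieves the most relevant Markdown paragraph from a repository checkout and prepends it to the chat prompt; a production setup would use embeddings rather than keyword overlap:

    from pathlib import Path

    # Illustrative retrieval step for RAG over Markdown docs in a Git checkout.
    def chunks(docs_dir: str = "docs"):
        for md in Path(docs_dir).rglob("*.md"):
            for paragraph in md.read_text(encoding="utf-8").split("\n\n"):
                if paragraph.strip():
                    yield md.name, paragraph

    def retrieve(question: str, docs_dir: str = "docs"):
        q_words = set(question.lower().split())
        # Naive keyword-overlap ranking; real pipelines use vector embeddings.
        return max(chunks(docs_dir), key=lambda c: len(q_words & set(c[1].lower().split())))

    question = "How do we roll back a failed production deployment?"
    source, context = retrieve(question)
    prompt = f"Answer using only this excerpt from {source}:\n{context}\n\nQuestion: {question}"
    # The enriched prompt is then sent to the chat model as in the earlier sketches.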

If teams benefit more from AI-assisted code reviews and issue discussion summaries, shift your focus there. If developers spend most of their time looking at long-running CI/CD pipelines with a failure rate of 90%, invest in root cause analysis first. If releases are always delayed because of last-minute regressions and security vulnerability reviews, start with test generation and security explanation and resolution.

InfoQ: Are there AI-driven strategies or tools that can help bridge the gap between older architectures and the requirements of contemporary DevSecOps pipelines?

Friedrich: Follow the development pattern of "explain, add tests, refactor", and add security patterns, preferably on a DevSecOps platform where all data for measuring the impact comes together in dashboards. Take the opportunity to review tool sprawl and move from DIY DevOps to a platform approach for greater efficiency benefits.

Speaking from my own experience, I had to fix complex security vulnerabilities many years ago, and those fixes broke critical functionality in my previous company’s product. I have also introduced performance regressions and deadlocks, which are hard to trace and find in production environments. Think of a distributed cloud environment with many agents, satellites, and a central management instance. If I had had AI-assisted help in understanding the CVE and the proposed fix, I could have avoided months of debugging regressions. A conversational chat prompt also invites follow-up questions, such as "Explain how this code change could create performance regressions and bugs in a distributed C++ project context."

I’ve also learned that LLMs are capable of refactoring code into different programming languages, for example, C into Rust, improving memory safety and producing more robust code. This strategy can help migrate a code base in iterations to a new programming language and/or framework.

https://about.gitlab.com/blog/2024/04/02/10-best-practices-for-using-ai-powered-gitlab-duo-chat/#refactor-c-code-into-rust 

I’m also excited about AI agents and how they will aid code analysis, provide migration strategies, and help companies understand the challenges with older architectures and modern DevSecOps pipelines. For example, I would love to have AI-assisted incident analysis that queries live data in your cloud environment through LLM function calls. This aids observability insights for more informed prompts and could result in infrastructure security and cost optimization proposals through automated Merge Requests.
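
As a rough sketch of what LLM function calling for incident analysis could look like, the following illustrative snippet dispatches a model-requested tool call to a stubbed function that stands in for a live observability query. The tool name, JSON contract, and data are assumptions, not an existing API:

    import json

    # Stub standing in for a live observability query (e.g. pod restart counts).
    def get_pod_restarts(namespace: str) -> dict:
        return {"payments-api": 17, "checkout": 0}

    TOOLS = {"get_pod_restarts": get_pod_restarts}

    # In practice this JSON would come back from the model as its chosen tool call.
    model_reply = '{"function": "get_pod_restarts", "args": {"namespace": "prod"}}'

    call = json.loads(model_reply)
    live_data = TOOLS[call["function"]](**call["args"])
    # The result is fed back into the next prompt so the model can reason over live data.
    print("Context for follow-up prompt:", live_data)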

Companies working in the open, i.e., through open source or open core models, can co-create with their customers. More refined issue summaries, better code reviews, and guided security explanations and resolutions will help everyone contribute, with a bit of help from AI.
