DevSecOps: Not the Tools, the Other Bits


Summary

Mario Platt presents how to improve and integrate governance, team practices, and maturity development in how the output of security tools is integrated.

Bio

Mario Platt has over 20 years of security experience, with roles spanning penetration testing, operations, engineering, and governance, risk management and compliance. He is known for his strategic thinking and pragmatic approaches. He is currently head of information security for CloudMargin, a fintech company, and a CISO/exec advisor.

About the conference

InfoQ Live is a virtual event designed for you, the modern software practitioner. Take part in facilitated sessions with world-class practitioners. Hear from software leaders at our optional InfoQ Roundtables.

Transcript

Platt: My name is Mario Platt. We'll be talking about DevSecOps, not the tools, the other bits, because DevSecOps is really not just about automation. It's mostly about enabling communication. I'm the strategy director at HYSN Technologies, which runs the practical-devsecops.com courses. I'm also the head of InfoSec for a FinTech company, and I have my own consulting business. You can find me on Twitter at @madplatt.

DevOps Principles

The best way to start thinking about DevSecOps is to look at DevOps first. At the end of the day, we're trying to secure DevOps; that's what people came to call DevSecOps. There's a pretty well-accepted set of principles in DevOps usually called CAMS: Culture, Automation, Measurement, and Sharing. Culture is really about breaking down barriers and silos, and everything that supports that objective. Without it, other practices usually fail. Measurement is about measuring the activities in CI/CD, and also whatever practices we add from a process perspective in terms of how we identify what needs to be addressed from a security perspective. Sharing is about sharing tools and best practices among teams in the organization. Finally, there's automation.

Developing Culture - Assumptions

This relates to Schein's model of culture, which is covered in "Accelerate," the book by Dr. Nicole Forsgren. Developing culture in this model comes down to three different things: assumptions, values, and artifacts. Assumptions are hard, because most often you can't really touch them. You need to be an organizational anthropologist to try to understand how the organization came to be where it is today, and what it is that you can do now, in the present. There are three different ideas that I like to use when looking at assumptions. The first one is path dependence. What does security feel like for other people in the organization? This usually comes from the history of how the organization came to be. If a previous or the current security team has been mostly gatekeeping, not really understanding the operational context, that will create a certain path dependency in the assumptions people make when they hear security being talked about.

Another element is affordances. There's a lot more to this, but what I'd like to say about it now is that it concerns how people work and make decisions, and what it is possible to do in your environment. Different organizations will have different contexts, and some initiatives may succeed in some and not in others. It's about understanding what every stakeholder believes is the realm of the possible, and how you can influence that. It is understanding what you can actually pull off in the current context that you're operating in. This is often a matter of talking with people, understanding what's unacceptable, what's acceptable, what falls within the realm of possibility as they see it, and whatever other constraints they may have.

Finally, it's about agency and trust. Who does security around here? Try to understand whether security is embedded in people's work: whether they consider it when they are developing new features or new products, and whether they have any agency over it. In terms of trust, think about the functions that typically deal with risk management and compliance, for instance. Do they trust that the teams are running the right processes and doing the right things to create and operate secure software? These are all things you can interview people about and start getting a feel for. These assumptions are very important because they establish what is possible for you to change within the context that you're in.

Developing Culture - Values

The other part is values. I really like this quote by Patty McCord, who used to be the chief talent officer at Netflix. She says, "The actual company values, as opposed to the nice-sounding values, are shown by who gets rewarded, promoted or let go." I think this is really important because there usually isn't much in the way of tracing company values down to the level of information security, for instance. Often, the values that are portrayed are not the ones that are really affecting the organization. Some things you can inquire about or try to find out are: what happened after the latest incidents? Was someone rewarded, or let go? How did the organization deal with it? Were there consequences? Did someone pull out the disciplinary process, or did we make it a learning moment, so everyone could embed the new learnings and we don't have recurring situations? Consider how the organization as a whole, not just line managers and peers, but also execs and whatever PR capability exists, was or was not part of that. Are people afraid now? How are we making it psychologically safe for people to speak up? Those things are usually deeply embedded within the teams, and trying to understand how behavior is affected every time something happens helps you learn the actual values, the things that teams worry about.

The other part is: if you do have actual company values defined, can you connect your security program to them? Usually, that makes it easier to justify security transformation and the security program at the higher ranks of the organization. It's important to have those in mind.

Developing Culture - Artifacts

The other part is artifacts. I really like this quote by Shook, which is also referenced in the "Accelerate" book. He says, "What my experience taught me was the way to change culture is not to first change how people think, but instead to start by changing how people behave, what they do." A useful question for understanding the artifacts that exist in your organization, and that help security become something people consider and act on, is: who is providing the security capabilities for the teams to consume? In the old world, security teams often just acquire licenses or boxes, hand them to the technical teams and say, "Now you run with it. It's your problem," without really understanding that all of these tools need to be nurtured. They need to be tweaked. They need to be made appropriate to the context and to how people are using them. Don't think of them as a capital investment that you just put in and then forget about; they need to be nurtured, and someone needs to own that. These types of artifacts, and what they create, are really important.

Then, the other part is: are the artifacts required for security available in the context of people's work? What I mean by this is, if I'm integrating security vendors that provide a completely separate management portal, for instance, to deal with results or anything relating to the validations in the feedback loops you're enabling in CI/CD, it's really important to understand whether engineers will need to change their context to deal with your stuff. If they do, that means you probably shouldn't do it that way. As much as possible, we really want to ensure that whatever people need to do to create secure software can be done in the context of their own work. That means considering your policies. Are they Word documents that no one sees? Or are they codified as policy as code that teams can integrate into their process, so they get immediate feedback loops? The more people need to move away from their context to do something, the less they'll be inclined to do so willingly.

The other part is: mind the UX. Developer experience should be the number one consideration when acquiring any security tool. If it's not providing a good developer experience, people will find ways around it. It's not going to be effective; it's going to be an uphill battle. Try not to do it.

The other part is that artifacts must enable traceability. That's a big part of the challenge I see in the transformation from ITIL-based, waterfall processes to DevOps. Often there are big silos, particularly around risk and compliance teams, that don't enable any traceability back to the level of technical backlogs. That's a challenge because, on one hand, those teams can't develop and build trust, because they aren't seeing any artifacts or output they can use. On the other, it makes things really difficult for them, so they fall back on all those boring meetings, all those boring questions, all those spreadsheets people still rely on, because the artifacts themselves aren't enabling that traceability. Thinking about how you can integrate traceability into your approach to security is really important, from the ISMS or risk management, however you do it, all the way to backlogs and automated testing, so you can have continuous validation and continuous visibility into the state of compliance and the management of risk, however you want to frame your needs. The last part, which relates back to the first one, is that you need to treat security capabilities as a product, not a project. They need to be nurtured. Someone needs to keep an eye on them, receive feedback from the people using the tools, and tweak them to meet their needs. All of those really need to be managed as products, not projects.

Meet Them Where They Are

This is about incrementality, at the end of the day. You're not going to move to a very bright future a month from now. You need to make this iterative and bring everyone along on the journey, because ultimately it's about communication; it's about breaking down these silos. Understand, for instance, that the gatekeepers, the risk and compliance people, have a certain worldview, and within it they see things that are preferable, plausible, possible, and things that fall outside the realm of possibility for them. Understanding this, and understanding that we need to be incremental in how we approach it, means knowing that, for a gatekeeper, a pen test and a code review are what they believe good looks like. Maybe we can use the opportunity to do some threat modeling and some policy as code, to help them build trust in the process.

After we do this for a while, we can start having wider integration with DAST and SAST, and other GRC tools they may have or may use. Ultimately, if we do all of these things, bring them on board the journey, and they can see the outputs and get further assurance and trust that the processes are giving them what they need, then let's talk about blameless post-mortems and chaos engineering. Currently, that may be too far off their view of the world. They may believe that the most appropriate answer to a security incident is to open a disciplinary process, not to run a blameless post-mortem, for instance. There are a lot of things that may need to change for a true gatekeeper to come on board with new and better ways of doing things, but it will take time. Knowing what those things are is key, and so is bringing them along on the journey.

Same thing for what I call the rainbow makers, the engineers actually making magic happen every day. Maybe they don't currently have a lot of knowledge of security methodologies, and they might not feel they have the agency to do a lot of the security things. Let's start them with just threat modeling, with the four-question framework, or STRIDE if you want something very simple. Let's start thinking about what can go wrong and what we can do about it. Also, process assurance: within the things they already see as their own agency, let's see what we can do to make them better. Let's look at our GitLab, or our Jenkins, or our S3 buckets. Let's look at the things they already feel are fully within their agency and knowledge, get them going, and give them a platform to surface those issues. Then we can provide some Jenkins libraries, specific build pipelines just to run the security validations. Let's give them tools and capabilities they can consume, to make it easy to consume security and use all of those capabilities.
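As an illustration of the kind of check a team could start with, here is a minimal sketch in Python that flags S3 buckets without a public access block. It assumes boto3 is installed and AWS credentials are configured in the environment; how it gets wired into the team's own platform or pipeline is left out.

# Minimal sketch: flag S3 buckets that do not block public access.
# Assumes boto3 is installed and AWS credentials are configured; the wiring
# into the team's platform or pipeline is intentionally left out.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_without_public_access_block():
    """Return names of buckets with no (or an incomplete) public access block."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(block.values()):
                flagged.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)  # no public access block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_without_public_access_block():
        print(f"Bucket without a full public access block: {name}")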

The same thing with policy as code. Maybe you've got a QA team that can write tests in whatever framework they already use. Maybe you are well resourced, and then you can choose a compliance domain-specific language, Chef InSpec or Open Policy Agent, and do it yourself. Different organizations will have different needs and different affordances, and will have different solutions available to them. Then you can introduce a methodology and more scenario testing. Ultimately, maybe you want to bring them into the full risk analysis process, making sure the work is fully on their own backlog and you serve more as a guide, giving direction and enabling, rather than dictating all the things they need to be doing. Let teams fully own security within their own products.
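To make policy as code concrete, here is a minimal sketch of a written policy expressed as an automated test, using plain Python with pytest and requests rather than a dedicated DSL such as Chef InSpec or Open Policy Agent. The policy wording and the endpoint URL are illustrative assumptions.

# Minimal sketch: a written policy ("public endpoints must enforce TLS and
# send an HSTS header") expressed as an automated pytest check. The policy
# wording and the endpoint URL are illustrative assumptions.
import requests

ENDPOINT = "https://example.com"  # hypothetical service under test

def test_endpoint_enforces_tls_and_hsts():
    response = requests.get(ENDPOINT, timeout=10)
    # Policy: the final URL (after redirects) must be served over HTTPS.
    assert response.url.startswith("https://")
    # Policy: HTTP Strict Transport Security must be enabled.
    assert "Strict-Transport-Security" in response.headers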

Measurement

Finally, we have measurement. It's always good to start a conversation about measurement with Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." Always keep that in mind. The thing with measurement in DevOps is that it's hard. It's hard because we have different customers, different people with different interests. It goes all the way from knowledge workers, who are trying to plan the next sprint or the next three sprints, looking just a tiny bit ahead at what they need to do; to directors and VPs, who deal more with the technical plans, what we want to achieve within the next 12 months; and finally the C-suite, who usually deal in multiyear cycles. Trying to come up with a way to aggregate all of this information, and to ensure that the narratives connecting the three different time spans actually relate to each other, is non-trivial. It's important that everyone understands that DevOps is an organizational transformation. It touches everyone, and so does DevSecOps. This is my proposal for how we can potentially connect those timelines together.

What this can mean, for instance: in this example I'm using normalization of risk against business attributes, as we do in SABSA architecture. Whatever your top-level risks are, however you manage them, just make sure that you connect to them. If you want to be trusted, for instance, that can mean, from a risk management, VP, and director perspective, that you want to make sure the systems are hardened. Simple to explain, simple to do. Then you can think about some key risk indicators. What is the percentage of systems where a RAG status goes from yellow to red, or from green to yellow? Think about key risk indicators that can give the VP and director level information about their systems, about the security properties that are relevant to them and that they signed up to. That's key here.
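As a sketch of how such a key risk indicator might be computed, the snippet below calculates the percentage of systems whose RAG status degraded between two reporting periods. The input dictionaries are a hypothetical export from an asset inventory; real data sources and scoring will differ.

# Hedged sketch: a key risk indicator such as "percentage of systems whose
# RAG status degraded this period". The input dictionaries are hypothetical
# exports from an asset inventory; real data sources will differ.
RAG_ORDER = {"green": 0, "yellow": 1, "red": 2}

def degraded_percentage(previous, current):
    """Percentage of systems whose RAG status got worse between periods."""
    degraded = [
        name for name, status in current.items()
        if RAG_ORDER[status] > RAG_ORDER[previous.get(name, status)]
    ]
    return 100.0 * len(degraded) / len(current) if current else 0.0

previous = {"billing": "green", "api-gateway": "yellow", "auth": "green"}
current = {"billing": "yellow", "api-gateway": "yellow", "auth": "green"}
print(f"KRI: {degraded_percentage(previous, current):.1f}% of systems degraded")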

What that can do is this: if we can identify that our systems need to be hardened, and we've got some key risk indicators for it, then when we introduce a practice of threat modeling, we can do risk-driven threat modeling. We know that one of the things we want is hardened systems, so that means I will look for threats that relate to security misconfiguration. I've got a way to mitigate them, and a way to continuously validate that the mitigations are still in place, so I always have assurance that within my process I am validating that my mitigations are still active. That's the key to scaling. That's the key to making sure that when people start consuming these capabilities, they are allowed, and even encouraged, to move at speed, because they trust that the process is validating that the mitigations are still working.

You need some process to connect them. This doesn't need to happen every time. For instance, Mozilla's Firefox process does a risk analysis at the service level. That helps inform engineering stories, which trigger threat modeling activities, which give us assurance that we've dealt with the risks we've identified for that service. Getting to this type of process, where we identify risks and they mean something in the context of engineering and of managing a technical backlog, and where that relates to activities such as threat modeling, is important. You need to augment this with metadata: whatever you believe is the right way to connect the different timelines, whether it's compliance, risks, policy statements, ASVS, or cheat sheets, whatever metadata you require for the conversations and the artifacts to be meaningful across these different time spans. That's the work you need to do to contextualize what this can mean for you.
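As a hedged illustration of that metadata, the structure below shows one way a threat-model entry could carry links from a top-level risk down to the backlog story and the automated test that keeps validating the mitigation. The field names, ticket reference, and file path are purely hypothetical, not a standard schema.

# Hypothetical example of the metadata that connects a top-level risk to a
# threat, its mitigation, and the automated test that keeps validating it.
# Field names, ticket reference, and file path are illustrative only.
threat_model_entry = {
    "risk": "Systems are not hardened",              # top-level risk / business attribute
    "threat": "Security misconfiguration (STRIDE: Tampering)",
    "mitigation": "Hardened base images enforced in the build pipeline",
    "backlog_story": "ENG-1234",                     # hypothetical ticket reference
    "references": ["ASVS V14 Configuration", "OWASP Docker Security Cheat Sheet"],
    "validated_by": "tests/policy/test_base_image_hardening.py",
    "last_validated": "2021-10-28",
}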

Product Risk Dashboard (Example)

That can lead to creating this type of dashboard. You agree on different key risk indicators for different attributes. You can provide this as a service to engineering, for instance, on their own engineering risk, and integrate it into the metrics and how you collect them. This provides not only a product-level view per team of their own compliance and their own security, but it can also be aggregated up, so a VP can have an understanding across their environment of which attributes are more or less assured, which helps in making informed decisions about prioritization.
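A minimal sketch of that aggregation, assuming each team reports a simple assurance score per attribute; the team names, attributes, and scores are hypothetical placeholders.

# Hedged sketch: roll team-level assurance scores per attribute up to an
# aggregate view a VP could use for prioritization. Team names, attributes,
# and scores are hypothetical placeholders.
from statistics import mean

team_scores = {
    "payments":   {"hardened": 0.8, "access-controlled": 0.6, "monitored": 0.9},
    "onboarding": {"hardened": 0.4, "access-controlled": 0.7, "monitored": 0.5},
}

def aggregate_by_attribute(scores):
    """Average each attribute across teams to get a portfolio-level view."""
    attributes = {attr for team in scores.values() for attr in team}
    return {
        attr: mean(team[attr] for team in scores.values() if attr in team)
        for attr in sorted(attributes)
    }

for attribute, score in aggregate_by_attribute(team_scores).items():
    print(f"{attribute}: {score:.2f}")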

Support the DORA Metrics

If nothing else, think about the DORA metrics. The DORA metrics, also presented in the "Accelerate" book as part of the research done by DORA, are these four: lead time, release frequency, change fail rate, and time to restore service. If you don't think about anything else from a metrics perspective, think about how whatever practices or tools you integrate into your process can affect the DORA metrics. For instance, lead time: manual reviews and upfront risk analysis can increase lead time. If we're waiting for people to become available to do the risk analysis, and we can't move until that happens, then it's going to be a bottleneck and lead to increased lead time. Lightweight, risk-based, service-level analysis, and establishing in the definition of ready the criteria for when further risk analysis is required, would be a way to mitigate the negative effect here.

Same thing for release frequency. If you've got manual vulnerability scans or pen tests that need to be done every two weeks or every month, that will decrease release frequency; integrating the controls into the pipeline is always a better way. Then, change fail rate. Failing builds on security findings increases the change fail rate. We should be very careful about what we set to blocking, and we should ensure there are ways, in the context of engineering, for engineers to manage false positives. This means we should really think about how they can manage false positives as code, for instance: they can assess what the problem is, make a commit with the message, "I believe this is a false positive, because x, y, or z," and your system of record keeps a record of the analysis that was done and why it was deemed a false positive. It's key to do this in context. If we require people to always move between spreadsheets or portals and their code, it's going to be less integrated, and people will want to avoid it. Finally, time to restore service. Ideally, make security incident management a small extension of your overall incident management process. If you can't, then do a lot of testing, so you know exactly how you can support the organization if, or when, you come to have a security incident.
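For a sense of how these four metrics fall out of data most delivery pipelines already have, here is a minimal sketch. The deployment and restore records are hypothetical; real tooling would pull them from version control, the CI/CD system, and the incident tracker.

# Minimal sketch: the four DORA metrics computed from deployment and incident
# records. The records below are hypothetical; real tooling would pull them
# from version control, the CI/CD system, and the incident tracker.
from datetime import datetime, timedelta
from statistics import mean

deployments = [
    # (commit_time, deploy_time, failed)
    (datetime(2021, 10, 4, 9, 0),  datetime(2021, 10, 4, 15, 0), False),
    (datetime(2021, 10, 5, 10, 0), datetime(2021, 10, 6, 11, 0), True),
    (datetime(2021, 10, 7, 8, 30), datetime(2021, 10, 7, 12, 0), False),
]
restore_times = [timedelta(hours=2)]  # one entry per failed change
period_days = 7                       # reporting window for the records above

lead_time_hours = mean((deploy - commit).total_seconds() / 3600
                       for commit, deploy, _ in deployments)
release_frequency = len(deployments) / period_days
change_fail_rate = sum(failed for _, _, failed in deployments) / len(deployments)
time_to_restore_hours = mean(t.total_seconds() / 3600 for t in restore_times)

print(f"Lead time for changes: {lead_time_hours:.1f} h (average)")
print(f"Release frequency:     {release_frequency:.2f} per day")
print(f"Change fail rate:      {change_fail_rate:.0%}")
print(f"Time to restore:       {time_to_restore_hours:.1f} h (average)")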

Measurement Takeaways

The main thing I'd like to point out here is that measurements need to be relevant to the agency of the people looking at them. The metrics at the board level will be different from those at the team level, but it's important that they're connected. The work is really to contextualize these metrics and negotiate the targets. Just saying we need more than 80% of systems hardened, when you currently have 10%, is not going to work. That team would be in the red for the next year or two, maybe, and that's not useful for creating momentum in the program. These targets really need to be based on an understanding of what current looks like. Then we can stretch a bit further, achieve the next target, and start building momentum on treating security findings.

Sharing

Finally, sharing. The main thing about sharing is that you really need to focus on making it psychologically safe to report weaknesses, and we should hold people accountable, no matter how senior, if they try to undermine that. If someone comes in immediately with a disciplinary process once you've had a security failure, that's not going to make a psychologically safe environment. People will try to make sure that those types of issues don't get reported. This is critical in DevOps: make it psychologically safe to report weaknesses.

Whatever you can do to uphold this will pay dividends in the short and the long run. In terms of sharing, don't just write documentation, and especially don't write it where people won't see it, like the Word documents or spreadsheets that no one ever accesses. For every capability, write as close to the client context as possible. The client, in this case, would be the engineering teams. If they use Confluence for their documentation, write in Confluence; ask them to create a sub-folder in their own pages. If they use GitHub for documentation, create GitHub Pages. Try to make it as close as possible to the context they're used to.

The other part is, create short videos about them. Let's say you've created a new capability; for instance, you started using Trivy for container scanning. Create a GitHub page, let people copy and paste the command using Docker, say, and create short videos: "This is how you use it." Just show them: short, sweet, simple. Make it easy for anyone, at any time, to consume that information asynchronously. If someone asks, don't just send them links. Have a too-long-didn't-read version ready for them. "Do you know how we use Trivy?" "Yes, the TL;DR is this command. You can just copy and paste it. Here's more documentation if you want to go deeper. Let me know if you want to pair on it." It's that type of sharing, and making yourself available, that really makes cultural change over time.

Also, communities of practice. They're a great way to establish a sensor network to understand how people are really consuming and living security. Try to keep organizational targets out of them. Make them a safe place for everyone to talk. The main thing is having a clear code of conduct and being ready to enforce it. For the security people out there, it's mostly a place to listen and support, not to talk a lot. Try to make it your sensor network for understanding how people are actually using and thinking about security.

Summary

DevSecOps is more than tools and automation. Focusing on culture, relevant measurement, and sharing is free, but it requires deliberate action. The way I like to frame it is: you need to have the right culture and the right sharing, or be doing the things that build them. You need decent, available measurements so you're not operating in a void, and so you can understand whether you're going in the right direction with any reasonable technology. Technology is the part that is becoming commoditized. The more vendors appear, the more commoditization of security capabilities happens. That's not where the focus must be. We really need to focus on the other three aspects, culture, measurement, and sharing, and make sure those are front and center of our security transformation programs. As Dave Snowden has been saying for a while now, "Focus on doing the next right thing." That's also the strategy embedded within "Frozen II."

Questions and Answers

Ruckle: We're all practitioners and technologists here. It's so easy to focus and fixate on tools; there's so much innovation with tooling and things happening in open source. I thought that your focus elsewhere, apart from the tools, was really timely and needed, because in general, the more time you spend with this stuff, the more you see that the technology is the easy part; people and changing behavior is the more difficult part. I love the phrase organizational anthropologist that you used. I've used that one myself to really get at that notion, and it gives you a useful framework for thinking about these types of transformations and improvements. I also appreciated the focus on measurement. You had your Goodhart's law there, and I thought, "Metrics? Where's he going with this?" Then you circled back around to how important that was. Metrics and incentives, I think, are a really crucial part of this. There's still a really high number of organizations that do things without any notion of outcomes in mind.

One of the things that leapt to mind is this notion of accountability. It's really central to ensuring people in different roles align around these bigger picture metrics and outcomes. Even here, people say security is everyone's responsibility. Tell us a little bit about this idea of accountability and how it fits into new practices and how people in different roles should be thinking about this.

Platt: It's a big problem with regard to accountability, particularly when it comes to the InfoSec core. By InfoSec core, I'm referring specifically to the functions of governance, risk, and compliance. On one hand, we've got this culture of overuse of the word accountability, where every time a breach happens, someone says, who's accountable? What they actually mean is, who's culpable? Who's to be blamed? I think that view of accountability isn't very useful anymore. The field of safety, for instance, is already starting to evolve away from those narratives. The thing is, the line of negligence will always exist at every company. The problem is the weaponization of "security is everyone's job," and the implications of that. On one hand, we need to be focusing on the systems that enable the creation of secure software. On the other hand, a very linear understanding of accountability is detrimental.

I'm going to use the example of the Challenger accident that happened at NASA a long time ago. One of the things that came out in the report was that the information environment wasn't conducive to making good risk decisions. The example that Feynman refers to, for instance, is that on the same slide the word critical was used in five different ways, meaning anything from "this is delayed by two days" up to "everyone dies." It was the same word. It's that view of the system, that view of the information environment we're building for management: until we focus on that first, it's hard to hold people accountable. An exec received a deck that had two slides about security, we summarized it to the point of making it meaningless, and then we want to hold them accountable for not making the right decisions. It's not that that line of negligence won't ever exist; it always will. I think, as an industry, we get too quickly to the question, how could they be that stupid, or how could they be that negligent, and we don't spend enough time on, why did it make sense to them at the time? With the information they had available, why did it make sense? Why was it the right decision to make?

I think it's that difference where we need to start striking a balance. Understand that security is an emergent property; it's not something that we do. It lives in the interactions with the outside world, the ops people, the attackers; it's in those interactions. Linear thinking and linear methods applied to them are not so helpful. Instead of thinking about individual accountability, we really need to start thinking more about work systems: how we're dividing tasks, how we're connecting tasks. We still need to do more work as an industry to really connect these different timescales and these different risk and compliance processes with each other, so that we don't weaponize "security is everyone's job," not saying it as a way to deflect accountability, but using it as a way to build our work systems so they enable true communication between the different parties that are part of making things secure.

Ruckle: There's the accountability conversation when you're looking backwards, looking for someone to blame in particular. Then there's the notion of accountability looking forwards, being proactively accountable. You mentioned blameless post-mortems and retrospectives, and changing some of the language around that. I think that's a really key thing, because things are going to go wrong; the key thing is how you respond to it. There was the Fastly outage on the internet; that's a phenomenal story, and I think an example of how that kind of thing gets handled really well.

Do you have advice on how to overcome the chasm between policy as paper versus policy as code? From my point of view, that is a new way of thinking inside a company. This is where I think some of the artifacts conversation comes into play.

Platt: Yes, I agree. I think a big part of the problem is the combination of traceability and skills. For instance, most people who go into a risk or compliance type of role within security often make that call because they don't see themselves as technical enough. They do like security; they want to do it. We need GRC people, loads of them. They choose a path that they think best fits either their natural abilities or their current skills in a particular area. That's not a problem in itself; that's how things go. But look at what happened with DevOps. The whole DevOps movement, in a nutshell, the way I like to look at it, allowed people to collaborate and work on these things as value streams. The problem is, now we've got development teams writing their own tests, writing their own things to improve the quality or robustness of their systems, but security hasn't come into the practice, hasn't joined in the fun. We're still working on our spreadsheets and our Word documents, and expecting things to be integrated. It doesn't work like that.

What I would say is that fixing that problem is very contextual. For instance, in my current role at CloudMargin, the security team is largely me. I use a lot of time from other people, but it's largely me. It would not make sense in my own context, a startup-type environment, to go for a compliance domain-specific language, something like Chef InSpec or Open Policy Agent. I just would not be able to scale that. What I want is for my policy to be validated as code. If I already have QAs working directly within the teams, and they know my context, it makes sense to upskill my current QAs so they can provide that ability to scale, particularly in codifying things as policy as code. In other contexts, you may have a big compliance team. If you've got one or two more headcount for that team, instead of hiring someone with loads of compliance experience, go hire a QA engineer or a developer who has an interest in moving into governance, risk, and compliance, and let them upskill the rest of the team.

A project that I'm working on right now, where we meet every month, is the ASVS agile delivery guide. We're taking the OWASP ASVS standard, writing user stories for it, and then connecting them to the OWASP cheat sheets, so you've got off-the-shelf stories that you can just take, contextualize, and give to your QAs. I think it's these types of initiatives, which try to break the siloing of the artifacts from start to finish, that will help the industry as a whole move towards that model. In my view, because of this supply and demand in upskilling, in 10 years' time, if you are a compliance person with no ability to code whatsoever, I'm not sure there will be a place for you in the industry. These things happen over time, as with everything. Someone who built cars pre-Ford and someone post-Ford bring whole different skill sets to the table. I think that's the type of transformation we're also seeing in security compliance specifically.

Ruckle: I think you make a great example there of somebody who is technical but has an interest in compliance. That is something that will address the skills shortage we're always reading so much about, where you give people a path. You give them some growth areas to think about, becoming these agents of change.

Platt: What I would say is, if someone is looking into making that transition, so currently a risk or compliance person, the best place to start is to grab requirements you already have and learn how to write BDD-type tests for them; write the Gherkin syntax. If you can write the Gherkin syntax for a requirement, then you're halfway to actually writing the validation as code. There are online labs, and there are many things you can do to start getting down that route.
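As an illustration of that halfway point, here is a hedged sketch: a compliance requirement written Gherkin-style (shown as comments) and backed by an executable check in plain Python with pytest and boto3. The requirement and the 14-character threshold are illustrative; a framework such as pytest-bdd or behave would wire a real feature file to step definitions.

# Hedged sketch: a compliance requirement written Gherkin-style first, then
# backed by an executable pytest check. The requirement and the threshold
# are illustrative; pytest-bdd or behave would wire a real feature file to
# step definitions.
#
#   Feature: Account security compliance
#     Scenario: Password policy meets the documented standard
#       Given the AWS account password policy
#       When I read its minimum password length
#       Then it must be at least 14 characters
import boto3

def test_password_policy_minimum_length():
    iam = boto3.client("iam")
    # When: read the account password policy
    policy = iam.get_account_password_policy()["PasswordPolicy"]
    # Then: the minimum length meets the documented requirement
    assert policy["MinimumPasswordLength"] >= 14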

Ruckle: How do you feel about more security responsibility being transferred to Dev and Ops people in a DevSecOps world? Can this not create risk? I love this question because it gets at the idea that if we were all doing DevOps correctly, we wouldn't need to put the Sec in there. I think the Sec folks got left out of the conversation, as you were pointing out, and now there needs to be this shift-left thing and this big focus on security, especially with ransomware and all these attacks in the news that are keeping a lot of CIOs and CISOs up at night.

Platt: Again, from a cybersecurity perspective, we are about a decade behind some of the advancements that are already happening in the safety space. In safety, there's a distinction between what they call the blunt end and the sharp end: the sharp end being the people doing the doing, and the blunt end the people doing the governance and compliance work. The problem is that the model of complete separation and no integration between those two areas is dated. Someone who understands governance, risk, and compliance, and also understands their operational environment, knows that your processes will never cover 100% of the things people need to do on a daily basis to actually do their job. When you're an engineer, and I have more than a decade of engineering experience, you are always balancing different pressures. On one hand, you don't want to have breaches or accidents. You've got economic pressures: deadlines, budgets, all of that. You've got workload as well. You can't just check everything and do everything; it's simply not possible. People have limited skills and limited bandwidth to keep all of that information in their heads.

Our measure of responsibility and accountability needs to consider this. It needs to consider: what is the information a developer needs at 9 a.m., when they log into the computer and are about to commit code? In that context, in that scenario, what is the minimum security knowledge they need in order to do things securely? That's the problem with the weaponization of "security is everyone's job": sometimes we just say it out loud without putting any boundaries on what it actually means. If we say that what a developer needs to do in their context is to verify insecure libraries, and to verify that input is being validated by our application, then we need to help them build that minimum security into the process and constraints of DevOps, through the definitions of ready and the definitions of done. For everything that can't be captured within the minimum required expertise for developers, they need to be able to summon the experts into the system. This is an enablement mode. It's not a command-and-control governance mode; it's enablement. It requires a completely different set of skills to integrate this work. Again, if I talk with people from the technical field, they get that DevOps has to include security. It's the security people who are often much harder to bring on the journey. That's my experience, generally speaking.

 


 

Recorded at:

Nov 07, 2021
