
Sustainable Security Requirements with the ASVS


Summary

Josh Grossman provides a brief overview of what the ASVS is, but takes a closer look at balancing trade-offs and prioritizing different security requirements. Josh shares how to make the process repeatable and how to implement it as part of your own organization's requirements process.

Bio

Josh Grossman has worked for 15 years as a consultant in IT/Application Security and Risk, as well as a Software Developer. He is currently CTO of Bounce Security, where he spends his time helping organisations improve and get better value from their Application Security processes and providing specialist Application Security advice.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Grossman: I want to talk to you a little bit about the ASVS, the Application Security Verification Standard, and how we can use it to actually get better value early on in the development process. Let's start off with a story, first of all. I was working with an organization, let's call them Lumber, Plc. This is a big software organization. They had a variety of different products, a variety of different lines, lots of different teams, thousands of developers, and they were looking to implement a secure software development lifecycle. I came into this quite late on. I came in when they'd already built up that documentation, and they wanted me to help them implement it. They'd built this big document that was top to bottom, front to back, everything you might want from a software development lifecycle with security built in. They'd thought from the very beginning to the very end, all the way through, ok, here are the different activities we want to do, top to tail. They'd come up with the big bang approach. They decided, ok, we, the application security team, are going to go into each product, each team, we're going to spend about three months with them implementing everything in this list, and then we're going to move on. That's already a little bit of an interesting approach, a little bit of a challenging approach.

They thought, what about requirements and design? What are they going to do for that? What have they got in the document for that? For that, they decided they were going to build the checklist. This is what it was called. It was called the checklist. It was a list of security items they needed to consider when they were doing requirements, when they were starting their design phase. I started to ask people about the checklist. I said, what is this? Have you completed this? Have you just done it once or are you doing it on an ongoing basis? Are you updating it when you make changes? Is this covering just one requirement or is this covering the whole product? I found it quite difficult to get answers. I found it quite difficult to actually get a good picture of what was going on with this list, of whether it was actually being used. Everyone seemed scared to talk about it.

Eventually, I sat down and I thought, I need to have a better look at what this checklist actually is. I looked at the checklist, and it was a monstrosity. It was huge. It was really long. It was complicated. I'm not surprised people were worried about using it. I'm not surprised people were scared of it. I was scared of it, and I understood most of it. How can anyone work with this thing? How can anyone use that, especially in a faster, more Agile development process? What is anyone supposed to do with that? They can't go through all these questions every single time. The bottom line is we need security to be involved earlier on. We want security to be involved all along the way. We don't want security to come along at the end and say, that's bad. We also can't just drop a ton of paperwork, a ton of checklists, a ton of information on developers early on, and expect them to be able to swim with that, to be able to cope with that, or want to cope with that. We need to think about how we're going to do this in a sustainable way now, and keep going in the future as well, in a way that maintains a level of security but also doesn't drown the developers.

What Are the Problems?

What are the problems? What are the key problems here? Information and requirement overload. Lumber had come in and said, here's this giant, massive checklist that you have to deal with every single time; this always has to be in your head. We can't be taking in that much information. We need to customize our approach to how we put security in early on. The next problem was that Lumber had this AppSec team that was diving in for three months, diving out, going on to the next team. That was the approach they thought they were going to take. Security can't just be this outside force. Security can't just come in, make a big noise, and then go back out again. Security needs to be integrated and security needs to be contextualized. You need to be clear, what are we concerned about from a security perspective in this case? If everything is important, nothing is important. We can't just say you need to do all of these things all the time, everything has to be done right now, right away. We need to be able to prioritize. We need to be able to say, what's most concerning to us at this point in time? What should we be most concerned about right now? Finally, let's say for argument's sake that this three-month approach, going in, working with the team, going out again, went ok, which I can guarantee you it did not. What about tomorrow? What about going forward? We don't need this just to work today, we need this to work on an ongoing basis. We need to operationalize this process. We need to make sure that this process works today, tomorrow, and for future development cycles as well.

Profile and Outline

I live and breathe AppSec, whether on the breaking side or working with developers, working with development teams, looking at how we can build software securely in a more efficient way. That's very much my day to day. I'm a consultant, which means I go from company to company and see lots of different environments, lots of different sizes, lots of different industries. Finally, we're going to talk about the ASVS. It's an OWASP project that I'm a co-leader of. Being a co-leader means obviously I'm quite familiar with using it. I've talked a lot in past talks about what the ASVS is. That's not the main purpose here. I want to focus on actually using it, the practicalities of how we're going to build it into our development lifecycle. This is basically the plan. What is the ASVS? How does it compare to other OWASP projects? Then, how can we use it in our requirements process? Those are the different sections that we'll talk through.

What Is The ASVS?

What is the ASVS, or rather, what is the ASVS not? Which I think is a good point for context. OWASP Top 10 Risks. This is the OWASP project everyone's heard of. It's a great project, released every three to four years, and it has a really strong team of leaders with a lot of security expertise. They also gathered a lot of public comments, a lot of public data, for the most recent versions, 2017 and 2021. It's very frequently cited. It's mentioned by all sorts of different organizations as something to consider. It's designed as an application security awareness document. Awareness is the big word here. This isn't controversial, this is front and center of what this project is about. The idea is that it's an awareness document; it's there to build awareness about application security. It's not a standard. It's not comprehensive. It's not something that you should be assessing yourself against. These are the 10 things that we think are most concerning from an application security perspective. It's certainly not all the things. It's not necessarily even 10 things. One section may actually cover a wide variety of different issues.

The other drawback to the Top 10 list is that it's bringing problems. It's saying, here are 10 problems, now what are you going to do? I don't love bringing that to developers. I don't like bringing problems. I want to bring some form of solution or something proactive. Like, for example, the OWASP Top 10 Proactive Controls. This is another great document. This is a guidance document for developers and builders on how to build software securely. We're not bringing problems, we're bringing solutions. I'm not going to go into detail on each of these sections, but you get the idea. It's about providing security mechanisms. It's a great starting point. It's practical. It gives sensible prevention advice. Also, a great team of leaders with a lot of experience. It suffers from some of the same problems that the Top 10 Risks suffer from. It's still not comprehensive. Again, it's more for awareness. It's more of a starting point. Ok, you want to secure software, let's start from here. Then you can graduate onto other things afterwards. It's also not organized as a standard. It's not very easy to assess yourself against it. It's more of a narrative document. It's a good read, but it's not necessarily useful for a comparison: where are we? What is our position?

So what are we going to do if we want to build secure software in a methodical way? The answer is the ASVS. Finally, what is the ASVS? The ASVS brings a set of requirements that you'd expect to see in a secure application. It's designed to be an actual standard, designed to be something that you can assess yourself against, compare yourself against, compare to other applications, and it's generally organized in a very methodical way. We consider it to be leading practices, which means that we try and look for requirements, for controls, mechanisms that are valid not just today, but will also be valid in the future. Maybe they're quite new today, but we expect to see them more standardized in the future. While, again, I'm lucky to have a bunch of very skilled, very experienced co-leaders, it's very much developed in the open on GitHub. Anyone can open issues. You can suggest changes. You can just provide your feedback, provide your ideas. It's all very much in the open, all the discussions are in the open. Which means that it's not just what the project leaders have seen, but we're also getting feedback from the wider industry as well. It's split into three levels. You have the initial requirements, which are considered the minimum level. Then you add more requirements onto that to get to the standard level where we'd like everyone to be, level two. Then finally, level three is for the most sensitive applications, the most sensitive data, and the highest-value transactions. When you get to level three, you need to be doing all the requirements.
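As a rough illustration of how the levels nest, here's a minimal sketch in Python. The records and the single level field are simplified assumptions for illustration; the real machine-readable ASVS exports encode levels differently, so adapt the field names to whatever version you use.

```python
# Minimal sketch of cumulative ASVS levels: a level-N assessment covers
# every requirement tagged at level N or below. The records and the single
# "level" field are simplified assumptions; adapt to the real ASVS export.
REQUIREMENTS = [
    {"id": "V2.1.1", "level": 1},  # minimum level for everyone
    {"id": "V6.2.2", "level": 2},  # the standard level we'd like everyone at
    {"id": "V6.2.7", "level": 3},  # most sensitive applications only
]

def for_level(target: int) -> list[str]:
    """Return the requirement IDs in scope for an assessment at `target`."""
    return [r["id"] for r in REQUIREMENTS if r["level"] <= target]

print(for_level(1))  # ['V2.1.1']
print(for_level(3))  # ['V2.1.1', 'V6.2.2', 'V6.2.7'] -- everything
```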

Security In the Requirements Process

This is what the document looks like. As you can see, it's quite detailed. We also try and make sure that each requirement is standalone, focusing on a particular issue that you can then look at and ask how you're addressing it in your own application. What I do want to talk about is security in the requirements process. These are the problems and the solutions that we talked about at the beginning. If we go back to the story of Lumber, Plc., and the challenges they had with their giant checklist: the ASVS is a big checklist too, but the whole point here is to show how we can actually work with it practically. These are the four areas that I want to talk about. I'll give some examples of each one. Let's start off with customization. Lumber tried to bring all this information at once, they tried to drop all this information in one go. How can we customize to make it more focused and more specific to the particular problem at hand? The ASVS has about 280 requirements. We don't want to be looking at those for every single feature. We want to be able to focus on what counts. We want to be able to focus on what is important for this particular feature, for this particular stage in the application's development. The first useful thing is forking the ASVS: take the ASVS, take your own copy of it, and start working on that copy. There are going to be things that are specific to your organization, things that are specific to your situation. It's definitely worth starting off with your own version that you can then make certain modifications to. That's very much supported and recommended by the document itself.

There are a few do's and don'ts here. You do want to match it to your particular situation. If you can justify dropping certain irrelevant requirements, then do that. For example, if you're not using GraphQL anywhere in your organization, you probably don't want to be thinking about the requirements related to GraphQL. Certainly, if you've got changes that you think are relevant not just for yourself but also for other users of the ASVS, please do send those to the upstream project. On the other hand, I strongly recommend not changing the numbering. I'd suggest that you try and keep the requirements where they are, so that if in the future you want to compare to the main standard, you've still got that comparison point. Also, don't make changes without a rationale. Don't make changes or drop things without explaining why, because two years down the line someone's going to ask why, and you want to have that answer clearly to hand. If you're going to drop things because they're not in use at all, then that's fine. Don't just drop things because, ok, today we can't think about this, we'll think about it tomorrow. Because it'll get lost, and suddenly the things that were temporarily dropped will end up being permanently dropped.
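To make those do's and don'ts concrete, here is a minimal sketch of a tailoring pass over a forked copy. The JSON structure, file name, and shortcodes are illustrative assumptions (the real exports in the OWASP ASVS GitHub repository are richer); the point is that dropped requirements keep their numbering and carry a rationale rather than disappearing.

```python
# A sketch of tailoring a forked ASVS copy: mark requirements that don't
# apply (with a rationale and review date) instead of deleting them, so
# the fork stays diff-able against the upstream standard and the original
# numbering is preserved. Field names and shortcodes are illustrative.
import json

DROPPED = {
    # shortcode -> (rationale, reviewed date). Never drop without a reason.
    "V13.4.1": ("No GraphQL anywhere in the organization", "2023-06-01"),
    "V13.4.2": ("No GraphQL anywhere in the organization", "2023-06-01"),
}

def tailor(requirements: list[dict]) -> list[dict]:
    """Return a copy where dropped requirements are flagged, not removed."""
    tailored = []
    for req in requirements:
        req = dict(req)  # don't mutate the upstream copy
        if req["shortcode"] in DROPPED:
            rationale, reviewed = DROPPED[req["shortcode"]]
            req.update(status="dropped", rationale=rationale,
                       reviewed=reviewed)
        else:
            req["status"] = "active"
        tailored.append(req)
    return tailored

if __name__ == "__main__":
    with open("asvs_fork.json") as f:  # hypothetical forked export
        reqs = tailor(json.load(f))
    print(sum(r["status"] == "active" for r in reqs), "active requirements")
```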

The first thing is to tailor it to your organization. Again, what's relevant to you? You want to identify that upfront. You want to be clear, ok, we know that these requirements are specifically relevant, we want to focus on those. We know these requirements are less relevant, we want to focus less on those. We don't want a situation where your developers, at the coding phase, at the actual point of development, are trying to make that decision, trying to make that determination. Or they're pushing back, saying, "Why have you given me this requirement? This doesn't relate to me. This doesn't relate to what I'm doing right now." We want to try and resolve that in some way up front. We want to make it specific to a feature or a product. We could do that by saying, we're going to focus on this level of requirements, or we're going to focus on this chapter. It's not super straightforward that way. The levels are not necessarily matched to how a particular feature works, or how a particular organization works. Similarly with the chapters. Maybe if you're building an authentication mechanism, then, yes, you can take the authentication chapter and focus on that. Often, it won't be as clear-cut as that. It won't be as straightforward. You may need requirements from a variety of chapters, which is where the idea of custom splitting comes in.

The idea is you want to create a customized way of saying, here are the requirements we want to see for these features. The way I suggest doing that is by having categorizations for features, almost like questions, saying, does the feature do this? Does the feature do that? Does the feature change your authorization mechanism? If so, we probably need to show the requirements for a relevant way to do authorization. If it doesn't, then we don't want to show those requirements. Does the feature accept untrusted input? If so, then we need to start thinking about how we're protecting ourselves against that input, and how our inputs are going through the system. If it doesn't, if it's just a simple view that doesn't really accept anything to process, then again, we don't need to see those requirements now. Does it perform encryption operations? There are a whole bunch of requirements in the ASVS related to cryptographic keys and key management and algorithms. If we're not doing any encryption beyond TLS, then we don't need to talk about that now. We don't need to see that now. We don't want our end users to have to go through and mark things as not applicable. We want them to just not have to worry about that. We want that off their minds.

Here's an example I built for an organization. These are questions we started asking about the features. You'll notice that this is specific to the organization, it's difficult to make this generic. That's why it doesn't come in the ASVS as it is. It's difficult to make this generic and to make this something that will apply to all organizations. For example, they were using Auth0 for authentication, so a lot of authentication stuff was actually offloaded to Auth0 in the first place. Here we can see that for this particular feature they selected that it relates to business logic, and that it's modifying how OTPs are generated, which is a strange combination, but there we are. Now we can see it's given us a list of ASVS requirements. These are the ones you need to be worried about. These are the ones you need to think about for this feature. Everything else, wait for a feature where it becomes relevant. Then we can select maybe another couple of questions as well, maybe the scope of the feature has been expanded. Again, alongside that, extra requirements come in. We've still got requirements to go through. We can't get away from thinking about security at the requirement stage. We can say, which requirements do we need? Which aspects of security are important to us?
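A sketch of what that question-to-requirements mapping could look like in code follows. The questions and the requirement IDs here are illustrative, not the actual mapping from that engagement: each yes answer pulls in a subset, and the developer only ever sees the union of the subsets that apply.

```python
# A sketch of "custom splitting": yes/no questions about a feature map to
# subsets of ASVS requirements, and a developer only sees the union of the
# subsets for the questions answered yes. Questions and requirement IDs
# are illustrative, not a recommended mapping.
SPLITS: dict[str, set[str]] = {
    "Changes authorization logic":        {"V4.1.1", "V4.1.3", "V4.2.1"},
    "Accepts untrusted input":            {"V5.1.3", "V5.1.4", "V5.3.3"},
    "Performs non-TLS crypto operations": {"V6.2.1", "V6.2.2", "V6.4.1"},
    "Accepts file uploads":               {"V12.1.1", "V12.2.1"},
}

def requirements_for(feature_answers: dict[str, bool]) -> set[str]:
    """Union of requirement IDs for every question answered 'yes'."""
    selected: set[str] = set()
    for question, answer in feature_answers.items():
        if answer:
            selected |= SPLITS[question]
    return selected

# Example: a feature touching authorization and taking untrusted input.
print(sorted(requirements_for({
    "Changes authorization logic": True,
    "Accepts untrusted input": True,
    "Performs non-TLS crypto operations": False,
    "Accepts file uploads": False,
})))
```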

In summary, we want to take a copy of the ASVS, take a fork of the ASVS that suits us. We want to tailor the requirements to the organization. I strongly recommend using some custom splitting, some custom categorization, to make that feature-specific, and to make sure that what's coming in front of the individual developer is only what they need. They don't need to start marking all sorts of other items as not applicable.

Security As an Attribute of Software Quality

We talked about Lumber, Plc., and the security team parachuting in, being around for a few months, and then thinking they're going to jump back out again and go somewhere else. This is representative of a wider problem of security thinking about themselves in two ways. First of all, they're not part of the team, they're the security team. They're special. Security are this special team that come in and have this expertise, and they're going to say something and it's going to be like a commandment that everyone is now going to follow. Everyone's going to do that, and that's it. Security said it, it must be amazing. I'm going to give you some advice, certainly for the security people here, and it's going to be quite dangerous for you if you don't take it the right way and don't apply it the right way. Security is not special. Security is not a special snowflake. This is not some esoteric, on-the-side, unusual aspect that is completely separate from the process of building software. If we think about software, we think about, what do we need in software in order for it to be considered acceptable? Considered acceptable as in, we can deploy this software and people can use it. If you look at Wikipedia and the quality attributes of software, the list gets pretty long. ISO 25010 defines quality attributes for software. It talks about performance. It talks about portability. It talks about usability. Surprisingly, it talks about security. Security is just another attribute of software quality.

If someone's going to deploy a piece of software, they're going to ask, does it perform acceptably? Does this feature respond in an acceptable amount of time, or is a user clicking and then waiting 10 seconds for something to happen? They're going to ask, ok, can a user understand how to use this? Is it clear what the flow is? Is it intuitive for a user to walk through the feature, or are they going to be scratching their heads, searching Google, and getting frustrated? We should be using the same logic: is this feature secure? Is this feature going to operate in the way that we expect it to? Is it going to expose all of our users' data, or crash our whole application? No one's going to accept releasing a feature that doesn't perform acceptably, or that users are going to get angry about because they don't understand it. Why should we accept a feature being rolled out that's not secure? I think that's an important way of contextualizing how we think about security in our development process, and in our application process overall. Security is just another attribute of software quality.

Threat Modeling

One minor problem is that security doesn't necessarily mean the same thing to everyone. Not everyone has the same threat model. Different organizations have different security concerns. Different things are going to ruin their day. Threat model is obviously a bit of a buzzword. I want to quickly define it in a very simple way, just to make sure it's clear what I'm talking about here. From my perspective, threat modeling in this context means we are intentionally considering what can go wrong. We're not just doing it as a side thing, we're actually thinking about, what are the security things that could happen and make things bad for us, in our particular case? Intentionally considering what can go wrong in your case. I think that's a very simplified way of talking about threat modeling. I did get this definition approved by a threat modeling activist who also happens to be my boss, and who is one of the authors of the Threat Modeling Manifesto. If you want to read more about threat modeling, the Threat Modeling Manifesto is a good place to start. I think for our purposes, this gives us a simple enough definition of what we're talking about by threat modeling.

The idea is, what's going to be worst for our business? What's going to make our organization, our business, have a really bad day? We want to make sure these issues are documented. Maybe that's something that's already been done. It may be that the security team has actually prepared that somewhere. Maybe someone in the company's risk management organization has prepared it. Maybe they know what the worst impact to the business is. We need to have that in our minds, because that's going to guide the next stages that we go through. That's going to give us the context to think about, what do we need to be concerned about from a security perspective? Consider these potential impacts. If we lose customer data, then our reputation will be irreparably damaged, and we won't have any customers anymore. These aren't the only impacts; there are obviously ethical considerations as well. I'm just trying to pull out specific examples. If our site is down for more than 30 minutes, we will lose more revenue than we can afford. Maybe we're a very busy e-commerce site, and the day before Christmas, it goes down. Suddenly, we can't take in orders, we can't take in money. That's going to be a huge revenue hit. If an attacker could alter data in our system, then our customers wouldn't trust us, and they'd go elsewhere. Our business model relies on trust from our users. If our users don't trust us, we don't have a business anymore. These are examples of what might be the most concerning thing. For each organization, that's going to be different. Each organization has to figure that out.

In summary, we want to provide context. We want to understand that security should just be another quality attribute, just another thing we consider when asking, is this software acceptable or not? We need to think about, what is going to be the worst impact for our business? What's going to make our business have a really bad day? We want to document that, or find where someone else has documented it, and use it as a guide for the other problems we're trying to solve here. Ultimately, for Lumber, they really needed to have this context. They didn't have this context, they were just doing a one-size-fits-all dive in, dive out. Had they had someone who was more permanently attached to that particular team, or who was bringing better context to their process, it would have been easier to show the teams, here are the important things we need to consider.

We now need to use this context to prioritize. We need to think about, what do we want to consider today? What do we want to consider tomorrow? We can't do everything today. If everything is important, nothing is important. We've got a bit of a challenge here, maybe one that's a little bit more difficult, because if security is just another requirement, if security is just another part of software quality, then we have to balance it against everything else. That's why we need this prioritization. We need to be able to say, this is the most important thing to do. Then, when I go to a product manager and say, we need to implement this particular requirement, we need to build this particular mechanism, we can balance that against the product manager saying, "I need this developer to work on making this faster, or making it more understandable." Because we have to balance against everything else, we have to be able to come with the most important aspect. For that, we've got the threat model. Again, this was a problem with Lumber. Lumber, Plc., didn't have this prioritized approach. They couldn't say, we just need to do this here. Their idea was, we need to do everything, we need to cover everything. Now we've got our threat model from before that we can use to figure out how to prioritize.

Real Problems Based on Threat Model

We talked about potential impacts before. If we lose any customer data, our reputation will be irreparably damaged. Maybe in a particular feature we're thinking, a malicious user could use this feature to access data. Take a view-my-user-profile feature: an attacker could use that to view other users' data as well. The key impact here is one of our big impacts, one of the main items in our threat model. This is a scenario we're going to be worried about. We're going to want to make sure that a requirement like 4.2.1, around preventing insecure direct object references and preventing access to records that a user shouldn't have access to, is front and center for this feature. Going back to the second potential impact, around availability: maybe we're creating a photo upload feature. Maybe if someone sends lots of very large files, very large photos, up to this feature, it will crash the application. Maybe the application will spend so much time churning, trying to process these files, that it won't have time to do anything else. No one else will be able to use the application. In which case, maybe we have wider performance questions, but certainly for requirements around large files, such as 12.1.1, we're going to want to make sure that we're hitting those first, considering those first.

Or maybe it's an integrity question, about users trusting the data in our application. Maybe we're building some currency exchange platform, some money exchange platform. A user can look at the exchange rate ticker: how many U.S. dollars can I get for £1? Now suppose they can not just view that ticker, but change what that ticker shows. If users start making trading decisions based on that altered, incorrect ticker, then those users are going to lose trust in our application, and they're going to drop us and go somewhere else. In which case, we might want to make sure that the principle of least privilege, making sure that what's read-only is definitely read-only, what's writable is writable, and that only the correct users can access the functions they're allowed to access, as set out in 4.1.3, is going to be key to our considerations for this particular feature. Again, we're already at the stage where we've customized. We should have already said, these are the requirements that are relevant to this particular feature. Now, maybe we're zeroing in even more and saying, we know these requirements are important from this split, and we know this subset of those requirements is especially important, because it's specifically indicated by the threat model as a potential issue.
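One way to wire the threat model into the requirements process is a simple tagging scheme. The sketch below, with illustrative mappings echoing the three impacts just discussed, orders a feature's requirement set so the ones tied to the documented worst-case impacts come first.

```python
# A sketch of tying a documented threat model to requirement priority:
# each worst-case business impact tags the ASVS requirements that most
# directly mitigate it, so those float to the top for a given feature.
# The mappings here are illustrative; yours come from your own threat model.
THREAT_MODEL = {
    "customer-data-exposure": {"V4.2.1"},   # insecure direct object refs
    "prolonged-downtime":     {"V12.1.1"},  # large file upload limits
    "data-integrity-loss":    {"V4.1.3"},   # principle of least privilege
}

def prioritize(feature_reqs: set[str], impacts: list[str]) -> list[str]:
    """Order a feature's requirements so those linked to the threat
    model's top impacts come first."""
    critical = set().union(*(THREAT_MODEL[i] for i in impacts)) if impacts else set()
    return sorted(feature_reqs & critical) + sorted(feature_reqs - critical)

print(prioritize({"V4.1.3", "V4.2.1", "V5.1.3"}, ["customer-data-exposure"]))
# -> ['V4.2.1', 'V4.1.3', 'V5.1.3']
```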

Tradeoffs

The other thing to think about is tradeoffs for prioritization. We need to think about what we want to consider when we say, I'm going to do this requirement, I'm not going to do that requirement. I'm not going to prioritize this, I'm going to prioritize that. Difficulty versus criticality. We want to think about, how important is a requirement versus how difficult is it to implement? If you've got a requirement that's going to bring no benefit, and it's going to be really difficult, then of course, that's not a question, we're probably not going to go for that, now or ever. Then it becomes a question of, what if we've got a very important requirement, but it's really hard to implement? Maybe we've got a slightly less important requirement, but it's super easy, it's a nice quick win. I would certainly recommend trying to balance between the quick wins and the longer-term wins. On one hand, you want to show progress, you want to show that some controls have been implemented. On the other hand, you can't lose sight of the more difficult tasks that are still important. You want to make sure that you can progress on both those fronts. Which brings us to perfect versus good. We don't want to be like the people who are constantly complaining about a security control: "This security control isn't 100% effective, because on this edge case, or on that edge case, it won't work. It's terrible." Password managers are a great example. People love to say, "In this very niche case, a password manager won't protect your password." That doesn't invalidate the use of password managers. It doesn't mean that what you should do instead is use the same password everywhere, and then write it on a Post-it note on your laptop. An imperfect control is better than no control. Incremental improvement is always going to be an easier way of getting there. Again, it gives us these quick wins. It gives us an initial stage that we can then progress from and say, we've done a basic version of this control, of this requirement, and we're going to enhance it going forward.
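If you want to make that tradeoff repeatable rather than ad hoc, even a crude scoring pass helps. This sketch uses assumed 1-5 scales and an assumed value-for-effort formula: it surfaces a couple of quick wins while keeping the difficult-but-critical items visible on the plan.

```python
# A sketch of the difficulty-versus-criticality tradeoff as a scoring
# pass: pick a few quick wins (high value for low effort) first, then
# order the rest by raw criticality so hard-but-important work isn't
# silently dropped. The scales and formula are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Candidate:
    requirement: str
    criticality: int  # 1 (nice to have) .. 5 (threat-model critical)
    difficulty: int   # 1 (trivial) .. 5 (major project)

def plan(candidates: list[Candidate], quick_wins: int = 2) -> list[str]:
    """Quick wins first (best value-for-effort), then by criticality."""
    by_value = sorted(candidates, key=lambda c: c.criticality / c.difficulty,
                      reverse=True)
    wins = by_value[:quick_wins]
    rest = sorted((c for c in candidates if c not in wins),
                  key=lambda c: c.criticality, reverse=True)
    return [c.requirement for c in wins + rest]

print(plan([
    Candidate("V4.2.1", criticality=5, difficulty=4),
    Candidate("V12.1.1", criticality=3, difficulty=1),
    Candidate("V6.2.1", criticality=4, difficulty=5),
]))
# -> ['V12.1.1', 'V4.2.1', 'V6.2.1']
```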

There's also a cultural aspect here. We don't want to have to battle about every single security requirement, every single issue. We need to consider, what's the team's current security appetite? How much of their time, bandwidth, and mind space has been occupied recently with security issues? Do we want to add to that, or can we delay that slightly? I've seen teams that are just completely drowning in security issues of one sort or another, especially tool output. Tool output is great for making developers very angry. You get 1,000 results from a static analysis tool, from a code scanning tool. Developers are like, "No, this is nonsense." They have to spend a day wading through that, and now they're really angry with security in general. You want to avoid that situation to begin with. The wider point here is we want to think about, how much has security been on these developers' minds? Can we avoid overloading them? Can we say, let's talk about the most important thing today; this other thing, let's talk about it tomorrow, or next week? We don't want to have to fight all of our battles at once.

Exercising Balance

With that being said, it will always be harder to do this later. We don't necessarily want to have to go back and add security controls after we've already deployed something. It's going to be potentially technically more complex. It's going to be hard to get buy-in to go back to that. You're always going to have new security things as well. There is a balance to be had here. We don't want to try and do everything at once. We also need to be mindful that we don't want to just push things to later and expect that we're going to have time then. Hopefully, through these customization and prioritization mechanisms, that gets a little bit easier, and you can have a better way of coming up with that decision.

Security Backlog

One thing I would recommend is having a security backlog. We know about the product backlog, where we keep a list of all the features we haven't implemented yet but want to implement at some time in the future. We can keep a security backlog as well. These are the requirements and mechanisms that we know we should be doing. We haven't got them implemented now, but we want to implement them in the future. We probably want to record, how much effort do we expect it will take to implement each one? How difficult will it be? How complicated is it? How much research does it require?

Maybe add a little about what it involves, what sort of work it is, so it's a little bit clearer what each item actually entails. The product backlog tends to be a little bit free-form, and product managers decide what they want to take here and there. With the security backlog, you probably want to be a little bit more strict. You probably want to say, this needs to be implemented before the next release. This can maybe wait until this particular deadline, three months, six months. This can just be prioritized alongside everything else. We probably want to be slightly stricter. We don't want to just leave it to someone's discretion of, I'll pick this up when I've got time. That means we need to be monitoring these service level objectives. If our objective is by the next release, we need to be monitoring, did we get all those requirements implemented in the next release? If it's a few months, did we get these requirements implemented by that point? By monitoring that, and then reporting back, we can assess, how well is the backlog working for us? How well are we managing to keep on top of our security requirements, while also maintaining velocity? If we're seeing that too much is being left by the wayside and objectives aren't being hit, maybe we need to take a stricter approach earlier on, and prioritize more things earlier on.
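A sketch of what such a stricter security backlog might look like follows. The deadline classes and fields are assumptions, but the idea is that every deferred requirement carries an effort estimate and a hard due date, and a monitoring pass reports anything that slipped its objective.

```python
# A sketch of a security backlog with stricter service level objectives
# than a normal product backlog: each deferred requirement carries an
# effort estimate and a deadline class, and a monitoring pass reports
# anything past its SLO. Deadline classes and fields are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

DEADLINES = {"next-release": 30, "quarter": 90, "best-effort": 365}  # days

@dataclass
class BacklogItem:
    requirement: str
    effort_days: int
    deadline_class: str
    added: date
    done: bool = False

    def due(self) -> date:
        return self.added + timedelta(days=DEADLINES[self.deadline_class])

def overdue(backlog: list[BacklogItem], today: date) -> list[BacklogItem]:
    """Items past their SLO -- the signal that prioritization needs to
    get stricter earlier in the process."""
    return [i for i in backlog if not i.done and today > i.due()]

backlog = [
    BacklogItem("V6.4.1", effort_days=10, deadline_class="quarter",
                added=date(2023, 1, 10)),
    BacklogItem("V12.1.1", effort_days=2, deadline_class="next-release",
                added=date(2023, 3, 1)),
]
for item in overdue(backlog, today=date(2023, 6, 1)):
    print(f"SLO missed: {item.requirement} was due {item.due()}")
```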

In summary, for that section, we want to prioritize. We want to figure out what our worst-case scenario is, and therefore, which requirements we want to consider first. We want to think about the tradeoff considerations. Do we want to focus on the most critical versus the most difficult? How can we balance between those? How can we balance between getting quick wins, or an imperfect control, and the longer term, when we get a more perfect control? Also, think about the risk appetite of teams. And if this prioritization means we're going to delay certain items, how do we track that? How do we keep a backlog? How do we make sure that we've got tracking of that going forward? Certainly in Lumber, Plc., they hadn't done prioritization, they were just giving everything every single time. They didn't really have a mechanism to say, here are the most important things at this point in time, and here's what you can delay. They also didn't really have a mechanism for having that discussion, that tradeoff discussion. The idea was, ok, you need to do all these security requirements. This is what you said you were going to do, so now you're going to do it. There wasn't that ongoing engagement, that ongoing discussion about, how do we maintain our velocity while also actioning the most important items? The final big question is the today-and-tomorrow thing. Lumber went in for three months to go and implement this. They did a big bang approach to training, where they said, developers are going to do this training, we've told them what to do, and off we go, the process is now running. We need to make this operationalized. We need to have a reusable way of doing this. We need to make sure that it works going forward as well.

Security Fragmentation

One of the big challenges here is fragmentation. We know that certain security challenges, certain security requirements, repeat themselves over and over again. How you authenticate users, how you validate their permissions, how you make untrusted content safe in a particular context, these are all things that repeat. If we have different solutions for these problems in all different places, then we end up with a bunch of problems. We don't have a unified way of doing this. We don't have a unified policy on how to do this. We don't have one place where we decide what that configuration is going to look like, or which things are considered safe and which things are not. It's going to be a lot harder to test these items as well. It's going to be a lot harder to say, is this operating correctly? If we decide to change policy, we now need to go and change it in lots of different places. Maybe there's inconsistency between developers as well. One developer thinks this is the standard, another developer thinks that's the standard, and they're trying to communicate while working from different rulebooks. Ultimately, it leads to a much more complex and risky situation.

Unified Solution

I think it's very important to have some form of unified solution. When we're solving a particular security problem, we want one specific solution. Ideally, that should be across the organization, across the company. Even if we're using different languages, maybe it's a different underlying library that's doing this, but the rules should be the same. The rules that define how it works should be the same, and they should be centrally documented and centrally managed. Ideally, we want one single source of truth. We want a one-stop shop where developers can go and say, these are the organization's development security policies, these are the libraries or the rules I need to comply with. With one place, it's easy to maintain, and it's easier for developers to find.

Documented via ASVS

If we're already using the ASVS as a way of documenting what we think developers should be doing, we can add to this information as well. We can have this specific solution whereby we're saying, this is the ASVS requirement.

Here's how we do it in our organization. Effectively, adding the how to the what. You can also add extra requirements. Maybe there are specific security requirements that aren't mentioned in the ASVS but are relevant for your organization; you can make sure to add them there as well. For example, maybe we've got requirement 2.4.1 from the ASVS about how passwords are stored. Maybe Lumber, Plc., could have had specific libraries that they used to perform that storage, to hash, to validate. Suddenly, we're not telling developers, "Use this algorithm, use this mechanism, use this configuration," because it's all done by this library. This library is already handling it. The developers are seeing that requirement alongside how they do it. There's no question of, ok, now I need to go and look this up in OWASP, now I need to go and look it up on Google, how do I do this in my language? Because it's already there. It's already centrally stored. I as a developer have this one-stop shop to go to.
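For instance, the internal password library might look something like this sketch. Using the argon2-cffi package is my assumption here, not something the talk prescribes; the point is that the algorithm and its parameters live in exactly one place, and callers never choose them.

```python
# A sketch of the kind of internal wrapper described for password storage:
# developers call the organization's library, and the algorithm,
# parameters, and tuning live in one central place. The choice of the
# argon2-cffi package is an assumption for illustration.
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

_hasher = PasswordHasher()  # central place to tune time/memory cost

def store_password(plaintext: str) -> str:
    """Return the hash to persist. Callers never pick an algorithm."""
    return _hasher.hash(plaintext)

def check_password(stored_hash: str, plaintext: str) -> bool:
    """True if the password matches the stored hash."""
    try:
        _hasher.verify(stored_hash, plaintext)
        return True
    except VerifyMismatchError:
        return False

h = store_password("correct horse battery staple")
assert check_password(h, "correct horse battery staple")
assert not check_password(h, "wrong password")
```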

Another example, output encoding. Very complicated, lots of different ways of doing it in different contexts. Maybe we know that all of our HTML has been written using Angular or React, and we tell developers, you have to use a standard binding. You need to be using a standard binding because the standard binding will protect you. If you're using a non-standard binding, you need to get that reviewed specifically. So they know, if I'm using a standard binding, I don't need to worry. I don't need to get too concerned about how I'm doing this. If I suddenly do something non-standard, if I start rendering HTML in an unusual way, I know that I need to go and speak to the security team and figure out how we do that.
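A Python analogue of that "standard binding" rule might be a tiny central helper that HTML-encodes untrusted values by default (markupsafe is the assumed library here), so any raw, non-standard rendering has to be explicit and stands out in review.

```python
# A sketch of centralizing the output encoding rule: untrusted values
# always pass through escape() on the way into HTML, so safe rendering is
# the default path and anything bypassing it is visibly non-standard.
from markupsafe import escape

def render_comment(author: str, body: str) -> str:
    """Build a comment snippet with all untrusted values HTML-encoded."""
    return f"<p><b>{escape(author)}</b>: {escape(body)}</p>"

print(render_comment("mallory", '<script>alert("xss")</script>'))
# The script tag arrives as inert &lt;script&gt;... text, not markup.
```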

Other examples: the ASVS contains all sorts of requirements about security headers. Maybe Lumber has a unified reverse proxy that all applications go through. That proxy will add those security headers automatically, so the developers don't need to worry about them. They just need to know, if we're using this proxy, these headers are going to be added, and these requirements are ticked off our list as well. In this case, you may not even show the developers the requirements; maybe they don't need to see them. Maybe they don't need to take any action, they just need to use the reverse proxy. Again, we're giving them the how, not just the what. That means they're not just knowing how to do this now, they know how to address these requirements in the future.
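For teams without such a shared proxy, the same "do it once, centrally" idea can live in application middleware. Here is a minimal WSGI sketch; the header set is illustrative, so take the actual list from the ASVS requirements you've adopted.

```python
# A sketch of adding security headers once, centrally, as WSGI middleware
# instead of per route or per team. The header set below is illustrative;
# derive the real list from your adopted ASVS requirements.
SECURITY_HEADERS = [
    ("X-Content-Type-Options", "nosniff"),
    ("X-Frame-Options", "DENY"),
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
]

class SecurityHeadersMiddleware:
    """Wraps any WSGI app and appends the central header set."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def _start(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS,
                                  exc_info)
        return self.app(environ, _start)

# Usage: app = SecurityHeadersMiddleware(app) -- every response from every
# route now carries the headers, and no individual team has to remember.
```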

Lumber, Plc., had this big bang approach. We go in, we talk, we go out again, and everything gets done. We need to make this work on an ongoing basis. We need to be able to give our developers the tools to be able to do that today, tomorrow, next month, next year. All the while, we're using somewhere centralized, so we can keep it maintained, keep it updated, add in the latest guidance, and we only have to do it once. We're not having to go to every single team and say, this is the new guidance, here's how you should be doing it now.

Summary

We talked about four main ideas here. Contextualizing. Making sure that we're clear about how security fits in. Customizing, making sure that the ASVS is focused to a particular use case, to a particular feature, to a particular functionality that is currently being worked on. Prioritizing, defining, what's most urgent here? What are the requirements that are addressing our biggest issues? Finally, operationalizing. Finding ways that we can make this reusable. Finding ways that we can give the how and not just the what. We can give developers actionable guidance alongside the requirements that we're giving them.

Key Takeaways

Tailor the ASVS to your needs. Consider security as a quality characteristic. Identify what's most concerning to you. Use that to prioritize, and find ways of making security reusable, applicable, and operationalized for the future, so that developers can take it in bite-sized chunks and carry on using it.

 


 

Recorded at:

Jan 12, 2024
