Security Vulnerabilities Decomposition


Summary

Katy Anton looks at security vulnerabilities from a different angle, flipping security from focusing on vulnerabilities (which can only be measured at the end) to focusing on the security controls that developers can use from the beginning of the software development cycle.

Bio

Katy Anton is a security professional with a background in software development. In her current role as Application Security Consultant at Veracode, she works with security teams and software developers around the world and helps them secure their software.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Anton: How many of you know companies that implement some security because the company needs to comply with certain regulations, GDPR, PCI DSS? The majority, which is exactly what I see a lot of out there. At this point, security in most cases is a checkbox, and it is there for compliance reasons. The security team will come and create compliance policies, which contain the security vulnerabilities each company is interested in complying with: SQL injection, cross-site scripting, or any other type of vulnerability. In most cases, these are some of the OWASP Top 10, or even all of the OWASP Top 10. Now we have these compliance policies, which are then used to check your software. For this, they sometimes use static code analysis or SAST, dynamic code analysis or DAST, or even manual penetration testing. Then the reports are compiled, usually huge reports that contain lots of vulnerabilities, and sent to you, the developers, to fix them. How many of you receive those huge reports, in an unfriendly format in most cases, for you to sort out?

When the Report Is Published

I totally understand how you feel because this is exactly what happened to me in my previous job when I joined as a newbie. Soon after I joined that company, I was given a manual penetration testing report and asked to sort out all those flaws, which were not few. We have all of these flaws in this huge report to sort out. At that point, there is a shift, because as developers we start thinking of those vulnerabilities. We want to make those vulnerabilities disappear from our software. The problem with this approach of thinking about vulnerabilities is that we still produce a high number of insecure applications, and injection, for example, is still king. In the latest OWASP Top 10, 2017, injection is still in pole position. There is a problem, because we can test for these security vulnerabilities only after we have written the software. After all, you cannot check for SQL injection until you have actually written the piece of code that interacts with the database. The question is, can we do something else about this? Is there any other way that we can look at this problem?

This is what we will explore as part of this presentation, where we will look at security vulnerabilities from a different angle. We will decompose the vulnerabilities into the security controls that prevent them, controls that are familiar to you and that you can use while you write the code. The idea is to move from focusing on security vulnerabilities, which can be measured only at the end, after the software has been developed, which is way too late, to focusing instead on the security controls that we have identified. You can verify them, and you can use them from the beginning in your software.

Background

My name is Katy Anton. I come from a software development background. That is when I joined as project co-leader of the OWASP Top 10 Proactive Controls, because for me and for my team, the OWASP Top 10 did not work. The OWASP Top 10 Proactive Controls is a project for developers that should be used in any software development project. I currently work as an Application Security Consultant at Veracode. As part of this role, I help developers around the world secure their code and correctly fix various vulnerabilities found in their software.

Common Weakness Enumeration

The way we measure the security of an application is primarily by using the Common Weakness Enumeration, or CWE. This is probably something that you have heard security professionals talk about a lot: CWE. What is this? It is a formal list that gives a common language to describe software security weaknesses, which are introduced at the architecture, design, or code stage. This is what the CWE is: various security weaknesses introduced in the software at various stages. CWEs have been studied for a long time. They have been organized and classified: which ones come from architecture, which ones are introduced at code level. There is actually a nice classification which you can find in the NVD database. The point is not for you to look at it in detail; the point that I'm trying to make is that there has been a lot of classification and analysis of all these CWEs.

CWEs in Injection Category

Let's go into one of these categories in detail. The biggest one is the injection category. It is still one of the most common vulnerabilities found in software applications today. This is a broad category. It contains multiple types of injection. You have command injection, cross-site scripting, XML code injection, CRLF injection, LDAP injection, SQL injection. If we go into any of these, there is further classification. For example, in the case of SQL injection, depending on how the attack is performed, you can have in-band injection, where the attack and the exploit use the same channel, or out-of-band injection. There are lots of classifications and types of injection. It can become overwhelming for you as developers to think of all of these types when you write your code. I know from my experience as a developer that when you write your code, your entire focus is to deliver that particular functionality. You cannot think of all of these types of injections and vulnerabilities that are out there.

Decompose the Injection

Is there anything else that we can do? Is there another way to deal with this? To answer these questions, I will start with a very basic definition of what injection is. We go back to the basics. If we go to the very basics, injection occurs when you have some data, which is then combined with a syntax, and that result is sent to a parser. The data comes not only from Get or Post, but also from file uploads, HTTP headers, and from sources like databases or configuration files. All this data from a wide variety of sources is combined with a syntax and is sent to a parser. If we want to store the data in the database, then it is sent to the SQL parser. If we want to render a web page, it is sent to an HTML parser, the browser, and so on. That's how we can end up with the initial data, the input, being executed as part of the code in that output.

Extract Security Controls

Next, I'd like to focus a little bit on this output, the red bit, because that's where we end up with the data being executed as part of the code. I'd like to take this view and flip it so we can focus on the red part. In the case of SQL injection, you create the command by having data combined with the SQL query. The best defense to prevent SQL injection is to parameterize the data, by separating the data from the actual SQL query. That is sent to the parser. The parser can now differentiate between the data input and the actual SQL command. The defense happens at the parser level. This is why query parameterization is the best defense for SQL injection: because the defense happens at the parser level. As Defense-in-Depth, we still have to validate the input. In the case of cross-site scripting, this occurs when you have input combined with HTML syntax to create the HTML page. The best defense for this is to contextually encode the output to neutralize the characters that can trigger code execution. For this reason, output encoding is the primary defense, and input validation we still apply as Defense-in-Depth. Similarly for XML injection, code injection, LDAP injection, and command injection.
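
To make these two primary defenses concrete, here is a minimal Java sketch (not from the talk itself). It assumes a JDBC connection and the OWASP Java Encoder library on the classpath; the table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.owasp.encoder.Encode; // OWASP Java Encoder, assumed to be on the classpath

public class AccountLookup {

    // Query parameterization: the user-supplied id travels as a bound parameter,
    // so the SQL parser can always tell the data apart from the command.
    public String findEmail(Connection conn, String accountId) throws SQLException {
        String sql = "SELECT email FROM accounts WHERE account_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, accountId); // data is bound, never concatenated into the query
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }

    // Contextual output encoding: neutralize characters that could start
    // script execution before the value is written into an HTML page.
    public String renderGreeting(String displayName) {
        return "<p>Hello, " + Encode.forHtml(displayName) + "</p>";
    }
}
```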

Instead of focusing on all those types of vulnerabilities that you will end up having in your reports, what I'd like you to focus on instead are the security controls that help us prevent those vulnerabilities in the first place. For example, to prevent injection, use data parameterization as the primary defense if it's available. If it's not available, then use encoding. As Defense-in-Depth we still have to validate the input, because if we do this too, consistently, we may prevent vulnerabilities that we might not necessarily be aware of. A good example of this is a second order SQL injection. This is a type of injection where the injection payload is recorded and stays dormant in the database until it finds the right environment to be exploited.
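
Input validation as Defense-in-Depth can be as simple as an allowlist check applied before the value is used anywhere. Below is a minimal, hypothetical sketch; the order-id format is invented purely for illustration.

```java
import java.util.regex.Pattern;

public final class InputValidator {

    // Allowlist pattern: order ids in this hypothetical application are
    // 6 to 12 alphanumeric characters; anything else is rejected up front.
    private static final Pattern ORDER_ID = Pattern.compile("^[A-Za-z0-9]{6,12}$");

    public static String requireOrderId(String candidate) {
        if (candidate == null || !ORDER_ID.matcher(candidate).matches()) {
            // Rejected input is also a good candidate for the security logging
            // discussed later in the talk.
            throw new IllegalArgumentException("Rejected order id input");
        }
        return candidate;
    }
}
```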

Lack of Intrusion Detection

The next category that I would like to discuss is intrusion, or better said, the lack of intrusion detection. The problem that we see out there quite a bit is that logins, high value transactions, and other critical events are not logged. If there is some logging, then another problem is that the format of the logs is not consistent enough within a company to allow the operations team to aggregate all of those logs, centralize them, and process them in a reasonable amount of time to get some meaningful data about any suspicious activity. To put it simply, if a Pen tester is able to get into a system without being detected, that is a very good indication that there is not enough logging and monitoring in place for that particular system.

Security Controls: Security Logging

What can you, as developers, do about this? For this, you have security logging. This is the control that helps you log security information during the runtime operation of an application. Let's go into a little more detail to see what exactly I mean by security logging. There is a very nice OWASP project called AppSensor. It has two parts: one of them is a tool, and the other one is the documentation. According to that project, there are six types of detection points which are considered good attack identifiers. These are authorization and authentication failures, client side input validation bypasses, whitelist input validation failures, obvious code injection attacks, like when somebody sends an obvious SQL injection string or cross-site scripting injection string, and a high rate of function use. The last one is when you have a high number of requests for a certain page in a very short period of time.

Examples of Intrusion Detection Points

Let's go into a little more detail on some of these, to give an example of what I mean and what exactly you can do when you go back to your companies. In the case of the input, if your application expects to receive Post, but instead it receives Get, that's a very good indication that somebody has intercepted that communication and intentionally changed it from Post to Get. That type of anomaly, that exception, should be logged. This is something that a Pen tester might do as well, because if they change from Post to Get, they can actually automate those requests. Another one is additional form or URL parameters. Another thing that a Pen tester would do is add debug = true. If you detect on the server side that you have extra parameters, that's a good indication that they were intentionally added. That type of exception, again, should be logged. In the case of authentication, if extra form or URL parameters are sent during the authentication, then something that a Pen tester would do is try admin = true: let's see what happens if on top of these variables that I have sent, I also send admin = true. If this is detected, this is another type of exception that should be logged. Another example is when the application expects to receive two variables, the username and password, and instead it receives only one of them, the username, because the password has been removed. This is another type of exception that should be logged.
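
Here is a minimal sketch of such a detection point, assuming a Servlet 4+ container and SLF4J for the security log; the endpoint, the expected parameters, and the logger name are all hypothetical.

```java
import java.io.IOException;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// An AppSensor-style detection point: the login endpoint is expected to receive
// POST with exactly these parameters; anything else is logged as a possible
// tampering attempt before the request is processed.
public class LoginDetectionFilter implements Filter {

    private static final Logger SECURITY_LOG = LoggerFactory.getLogger("SECURITY");
    private static final Set<String> EXPECTED_PARAMS = Set.of("username", "password");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;

        if (!"POST".equals(http.getMethod())) {
            SECURITY_LOG.warn("Unexpected HTTP method {} on login from {}",
                    http.getMethod(), http.getRemoteAddr());
        }
        for (String param : http.getParameterMap().keySet()) {
            if (!EXPECTED_PARAMS.contains(param)) {
                SECURITY_LOG.warn("Unexpected parameter '{}' on login from {}",
                        param, http.getRemoteAddr());
            }
        }
        for (String expected : EXPECTED_PARAMS) {
            if (http.getParameter(expected) == null) {
                SECURITY_LOG.warn("Missing expected parameter '{}' on login from {}",
                        expected, http.getRemoteAddr());
            }
        }
        chain.doFilter(req, res);
    }
}
```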

Let's move on to some examples of input exceptions. Let's consider that the input validation on the server side fails, despite the fact that there is client side validation. As an example of this, imagine that you have created a form, and in one of the elements of that form you have a maximum length defined. However, when that string reaches the server, the length is greater than what was defined on the client side in the form. This is a very good indication that somebody has intercepted the request after it left the client and intentionally changed that particular string. That type of exception should be logged as well. The same applies when the validation on the server side fails for known user editable fields: radio buttons, checkboxes, hidden fields. A very good example of this is an e-commerce application where, in the shopping basket, there is a hidden field, price; that can be very tempting to play with. That as well should be validated on the server side to ensure that it is the value that is expected.
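
A minimal sketch of re-validating a hidden price field on the server side follows; the in-memory catalog and product id are hypothetical stand-ins for a real price lookup.

```java
import java.math.BigDecimal;
import java.util.Map;
import java.util.logging.Logger;

public final class BasketValidator {

    private static final Logger SECURITY_LOG = Logger.getLogger("SECURITY");

    // Hypothetical server-side catalog standing in for a real price lookup.
    private static final Map<String, BigDecimal> CATALOG =
            Map.of("BOOK-101", new BigDecimal("12.99"));

    // Never trust a hidden form field: re-derive the price on the server and
    // log a security exception if the submitted value differs or is malformed.
    public static BigDecimal priceFor(String productId, String submittedPrice) {
        BigDecimal trusted = CATALOG.get(productId);
        if (trusted == null) {
            throw new IllegalArgumentException("Unknown product " + productId);
        }
        boolean tampered;
        try {
            tampered = submittedPrice == null
                    || trusted.compareTo(new BigDecimal(submittedPrice)) != 0;
        } catch (NumberFormatException e) {
            tampered = true;
        }
        if (tampered) {
            SECURITY_LOG.warning("Tampered or missing price field for " + productId);
        }
        return trusted; // always charge the server-side price
    }
}
```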

During my previous job, I actually asked my developers to break their own code, the code that they had actually created, after they got some basic training in how to use basic tools like Burp to intercept communication and change the data. During that exercise, the developer who was working on the e-commerce site actually discovered that he could purchase items without paying the VAT, because that hidden field was not validated on the server side. That's something that you can also do when you go back. Try to break your own code and see how many of these hidden fields you validated. Then put the validation in place and log the exceptions. These are just a few examples. There are more in the OWASP AppSensor project.

Secure Data Handling: Basic Workflow

If we are to recap these two categories, injection and intrusion detection, and map a basic workflow: every time our application receives data, we should validate it. Any exceptions we should log, because by logging and putting these exceptions in place, what we are actually doing is giving the software the mechanisms to respond in real-time to the possible attacks we have identified. We can reduce or even stop these attacks, depending on how we choose the software to behave at that point. Any output we should contextually encode, to neutralize the characters that can trigger code injection. Any time we store data in the database, we should parameterize it to separate the data from the actual command.

Data at Rest and in Transit

The next category is sensitive data exposure. I'll look at both data at rest and data in transit. When it comes to data at rest, there are two types of data that we need to store. One is where we need to know the initial value, like a credit card number; that must be encrypted. The other is where we don't need to know the initial value, like a user password; that should be hashed. All data in transit should be transferred over an encrypted channel. When it comes to encrypting data at rest, a challenge that I see out there is how to correctly implement this, and how to securely store the keys that are then used for encrypting and decrypting that particular data.
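
As an illustration of hashing rather than encrypting a stored password, here is a minimal Java sketch using the JDK's built-in PBKDF2 implementation; the iteration count and key length are illustrative, not a recommendation.

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public final class PasswordHasher {

    // Illustrative parameters only; pick values that follow current guidance.
    private static final int ITERATIONS = 310_000;
    private static final int KEY_LENGTH_BITS = 256;

    public static String hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // unique salt per password

        KeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec)
                                      .getEncoded();

        // Store salt and hash together; the original password is never recoverable.
        return Base64.getEncoder().encodeToString(salt) + ":"
                + Base64.getEncoder().encodeToString(hash);
    }
}
```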

Data at Rest: Design Vulnerability Example

I'll go through an example that I discovered in one of my consultations, when, during the review of a piece of software, we discovered that in the same folder there were two files. One of them, the encrypted password, was the actual encrypted data. The other one, the password entities, was quite intriguing: when we opened it, it contained three elements, a seed, a salt, and iterations. It turned out that all of these were used in conjunction with a password-based key derivation function, which is a hashing algorithm, to generate the key. In the same folder, we had both the encrypted data and everything we needed to actually generate the key for that data. If an attacker got access to that particular folder, they would have everything: the encrypted data, which were the credentials for the database, and the means to decrypt it. They could actually get access to the database as well. Happy days for everyone. This is an example of a vulnerability at the design stage, and it takes a bit longer to re-design it correctly.

Encryption: Security Controls

When we encrypt data, we need to ensure that we use a strong cryptographic algorithm, and AES is still fine, but we have to store the key completely separately from the encrypted data. We have to actually protect the keys in dedicated vaults. The keys are the secrets; they are used to encrypt and decrypt the data, so they have to be protected in dedicated key vaults. We have to avoid, as much as possible, homegrown solutions, because they can easily go wrong in the case of encryption. It's also important to define a key lifecycle, and to implement the ability to change algorithms and change the keys in the software, because if a key has been compromised, you will have to do that anyway. It's better to have this implemented already. Also, document this entire process, so when it happens, you know exactly what to do; you've already done it, and it's already documented.
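
Here is a minimal sketch of encrypting a field with AES-GCM while the key itself lives in a separate key store; the KeyVault interface is a hypothetical stand-in for whatever vault or KMS the application actually uses.

```java
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldEncryptor {

    // Stand-in for a real vault or KMS: the key never lives next to the data.
    public interface KeyVault {
        SecretKey currentKey(String keyId);
    }

    private final KeyVault vault;

    public FieldEncryptor(KeyVault vault) {
        this.vault = vault;
    }

    public byte[] encrypt(String keyId, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // fresh nonce for every encryption

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, vault.currentKey(keyId), new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Prepend the IV so it can be supplied again at decryption time;
        // only the key stays in the vault, referenced by keyId.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```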

Data in Transit: Security Controls

When it comes to data in transit, we're getting pretty good at encrypting the communication between the client and the server side. Especially now, with Let's Encrypt, there is no excuse not to have the transfer over HTTPS. I still see a problem with the transfer between the application server and non-browser components like the database, especially when that one is behind the firewall. It is important to ensure that any communication is done over an encrypted channel, including behind the firewall, because there is precedent for intruders being on a network. Probably the best known case is Marriott, where the intruder was in the network for over 4 years. You never know what's on the network. As developers, it is still our responsibility to make it as difficult as possible for someone who tries to get into our application. Encrypting the communication behind the firewall is very important as well.
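
A minimal sketch of encrypting the application-to-database hop follows; the connection properties shown are the PostgreSQL JDBC ones (other drivers use different flags), and the host name is hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EncryptedDbConnection {

    // Encrypt the application-to-database hop as well, even inside the firewall.
    // Property names below are the PostgreSQL JDBC ones; other drivers differ.
    public static Connection open() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", System.getenv("DB_USER"));
        props.setProperty("password", System.getenv("DB_PASSWORD"));
        props.setProperty("ssl", "true");
        props.setProperty("sslmode", "verify-full"); // verify the server certificate, not just encrypt

        return DriverManager.getConnection(
                "jdbc:postgresql://db.internal.example:5432/appdb", props);
    }
}
```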

Using Software Components with Known Vulnerabilities

The next category of issues is third-party components, or using software components with known vulnerabilities. When I refer to these known vulnerabilities, these are vulnerabilities that have been published and are available for everyone to see, in publicly available databases like NVD, the National Vulnerability Database. They are out there for everyone to read and understand how to exploit. Even more, in some cases there are already ready-made exploits, and it's very easy to automate them as well. In terms of finding out which libraries your software uses, and which of those versions are vulnerable, there are quite a few tools out there for you to use. There are commercial tools, but there are also free tools like OWASP Dependency-Check that you can start using on your software. The problem I see is that when the results come back with lots of vulnerable components, it takes a long time to do something about those particular components.

There has been a report where over 85% of .NET applications and 92% of C++ applications have at least one component with one known vulnerability. It usually takes quite a long time to do something about it. I will go into a little more detail about what you can do to get on top of these components and start upgrading them, or even replacing them if need be.

Root Cause

The type of software that has this problem is the type of software that no developer wants to touch because it's so difficult to understand. It's the type of software that is so easy to break: you change one thing in this part of the software and something else breaks in a completely different part. It is the type of software that is very difficult to test. You might have some manual testing, but with very low coverage. All of these problems make it very difficult to upgrade, and in the end they lead to a high level of technical debt. How many of you have seen this type of software? Quite a few.

What Is Attack Surface?

I'm going to introduce another concept from security: the attack surface. When it comes to software, we refer to the attack surface as all of those points which can be used by an attacker to enter data into or extract data from a system. In security, we have a very simple principle which says that we need to ensure we have a minimal attack surface. When we bring in components, especially the type of components that have some interaction, whether with devices or with a UI, that's a way of increasing your attack surface. We need to ensure that when we bring them in, we do it in a manner that minimizes this attack surface.

Components Examples

The first example is using an open source library, something that you will commonly use, like a logging library. The next one is using a vendor API, again something that is used on a common basis. The third one is using a complex package that has been developed by another team within the same company. In large companies, it is common for one team to develop a library which is then reused across a wide range of applications.

Example 1: Implement Logging Library

I'll start with a logging library. This is something that you would bring in and use on a common basis. When you bring in a new logging library, as a ready-made library it has a wealth of functionality. It's highly likely that your software will not need all of that functionality. For example, in the case of a logging library, your application might need only three logging levels, WARN, DEBUG, and INFO, and that's all you need. For this type of scenario, you want to expose to your software only a subset of the functionality provided. A good software design pattern is the Simple Wrapper. This gives you the ability to expose to your software only the functionality that you need, and to hide unwanted behavior. If there is a vulnerability, but it is not exposed to your code, then your code will not be exposed to that particular vulnerability. It's also a good way to reduce your attack surface. There is a good book by Robert C. Martin, "Clean Code", which gives details, in the "Clean Boundaries" section, on how to do this and how to have unit tests around the boundary. If it's done correctly, you can easily upgrade this particular library, and if it is not used anymore, if it has been deprecated, you can also replace it without much penalty. This is what we are looking for: to get in control of the components that we bring into our software, so we can upgrade them, or even replace them, without much penalty, and in the end reduce the technical debt. That's what it's all about.
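
A minimal sketch of such a wrapper, assuming SLF4J as the example underlying library; the class and method names are hypothetical.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Simple Wrapper ("Clean Boundaries"): the rest of the codebase talks only to
// AppLog, so the underlying logging library can be upgraded or replaced behind
// this one class without touching the callers.
public final class AppLog {

    private final Logger delegate;

    private AppLog(Logger delegate) {
        this.delegate = delegate;
    }

    public static AppLog forClass(Class<?> owner) {
        return new AppLog(LoggerFactory.getLogger(owner));
    }

    // Only the three levels this application actually needs are exposed;
    // the rest of the library's surface stays hidden.
    public void info(String message)  { delegate.info(message); }
    public void warn(String message)  { delegate.warn(message); }
    public void debug(String message) { delegate.debug(message); }
}
```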

Example 2: Implement a Payment Gateway or API

The next one is a payment gateway or an API. There is a trend these days: if an attacker wants to get to a target that might be a little more difficult to reach directly, one way to get to it is through a partner. A vendor API is a very good example. Attackers can get through a vendor API to their intended target, and the API itself can be compromised as well. When we bring in a vendor API, we need to do it in such a manner that if something happens, we can change it or shut it down. For this scenario, a good software design pattern is the adapter design pattern, because it allows you to have multiple adaptees at the same time. For example, in the case of e-commerce, you can actually have multiple payment gateways and APIs at the same time. If anything happens with one of them, this also allows you to quickly switch that one off and keep your website up and running. It's another example of a design pattern that can help you stay in control of your software.
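
A minimal sketch of the adapter idea; the gateway interface, the provider SDK, and their method names are all hypothetical.

```java
import java.math.BigDecimal;

// The application depends only on PaymentGateway, so a compromised or
// unavailable provider can be switched off and another adapter plugged in
// without touching the checkout code.
public interface PaymentGateway {
    boolean charge(String accountToken, BigDecimal amount);
}

// Hypothetical stand-in for a vendor SDK with its own, different interface.
class ProviderAClient {
    boolean submitPayment(String token, long amountInCents) { return true; }
}

class ProviderAAdapter implements PaymentGateway {
    private final ProviderAClient client = new ProviderAClient();

    @Override
    public boolean charge(String accountToken, BigDecimal amount) {
        long cents = amount.movePointRight(2).longValueExact();
        return client.submitPayment(accountToken, cents);
    }
}

class Checkout {
    private final PaymentGateway gateway; // swap adapters here, not in business code

    Checkout(PaymentGateway gateway) { this.gateway = gateway; }

    boolean pay(String accountToken, BigDecimal amount) {
        return gateway.charge(accountToken, amount);
    }
}
```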

Example 3: Implement a Single Sign-On

The last one that I'm going to analyze is the case of a library that has been developed by one team and is now reused across multiple applications within the same company. A very good example of this is single sign-on. It can sometimes be quite complicated to bring that one in, and it can be very tricky to remove it if you need to. The question is, what can we do about this? For this, a good software design pattern is the façade, because the façade simplifies the interaction between your application and a complex subsystem. It can also be used for legacy applications and poorly designed APIs. The façade gives you one point of control: if something goes wrong, you can deal with it at that one point. It can also hide away details from the client.
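
A minimal sketch of a façade for single sign-on; all the collaborating classes are hypothetical stand-ins for a real SSO subsystem.

```java
// Façade: callers see one small entry point for single sign-on, while the
// complex subsystem (token verification, directory lookup, session setup)
// stays hidden behind it.
public class SsoFacade {

    private final TokenVerifier verifier = new TokenVerifier();
    private final DirectoryClient directory = new DirectoryClient();
    private final SessionManager sessions = new SessionManager();

    // The only method applications call.
    public String signIn(String ssoToken) {
        String userId = verifier.verify(ssoToken);
        String roles = directory.rolesFor(userId);
        return sessions.create(userId, roles); // returns an application session id
    }
}

class TokenVerifier   { String verify(String token)                { return "user-123"; } }
class DirectoryClient { String rolesFor(String userId)             { return "ROLE_USER"; } }
class SessionManager  { String create(String userId, String roles) { return "session-abc"; } }
```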

Secure Software Starts from Design

As part of this analysis, I've just looked at three types of components and software design patterns. What I would like you to understand is that the security of the software starts from the design, from the point when you choose how you are going to bring a library into your software. From that point, the security of your application starts. You can use a Simple Wrapper if you want to hide unwanted behavior and expose to your software only a subset of the functionality. You can use an adapter design pattern when you want to have multiple adaptees and you want to interchange them. For a complex subsystem, or for poorly designed or legacy APIs, you can use the façade. You can come up with other software design patterns that might work for you, as long as you ensure that you reduce the exposure of your software to that third-party library. If something happens and you need to replace it, you can do this without much effort, because you can upgrade it or replace it without much penalty.

How Often to Verify?

We have discussed some types of vulnerabilities, not all of them. How often should we check for these security controls that we have discussed? To answer this question, I will go through a real example. Rick Rescorla was a U.S. Army officer born in Cornwall. After the army, he decided to move into the corporate world. He worked as Director of Security at Morgan Stanley in the World Trade Center. From the moment he moved into the World Trade Center, he was aware that the building posed unusual security risks, partly because of its sheer size and its symbolism, one of the biggest buildings in New York and in the world. He wanted to better understand the type of security risks this type of building would pose. Together with one of his ex-army friends, they just started walking around the building. His floors were secured nicely, but when they got into the basement, where the garage was, they saw that the doors were wide open. There were no security guards. Trucks were coming in and out as they pleased. They saw that as a weakness in the overall security of the building. He filed a report. Evidently, he was ignored.

On February 26, 1993, terrorists detonated a truck bomb in the basement of the World Trade Center. After the attack, he warned again: they cannot come through the basement anymore, because now there are huge doors which are kept closed; the next time it will be by air. Because he had been right the first time, management actually believed him the second time. The lease contract would end in 2006, and the company did not want to move until the lease ended, but they agreed to implement anything that he wanted until then. Because for Rick, it was not a matter of if there would be an attack, but when; he wanted to have all his people safely exit the building. That was his main objective. From then on, he put in place regular evacuation drills. Every 3 months, he wanted everyone involved to participate in those evacuation drills. Evidently, the worst compliance came from the senior executives, because what this meant for them was that when the drill was on, they had to put the phone down, sometimes in the middle of an important transaction, and go down the stairs, 77 floors. He actually managed to change the culture in the end, and have everyone involved in these regular evacuation drills. On the day of the 9/11 attack, all but six of the Morgan Stanley employees safely exited the building, just as they were trained. Among the casualties were Rick himself and other security officers. Rick's consistently rehearsed evacuation plans saved the lives of 2,687 people on September 11th.

Recap of Security Controls

What about us? Let's go back to our software, the software that we produce. Today, we have software in cars, software in planes, software in medical devices. For this software that we produce, how often should we validate and check for these security controls? Every time we receive data, not only from the client side but also from sources like the database, we should validate it at that point to ensure that it is in the right format and of the right length. Any exceptions should be logged. Every time we output any data, we should contextually encode it to neutralize any characters that can trigger injection. Every time we save data to the database, we should parameterize the queries to separate the data from the actual SQL command. If we really need to use an operating system command, and sometimes you don't need to, we should parameterize the data, again, to separate the dynamic data from the actual command that is about to be executed. Every time we encrypt data, we should store the keys in dedicated key management solutions. Any transfer should be done via TLS, including behind the firewall. Every time we bring in a library, we should choose a software design pattern that helps us reduce the attack surface, exposes only the functionality that we really need to our own software, and allows us to quickly upgrade or even replace that library if it has become obsolete.

Are these the only controls? Not necessarily, because we can also add to this list. For example, in the latest OWASP Top 10, 2017, there was a new entry, XXE. The control for that one is to harden the parser. We can add that control to our list as well: harden the parser every time we receive an XML document. It's also important to do this in a consistent manner in order to effectively prevent vulnerabilities in our software. We have to remember that an attacker needs only one flaw to bring down a system. As builders, we have to defend everything.
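
A minimal sketch of hardening a JAXP XML parser against XXE follows, using the commonly documented feature flags; adjust it to whichever parser your application actually uses.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public final class HardenedXmlParser {

    // Harden the XML parser against XXE: disable DTDs and external entities
    // before parsing any untrusted document. The feature URIs are the
    // standard JAXP/Xerces ones documented by OWASP.
    public static DocumentBuilderFactory newFactory() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}
```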

Takeaways

What I would like you to take away is that instead of focusing on these vulnerabilities, which we can measure only after the software has been developed, we should focus on the security controls that prevent them. I have just gone through some of them. When you go back to your company, have a look at which security controls are relevant for your own applications, and have those implemented consistently in your software. It's also important to ensure that once we have implemented them, and implemented them correctly, we verify that they are there and correctly put in place, so they effectively prevent vulnerabilities in our software. What we need to do is, just like Rick Rescorla, change the culture, and focus on the security controls that help us build more secure software from the start.

If you want a little bit more, you can have a look at the OWASP Top 10 Proactive Controls. There are more security controls there. For the actual details of how to do various low-level technical things, you can have a look at the OWASP Cheat Sheet Series to see how to do a certain task in your particular software.

Questions and Answers

Participant 1: What is your opinion on network firewall as a security control in the modern world of microservices?

Anton: It doesn't work. The best example of that is Equifax. Equifax is a very good example where a network firewall could not have prevented that type of vulnerability. Still, upgrade your libraries.

Participant 1: What will we use instead, nowadays, instead of the firewall? What would you recommend?

Anton: The security in the software. That's the best one. Firewalls have been out there, but they have shown that they do not prevent everything; there are certain types of vulnerabilities that they do not work for. That's why putting all these controls into your software is probably the best way to go. At the end of the day, these controls are not rocket science. You validate your data every single time. You parameterize your query every single time. You need to be consistent. Also, apply clean code principles to make it easier for your fellow developers, because if they go into that bit of code and don't understand it, that's when further vulnerabilities and further errors get introduced. If they can understand what it is about, then they can manage the code as well.

Moderator: You're saying that firewalls are reactive security controls. They react to something going on. We should start from proactive security controls, something we build into our code and into our infrastructure to prevent problems in the first place, and then use reactive controls to limit and contain.

Anton: Yes.

Participant 2: My question is about intrusion detection. You actually gave a few examples of when we should log attempted attacks. Are those logs actionable? I'm asking because in large systems, it is very difficult. It's simply not feasible to address every attempt. That's why we implement protection in the first place.

Anton: The problem in many companies, and in particular large companies, is that there is a lack of standards for the logs. If those logs are processed and automated, then you'll be able to detect suspicious activity in a more reasonable amount of time. If they are not consistent enough, that's when you're going to have problems.

Participant 2: What would you do once you detect something? In a large system, if you have a lot of attempts and you log them, it's not feasible to react to every single one. Let's assume the logs are clear and standardized. What do you do with them? Are they just for forensics, or at some level of attempts do you start doing something about it? You just cannot react to every single attempt.

Anton: If you log the right information, when that happens, you should be able to detect any intrusion. For example, if somebody enters obvious SQL injection strings, that should be detected. There have been cases where that particular IP has been traced back to a certain person, and then forensics followed.

Moderator: You can also distinguish between logs and security events. They can just be different things. You grab and store logs for forensics, but you use security events as sources of alerting. You build metrics on security events that say that the number of decryption mistakes or authorization errors started growing during the last hour. This is something you react to. You build alerts based on anomalies. These anomalies you can calculate by grabbing logs or security events, and you can use SIEM, Security Information and Event Management systems, and log analyzers with them.

Participant 3: I think I missed what you just said about static code analysis as a tool to help teams build in security. We very successfully use SonarQube, and it does detect security smells. It has the OWASP plugin. The other option is that you farm your code off to the cloud and they do some analysis for you, and you effectively manage your false positives. Would that be part of the toolset for you?

Anton: Definitely. You have various tools, and each tool will give you a view. Static code analysis is a very good one. It helps you go down to the code level. This is what I actually do a lot of. The way I deal with it, I start off with what exactly is wrong and then dig in a little bit more. Sometimes I go not only to the code level, but to the architecture level, that type of flaw. It's a good way to find vulnerabilities that might not be found with a dynamic scan or a Pen test. It's a good one. It can help you out with architecture as well.

Participant 4: It's really complex and frustrating to use this framework because you cannot control everything. It's always frustrating when you cannot be sure that [inaudible 00:52:57] follows the rules and recommendations you give. The key point for me is the culture across the company. What would you recommend to deploy these recommendations across a medium-size company? How can you be sure that all your developers are really aware of what they have to do to follow these recommendations?

Anton: Can you repeat the question?

Participant 4: Particularly, what would you recommend to deploy this framework in a company? Would you, for example, do some OLA sessions with slides, maybe? Would you do some serious games? How would you deploy this thing in a company?

Anton: To build more secure software?

Participant 4: Yes. To be able to force some developers to apply these things.

Anton: One of the things that I've done in my company, and it really works, is some basic web defense training, to actually help developers see that their code can go wrong. I've also asked them to break their own code. It helps them change their mindset and realize that things can go wrong with their own code. Some people will want to learn more about this, and they should be encouraged. If you find developers who actually enjoy this, encourage them, because they can spread the word much better within the developer community.

Moderator: There is an idea of having security champions, people who are curious. Not always developers, it can be any person with any title. If they are curious about security, you can encourage them. You can help them share knowledge internally, to speak, to educate. Also, when you try to implement SSDLC, the Secure Software Development Lifecycle, you educate developers that security is part of the process, like design, patching, testing, and quality assurance. Security is part of this. It's not something you do at the end, when the application is ready. It's piece by piece. At some point, it may be a good idea to tell developers that not many companies do this, that they're better than many of them. At some point, developers start to feel like, "We're really cool. We are doing really cool things." Mind games.

 


 

Recorded at:

Aug 25, 2020
