
The Three Faces of DevSecOps


Summary

Guy Podjarny unravels the different stages in the evolution of DevSecOps. He separates the term into securing DevOps technologies, methodologies and shared ownership, giving concrete examples of good and bad in each. In the end, he offers tools for choosing our own interpretation of DevSecOps, and for picking the practices and tooling we need to support it.

Bio

Guy Podjarny is a co-founder at Snyk.io, focusing on open source and cloud security. He was previously CTO at Akamai following their acquisition of his startup, Blaze.io, and worked on the first web app firewall and security code analyzer. He is a frequent conference speaker, the author of "Responsive & Fast" and "High Performance Images", and the upcoming "Securing Open Source Code".

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Podjarny: Thanks for coming back into the session. We'll talk about the three faces of DevSecOps, and like the first talk in this track- not sure if you've noticed- I also have over 100 slides to get through in 40 minutes, so just brace yourself. A few words about me: I'm Guy Podjarny, or @guypod [on Twitter]. I'm the CEO and co-founder of Snyk. If you haven't heard of us, come check us out at the booth. I previously founded a web performance company called Blaze; it was acquired by Akamai, where I was CTO for a bunch of years. In general, I've been in the security space since about '97, and have been working on DevOps and performance- as a member of the Velocity program committee and the like- since about 2010. I've been doing a lot of this type of writing and speaking.

What are we going to talk about? We all love DevOps- if you don't love DevOps, you're welcome to leave. Otherwise, we probably all love DevOps. But why? Why are we doing this? Let's come back a little bit to the core principles, to what drove us here in the first place. There are a million definitions here; I'm going to choose mine, which is totally self-serving: fundamentally, we use DevOps because of the speed. We use DevOps because it allows us to deliver value and adapt to market needs faster and at scale. That's the core principle, whether it is about efficiency internally, whether it's about user value- hearing a need from a customer and being able to deliver on it for that person- or whether it's adjusting to market needs, because the market changes on us all the time, in part because of DevOps technologies, and we need to be ready for it.

But what does doing DevOps mean? What is doing DevOps? Once again, google that and you're going to get 70 different opinions from 50 people. I'm going to choose a specific framing here, which says that when we talk about DevOps, you can split "doing DevOps" into three core buckets. When somebody says they're doing DevOps, what do they mean? These are roughly in order of evolution- and I'll use this as the scaffolding for the rest of the talk. First, DevOps technologies: containers, cloud, and the like. Second, DevOps methodologies: microservices, continuous integration, changing how we work. And last but not least, DevOps shared ownership: the idea that DevOps breaks the barriers, that software is not something you throw over the wall, that it's everyone's problem to operate the software and everybody's responsibility that it is of high quality.

Using this foundation, what does DevSecOps mean? DevSecOps is this buzzword that I sort of love and hate. I love it because it represents a notion that I believe in, around changing the way we do security. I hate it because a buzzword cannot contain nuance, and everybody now uses it for substantially different purposes. But we're rolling with it; it's the term we have, just as DevOps is a very imperfect term. I'll use the same format to guide us through how to think of DevSecOps: first, securing DevOps technologies; second, securing DevOps methodologies- or rather, embedding security into DevOps methodologies; and, last but not least, including security in DevOps shared ownership. Hopefully, by the end of the talk, I'll have given you some tools to assess somebody saying, "I'm doing DevSecOps," or a thought of, "I want to do DevSecOps," and to split that up into things that are a little bit more useful.

Securing DevOps Technologies

Let's get going. Let's start by talking about DevOps technologies. DevOps created a whole slew of these new technologies. Some of them it created; others it just popularized- open source was there for many, many years before, but with this movement its use accelerated, and accelerated substantially. So cloud, containers, serverless more recently, open source libraries and components- the fact that we assemble software today versus building it all or writing all the code- a lot of these different technologies.

That creates two types of problems for security- again, simplifying. The first problem is fairly technical, fairly administrative, which is that security solutions oftentimes simply don't work in these new surroundings. You can't just take them as they are. The threats they address are still relevant, but the tools just technically do not operate in this new setup. Let's give a couple of examples. Web app firewalls. Who here uses a web app firewall? Fewer people than I would have hoped. Hopefully, some of you are using web app firewalls and just don't know it. Web app firewalls sit in front of your application and try to block attacks- somewhat successfully, and sometimes not; sometimes they block legitimate traffic. But as a whole, they try to block attacks that come into a site.
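
To make that concrete, here is a minimal, hypothetical sketch of the kind of rule a web app firewall applies. A real WAF ships thousands of curated rules (the OWASP Core Rule Set, for instance), so treat the patterns and the WSGI wrapping here purely as illustration:

```python
import re
from urllib.parse import unquote_plus

# A few classic SQL-injection fragments. Real WAFs use large, curated
# rule sets, not a short list like this.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),
    re.compile(r"(?i);\s*drop\s+table"),
]

def waf_middleware(app):
    """Wrap a WSGI app and reject requests whose query string looks malicious."""
    def wrapped(environ, start_response):
        query = unquote_plus(environ.get("QUERY_STRING", ""))
        if any(p.search(query) for p in SQLI_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Blocked by WAF rule"]
        return app(environ, start_response)
    return wrapped
```

The interesting part for this talk is not the rule logic; it's where this sits- in front of the application- which is exactly what breaks in auto-scaling cloud deployments.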

This is a visual from Imperva, maybe the leader in web app firewalls, at least on the appliance side. It has traditionally been an appliance- or maybe a VM instead of an appliance- that you put in front of the application. That worked somewhat well for the firewall and the web app firewall, but it really broke down when you start talking about cloud. How do you protect- and this is true of the appliance model in general, and of a bunch of these appliance security controls specifically- an auto-scaling web application? How do you address that? I'm starting from the easy ones. Simple, right? Very easy to solve. This is an actual diagram from the Imperva website showing how you auto-scale the firewall. We joke a little bit, but these architectures are indeed fairly elaborate. One solution is the immediate, very obvious one: "I'm just going to do the same thing my application does, and I will auto-scale in front of it." And then the very same company also introduced a different solution- my clicker here is borderline- which is the notion of offering this as a service. So a different way to address the cloud is to move to the cloud yourself, move your own services to the cloud, and put yourself in the line of fire. So this is Imperva.

This is one way to adapt; it's the same functionality. You still need a web app firewall as much as you did pre-DevOps, but you need a different way to apply it. And because this new model revolves around being in the cloud, it actually opens up an opportunity for other players that are already in the cloud, and already knew how to be in the line of fire to your site, to introduce these capabilities. Today in the web app firewall industry, some of the leaders are actually the CDNs- Akamai, Cloudflare- players that were already proxying your site and can now add this layer of protection, which before wasn't perceived as really the right place to deploy it.

So that was one aspect of cloud. Another troublemaker in the world of security, when it comes to DevOps technologies, is containers. Containers are very disruptive. They sit squarely in the twilight zone between application and infrastructure. I'll give you a couple of examples of why containers cause security people some grief. One is endpoint protection. Endpoint protection is the broader term for antivirus, malware protection, and anti-exploit tooling. Containers can be exploited just like everything else; they don't have superpowers. And if they're exploited, you want to know. You want to know if there's a virus on them, you want to know if something malicious is running. And yet, existing endpoint protection systems are very much designed to sit on bare metal, or maybe on VMs. They're not really designed for these elements.

How do you do this? How do you identify malware or a virus inside the containers that we think about from a dev angle? So once again, adaptation. The web app firewall example is something the industry has already embraced; this is more of a work in progress. But you can see things like this from Symantec, one of the leaders in endpoint protection: they introduced a cloud workload protection offering that has an agent sitting on the host machine of the containers. When you get into cloud situations where you don't have the host machine- you're just running a container on some cloud platform like Fargate- they're a little bit in trouble. But it is adaptation: they run those agents and they scan the containers.

A different problem containers create from a security perspective is: how do you patch your servers when your container is so ad hoc and disposable? Patching servers is a very important practice today. The vast majority of exploits or breaches happen because you have some system that's been left unpatched. I say today, but really, for the last decade or two that's been the case: unpatched servers are the primary cause of breaches. And yet, when you move into this container land, suddenly it's developers that push the container, and it has the OS on it. The IT person, or the person operating the servers- if they're even a separate entity anymore- sits there and says, "Okay, I found a vulnerability, what do I do?"

Suddenly, patching the system- even logging in and all that- is outside of their purview. It's not something they do; it's development that does it. So this is happening right now: there's a set of solutions that have adapted to this, and it's the same action of scanning an image. I used my own solution here, Snyk, but really there's a whole set of them- there's Clair, there are a bunch of commercial solutions out there- that can scan an image and find vulnerabilities: the same scan that you might have done on your infrastructure, just adapted to run in a different surrounding, on the container image.
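
As a flavor of how that plugs into a pipeline, here is a minimal sketch that gates a build on an image scan. I'm using Trivy's CLI as the example scanner- its --exit-code and --severity flags make it return non-zero on findings- and the image name is made up:

```python
import subprocess
import sys

def scan_image(image: str) -> int:
    """Fail the build if the scanner finds high/critical vulnerabilities.

    Trivy is one example scanner; --exit-code 1 makes it return a
    non-zero status when matching vulnerabilities are found.
    """
    result = subprocess.run(
        ["trivy", "image", "--exit-code", "1",
         "--severity", "HIGH,CRITICAL", image]
    )
    return result.returncode

if __name__ == "__main__":
    # Hypothetical image name; in CI this would be the image just built.
    sys.exit(scan_image(sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"))
```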

So the first challenge we have is that security solutions that are logically valuable in the DevOps context need to adapt to the surroundings, and if you are using those technologies, you have to map out the security concerns and apply the right controls to your surroundings. That is one problem with DevOps technologies. The second problem is the new risks these technologies introduce. Every technology has pros and cons; it introduces strengths, and it introduces weaknesses. That's just it- it's an axiom, it's a truism.

Let's take a look at a couple of those. Maybe the one that hits the news the most is unsecured buckets. It's not a brand-new problem born of DevOps- you could have had public-facing storage that was unsecured before cloud- but you didn't, and now we do. We have tons and tons of them, and that leads to a whole bunch of stories. Let me show you a couple, just because it's fun. Uber. It's so much fun to pick on Uber at a security conference; unfortunately, or fortunately, they give us a lot of room to work with. In 2016, attackers accessed the details of about 600,000 Uber drivers, and some personal info- could be fairly sensitive personal info- of 57 million Uber users in the U.S. That's pretty much anybody using Uber at the time in the U.S., and they leaked that information.

How did that happen? Well, a developer pushed S3 tokens into a private github.com repository- so they're in the cloud, on github.com. Somehow, we don't know exactly how, attackers gained access to that repository and went on to steal those tokens. There's a side story here, which is that Uber tried to bribe those people- paid them $100,000 through a supposed bug bounty program to try to keep them silent. There's a whole story there, less relevant to this talk, so we'll keep it out. But basically, a developer pushed the token. This is actually a slightly better version of what happened to Uber in 2014, when a developer- hopefully a different developer- pushed a secret URL to a public repository, a public gist in this case, which was found, and only 50,000 drivers' information was leaked then.
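
This failure mode is also one of the easiest to automate against. Below is a minimal sketch of a pre-commit hook that greps staged files for well-known token shapes (AWS access key IDs famously start with AKIA); real secret scanners, including those built into the Git platforms, use far richer rule sets and entropy checks:

```python
import re
import subprocess
import sys

# Well-known token shapes; real scanners ship many more rules than this.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def staged_files() -> list[str]:
    """List the files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, errors="ignore").read()
        except OSError:
            continue  # deleted or binary-ish file; skip
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: looks like a {label}")
    for finding in findings:
        print(finding)
    return 1 if findings else 0  # non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```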

You had an access key? You were lucky. I don't know how many people get the Monty Python reference here. But at least there was an access key involved. If you look at the news, there's a whole bunch of cases where there really wasn't an access key at all. You see Accenture, you see medical data, you see governmental data- these things just happen, and it seems like we get desensitized to them; every week there's a bigger and bigger blow. Michael showed some of those at the beginning. This is a problem that has been amplified by the world of DevOps, and we have to address it. We have to address the security risk and introduce security solutions that monitor for it.

A very close cousin to this problem is insecure configurations. We launch these databases very easily- Elastic, Mongo, a variety of others- and we need to secure them, because they might be insecure. We see, once again, a whole bunch of these types of stories. I was waiting until the last minute to add to these headlines, because I knew there were going to be some fresh ones. Dow Jones just now had data on 2.4 million high-risk individuals- people flagged for risk of fraud, money laundering, and the like- leaked through an insecure Elastic database. A very similar Elastic database was also leaked from Rubrik, a big data backup company- something ironic about a data backup system exposing its own database. And not to pick on Elastic: there were actually 28,000 public instances of Mongo found a couple of years ago through Shodan, the search engine, that exposed information- they just used the default credentials and were exposed to the web.

These are new risks, and you need new security solutions to address them. Indeed, you see a couple of kinds of solutions come up. On one hand, you see cloud security configuration solutions that statically scan your setup. CloudCheckr is one example- I have a slightly grainy picture of them here- that scans your config and finds the cases where you are indeed insecure. If you're going to use cloud configuration en masse, you need to apply those solutions. A different angle on the same problem comes from an expansion of an existing practice, which is scanning from the outside. Who here has heard of Nessus? Okay, some good hands. Nessus is a very old, tried-and-true, awesome tool- it started as open source- that can scan a system and find problems by probing it: "Hey, do I see this thing installed? Maybe I try to get in." Tenable has a slightly more cloud-oriented, slightly broader version of that. I use Tenable and CloudCheckr really just as examples; each of these is a subset of an industry.
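
To ground what "statically scanning your config" means, here is a minimal sketch of one such check- listing S3 buckets whose ACLs grant access to everyone- using boto3. A product like CloudCheckr runs hundreds of checks like this across services; this is just one:

```python
import boto3

# The AWS-defined group URIs that mean "everyone" and "any AWS account".
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets() -> list[tuple[str, str]]:
    """Return (bucket, permission) pairs whose ACL grants public access."""
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
                public.append((bucket["Name"], grant["Permission"]))
    return public

if __name__ == "__main__":
    for name, permission in find_public_buckets():
        print(f"PUBLIC: {name} grants {permission} to everyone")
```

Note this only inspects ACLs; a fuller check would also look at bucket policies and the account-level public access block settings.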

So that's cloud; it's an example of a new security risk. You didn't think I'd continue without containers. Containers introduce their own world of security risks. Maybe the best-known is sandbox escaping. Containers are awesome- they're very lightweight, they're very quick. Part of the reason they're so quick and easy to work with is that they're not fully isolated. Unlike a VM, they do allow some sharing of resources between containers that run on the same host. The risk is that a malicious or compromised container jumps up to the host and is able to affect neighboring containers. If that was a cloud instance, and somebody else's compromised container was compromising your container, you wouldn't find that so funny, right? It's not something you want. We just had a recent reminder of this with a serious vulnerability in runC- the component that actually runs the container- that did precisely that. It was very widespread, across the cloud providers as well, and it allowed a malicious container to break out into the host and get root permissions there.

Once again, a whole set of companies come along. These are slightly more complicated problems, so you actually see more startups kicking into this space, as opposed to adaptations from the bigger companies- as problems get more complex, it tends to be more the realm of startups than of established companies. Twistlock, Aqua, and Trend Micro is veering into that space- that's probably the biggest company I'd point out as making good strides here.

To summarize securing DevOps technologies: DevOps introduces all these different technologies, and these technologies create security challenges. When you think about securing them, you have to think about two aspects. One: look at the security solutions you already have in place and think about whether they are still relevant in the context of DevOps, and how to apply them. And second: think about these new technologies, the security risks they introduce, and what new security solutions you want to apply there.

Security in DevOps Methodologies

That was technologies; let's go on to methodologies. DevOps also changes methodologies. If you haven't noticed the number of times people have said microservices and CI today, there must be some buzzword bingo going on, right? How many times did CI get said in the keynote? So let's look at these problems. I'm going to try the clicker again, see if it works for me this time.

CI/CD. CI/CD is very interesting for security when you talk about pipelines. In concept, it is a positive thing: it allows security automation. In practice, it's harder. Let's dig in. Security has traditionally worked in a methodology of saying, "These are the points at which I will audit. You build your software in this waterfall model, and even if you're forward-thinking and that point in time is not just before you ship, there are points at which you stop and I audit. So pause here, give me a couple of weeks, I'm going to audit." That's never really been an awesome idea, but that's the way security has worked- in fact, it still works that way in many places.

In CI/CD, you can't stop- it's continuous. That's the whole notion: there is no stopping, it just rolls out. The solution, conceptually, is the same element that creates the problem: introduce automation into the pipeline, in a continuous fashion, that does security testing. From a security mindset, you want this done both statically and dynamically- to explore the things being built as well as the systems being deployed. That works; that is a solution, kind of. So let's dig into the three primary security capabilities that actually get put into CI/CD.

The first one is static analysis. Static analysis means scanning your code. It does something called taint flow analysis, where it tries to theorize how data flows from a source, like a form field, through your code to a security-sensitive sink, like a database call, and checks whether it has been sanitized along the way. It scans your code and finds vulnerabilities. Conceptually, great. Wouldn't you want to just throw it in, scan in the build, find the vulnerabilities, and fix them? It's the same as a linter- except security static analysis takes hours to run, hours or days, depending on the size of the code base, and that's with the modern tools. Builds don't take hours. If you introduce something that adds 10 minutes to the build, generally there will be some outcry. If you introduce something that adds hours to the build, the whole notion of a blameless culture might go out the window. There might be some challenges involved. So this industry had to adapt; they started at a good place with automation, but they had to adapt.
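
To make "source to sink" concrete, here is a minimal, self-contained sketch of the exact pattern a taint flow engine hunts for. The function names are mine; the vulnerable/safe contrast is the standard one:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Source: `username` comes straight from a form field.
    # Sink: string concatenation into a SQL query - the tainted data
    # reaches the database call unsanitized. This is what taint flow
    # analysis flags: username = "' OR '1'='1" dumps the whole table.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles the value safely, breaking
    # the taint path from source to sink, so the scanner stays quiet.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```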

How do they adapt? Fundamentally, incremental scans. You run the massive scan over the weekend- that still takes its couple of days- but what you run in the build is a smaller scan that only tests the delta. These scans are not as comprehensive, but they're good; they'll still find some issues. That's static analysis. Side note, but an important one: static analysis still has the challenge of reporting many, many false positives. At the end of the day, running a test that is flawed, that doesn't give you the right results, is not great either. That is a separate challenge the static analysis industry is facing right now.
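
A minimal sketch of the delta idea, assuming the slow analysis is wrapped in a per-file scan_one function you supply- the build scans only files changed since the main branch:

```python
import subprocess

def changed_files(base: str = "origin/main") -> list[str]:
    """Only the files touched since `base` - the delta the build scans."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def incremental_scan(scan_one) -> dict[str, list]:
    """Run the (slow) per-file analysis only on the changed files.

    `scan_one` is whatever expensive check the full weekend scan runs,
    applied here to a much smaller set of inputs.
    """
    return {path: scan_one(path) for path in changed_files()}
```

The trade-off is exactly as described in the talk: cross-file flows that start outside the delta can be missed, which is why the comprehensive scan still runs out of band.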

So that's SAST- these are Gartner names, so apologies. Next, DAST. Dynamic analysis, which used to be called black-box testing. This is a slide I meant to animate- assume these bullets come in one by one. What does it do? It's like an automated hacker. You launch it against a running system, and it goes off and tests it as a black box: it starts probing it to see if it can break in, running SQL injection payloads, cross-site scripting payloads. DAST, again, is conceptually automation- a positive thing you want to run, and it should be runnable in the build- but it has two challenges. One, it requires a dedicated environment to run against. Some of you probably have a setup where you run the build, deploy an environment, and test against that environment. That is something everybody wants and most people don't have- most pipelines don't have it. That's a challenge. And, once again, it's very, very long. It takes a very long time to complete.
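
For a taste of what that black-box probing looks like, here is a drastically simplified sketch using the requests library. A real DAST tool crawls the app and runs thousands of payloads; this sends just two against a single, hypothetical parameter:

```python
import requests

XSS_PROBE = "<script>alert(1)</script>"
SQLI_PROBE = "' OR '1'='1"

def probe(url: str, param: str) -> list[str]:
    """Black-box style checks: send payloads, inspect the responses."""
    findings = []

    # Reflected XSS check: does our payload come back unescaped?
    r = requests.get(url, params={param: XSS_PROBE}, timeout=10)
    if XSS_PROBE in r.text:
        findings.append("possible reflected XSS")

    # Error-based SQL injection check: does a quote break the query?
    r = requests.get(url, params={param: SQLI_PROBE}, timeout=10)
    if "sql syntax" in r.text.lower() or "OperationalError" in r.text:
        findings.append("possible SQL injection (error-based)")

    return findings

if __name__ == "__main__":
    # Hypothetical staging endpoint and parameter name.
    for finding in probe("https://staging.example.com/search", "q"):
        print(finding)
```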

What's the adaptation here? By and large: don't use DAST. That's the adaptation that is actually happening- it just doesn't get embraced into CI/CD much. But a slightly smarter answer, still finding its way, is the notion of IAST, interactive application security testing, which means you instrument the application, run your unit tests, and then the tool tries to deduce from what it observed whether there's a security flaw. It learns your application through your unit tests and applies a security perspective to them. It's not exactly DAST, but it's similar. It's less comprehensive, but it works within the unit tests you're already running. So, very imperfect- this space is really struggling with these methodologies- but it might work.

Within DAST, there's an interesting alternative approach from a company called Detectify- I kind of like their approach. They said, "You know what? It's never going to be fast enough to run in the build, and you're never going to have these dedicated environments. So we're going to test your production or staging environment as it is." From the build you kick off a scan, but the scans run out of band- they come in asynchronously afterwards, scanning the application that your build system has deployed. That's interesting. There are pros and cons to it; I'm just sharing the approaches the industry is trying.

Then the last one people run in the build is SCA, software composition analysis- scanning for open source vulnerabilities. This is the one you actually see adopted more. I'm a little bit biased, because this is the space I live in, but it's simply more DevOpsy in its mindset: it's a fast scan that explores which open source libraries you are using and tells you whether they are vulnerable. You might break the build if you find a library with a vulnerability or some license problem. It's fast, it's accurate, it's naturally CI/CD friendly. It has become the sample of success, and now the conversations go, "Hey, can you also do this for SAST?"- can the other technologies be applied in a similar fashion?
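
A toy sketch of the core SCA check- matching a dependency manifest against an advisory feed and breaking the build on a hit. The advisory entries below are made up; real tools pull curated vulnerability databases:

```python
# Toy advisory database: (package, version) -> advisory. Real SCA tools
# pull curated vulnerability feeds; these entries are invented examples.
ADVISORIES = {
    ("lodash", "4.17.11"): "hypothetical prototype pollution advisory",
    ("requests", "2.5.0"): "hypothetical CVE-XXXX-YYYY",
}

def check_manifest(dependencies: dict[str, str]) -> list[str]:
    """Return findings; a CI wrapper can fail the build if any come back."""
    return [
        f"{name}=={version}: {ADVISORIES[(name, version)]}"
        for name, version in dependencies.items()
        if (name, version) in ADVISORIES
    ]

if __name__ == "__main__":
    findings = check_manifest({"requests": "2.5.0", "flask": "2.3.0"})
    for finding in findings:
        print(finding)
    raise SystemExit(1 if findings else 0)  # non-zero breaks the build
```

This is also why SCA fits CI/CD so naturally: the check is a fast lookup over a manifest, not an hours-long analysis.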

CI/CD is maybe the pillar of DevOps- what's the one technology that really enabled it? Actually, I don't know; maybe cloud, it's hard to say. It's definitely one of the key technologies in DevOps. Another key one- which, amazingly, was marked as early-or-late majority in the slides this morning- is microservices. What do we care about in securing microservices? Well, from a security standpoint, monoliths are really convenient. They have a clear perimeter; their flexibility is limited. There's a controlled flow: there's this set of inputs and this set of outputs, and you might have a mess inside, but I don't care- there are inputs and there are outputs, and deploys happen in one full unit that can be audited and tested.

Microservices are a mess. That's generally true, but it's specifically true from a security perspective. Suddenly there are all these disparate entities, deployed independently- what the hell is a perimeter at this point? The flows can change: data that yesterday went through these services today goes through those services. It's a mess from a security perspective, and you need solutions, some ways to address it. We're getting into territory where my examples of solutions are increasingly thin, because this is where the security world is slower, more behind. But you see solutions like Aporeto's monitoring of different microservices- starting to ask, from a monitoring perspective in general, and security monitoring specifically: can I track data flows across those microservices and secure them? If I learn those flows, maybe I even apply some AI to understand what is normal, and flag anomalies. So: accept the mess, monitor it inline, and also try to visualize it to the user.

A similar challenge happens on the deployment side. Before, you had these wholesale deploys, and into those wholesale deploys you could install an agent that does this type of monitoring. Today, as we discussed, developers deploy some Dockerfile with a bunch of content inside it. You have to adapt your installation- a mundane bit, but critical for success. You see Signal Sciences, who actually do a lot of cool things around adapting to DevOps (they just don't have a lot of great screenshots for me to share here). One of the things they say is, "Okay, install this monitoring agent that we still require as part of your Dockerfile"- copy it in, and it becomes a natural part of your application, at least the installation bit.

Security solutions have to adapt to these new methodologies in order to stay relevant. The good news is that these methodologies also offer an opportunity. There's this general conversation about whether DevOps is good or bad for security, and everything I described up until now was pain- the negatives from a security perspective- but these methodologies also present an opportunity to improve how we work. Let me show you some examples. Maybe the best example is around response. If you have this big VM and you detect malware on it, what do you do? You have to alert people, you have to start containing things, and all that. If it's a container, you just kill it. You remove it from the equation, a new one spins up, and unless your source image is compromised, that new one will not be compromised. That's awesome- a solution far more powerful than what we could do before. And indeed Aqua, Twistlock, all these companies- and Sysdig, with Sysdig Falco as an open source version of this- do precisely that when they catch a violation.
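
As a flavor of how simple that response primitive is, here is a minimal sketch using the Docker SDK for Python. In a real deployment, the detection tooling and the orchestrator's restart policy handle this for you; the container name is hypothetical:

```python
import docker  # the Docker SDK for Python (pip install docker)

def kill_and_replace(container_name: str) -> None:
    """The containerized response story: don't clean it, kill it.

    The orchestrator (or a restart policy) spins up a fresh copy from
    the image, which is clean unless the image itself is compromised.
    """
    client = docker.from_env()
    container = client.containers.get(container_name)
    container.kill()              # immediate containment
    container.remove(force=True)  # discard the compromised instance

if __name__ == "__main__":
    kill_and_replace("suspicious-worker-3")  # hypothetical container name
```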

Continuous deployment also means faster patching. How many people in this room have had to handle the red-carpet deploy when there was a serious problem? Not many- maybe the term I use is a little different- but there was some severe problem in production, and you had to do an out-of-band deployment to get a fix out urgently because of a severe security vulnerability. Now, guess what? You have a pipeline; it goes straight to production, it's the paved road. So you want to embrace those. It makes us fast; it makes security teams fast.

As we talked about before, CI/CD is room for automation. Security teams have all this desire to apply constraints and policies, to embed security questions and necessities inside the development process, and CI/CD opens the door to doing that- it's a home for this type of security testing. It's great in that sense. Then there's a slightly more modern version, which I'm personally a big fan of: this notion of GitOps. When you talk about pipelines, many people in this room might have 15, 20, 100 different pipelines- different systems, different people running them, who knows how it all works. But increasingly, those same organizations have consolidated on a single GitHub, Bitbucket, GitLab, or Azure DevOps environment, and more and more of their applications are running on, or moving toward, that single platform.

GitOps- or really these Git platforms, since most of this is not in the core Git protocol- actually allows us really interactive controls. They allow us to fail pull requests. They allow us to open pull requests with fixes and recommendations. They allow us to leave comments. So you can build automated (or not automated) security tools that just use that source platform as the review surface. I think security GitOps is going to be a big deal moving forward.
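
As an example of the kind of interactive control this enables, here is a minimal sketch that reports a security scan result as a commit status through GitHub's commit status API; combined with a branch protection rule, a "failure" state blocks the merge. The owner/repo names and the GITHUB_TOKEN environment variable are assumptions about your setup:

```python
import os
import requests

def report_scan_status(owner: str, repo: str, sha: str, passed: bool) -> None:
    """Mark a commit with a security check via GitHub's commit status API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}"
    resp = requests.post(
        url,
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={
            "state": "success" if passed else "failure",
            "context": "security/scan",  # the check name shown on the PR
            "description": "No known vulnerabilities" if passed
                           else "Vulnerabilities found - see scan output",
        },
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Hypothetical repo and commit; in CI these come from the build context.
    report_scan_status("example-org", "example-app", "0123abc", passed=False)
```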

To summarize, security for DevOps methodologies is about, one, adapting the existing security solutions so they're able to run in these environments. You can't just throw them out the window- well, you can, but you'd be exposed; the security risks they tackle are real, and just as real in this new world. You have to adapt them to these new surroundings. And the second thing the methodologies open up is opportunities. You don't want to just be chasing your tail; you also want to be looking forward and asking, "What can I do better now than I did before?"

Include Security in DevOps Shared Ownership

For the last section: when you pause to think about everything I just described, a lot of it was also relevant to virtualization, or to mobile security- those trends also changed a lot of the technology stack and required security tools to adapt. But one of the things that's interesting in DevOps, that didn't happen as much in those others, is this notion of shared ownership- the changes to people, the changes to culture.

That gets me to the last bit, which is maybe the most important one: the notion of including security in DevOps shared ownership. Let me tell you a story here. It sounds like the rabbit and the hare, or whatever- the tortoise and the hare: the Syrian Electronic Army and the Financial Times. This is the story of an attack that happened a few years ago. A bunch of employees at the FT started receiving phishing emails with a seemingly CNN link that was actually a false link- just an HTML link leading to an attacker-controlled website. A subset of people clicked it and got redirected to a spoofed FT single sign-on page- a page that looked like the FT single sign-on page; I think it was actually a Google single sign-on page- and a subset of those entered their passwords.

So some people at the FT got phished, and now the attackers had their passwords. The attackers used those compromised accounts and emailed similar phishing emails from those FT addresses to other FT employees. Now they had better credibility, because the emails came from internal FT addresses, and they also adjusted them a little to match internal usage. So more users got compromised. Then comes my favorite part. IT finds out, and they send an email to everybody saying, "Don't click these links, here's what you need to do..." The attackers see that email- they're in the inboxes- and they send an identical email. Identical. So it's like you got the email twice, except if you click the link in this one, it takes you to the attacker's website. Genius.

Long story short- there was more evolution here, but long story short- the attackers gained access to a whole bunch of official Twitter accounts and blogs. They just wanted it for vanity and to make statements, and the FT, being a true journalistic entity, actually chased them down and wrote a story about the hack later on. But most of this information we know thanks to a great blog post by Andrew Betts, who's a brilliant guy and a very security-conscious developer- maybe the most security-conscious developer I know- who wrote a post called "A Sobering Day," well named. It describes how he was one of the people who got compromised. And he was actually a highly privileged user, because he's a developer- a developer in a DevOps shop. He has access to systems; he has access to a lot of things.

He writes in the post that "developers might well think that they'd be wise to all this. And I thought I was." When we think about phishing, we think about the Nigerian prince scam, or those emails riddled with spelling mistakes; we don't think about an email that looks like it's coming from our IT department where only the email address is different. In fact, I interviewed Masha Sedova, who used to run security education at Salesforce, and they ran a phishing test across Salesforce. She shared who came out worst, and the worst group was marketing. Before you laugh too hard, the second worst group was developers. To an extent, marketing can be excused- it's kind of their job to explore and click all these links; they send these links as well. Developers, though, generally fail because they think they're better. We think we're smarter, that we'd spot the problem.

So in this world of DevOps, compromising a highly privileged developer is hitting the jackpot; that is a very, very good target for an attacker. You have to remember that DevOps makes developers more powerful than ever. Couple that with the fact that the pace of shipping code is skyrocketing, and with very routine access from developers into production systems and user data. Take all of that and add the fact that in a typical organization there are 100 developers to 1 security person- I'm not sure about the 10 ops in that ratio, there might be fewer ops per developer, but 100 to 1 is oftentimes a generous rate, definitely in application security- and you get to the inevitable conclusion that as developers, we cannot outsource security. You can't have it be another team's problem. It is core; nobody else can keep up, and that is just going to get more and more true.

What do we do about it in the context of DevOps? Well, first of all, the good news. We ran a survey and asked a whole bunch of developers- it was dominantly developers who filled it in- "Who's responsible for security?" The biggest answer was us: developers. 81% of respondents think developers should at least co-own security. You can see the numbers don't add up to 100%- it was a multi-select question- but developers is the answer that really rose to the top. The other bit that came out is that 68% of respondents feel developers should own security in the container world- the security responsibility for container images.

The intent is there- people feel they should do this- but there are two primary challenges. One is the tooling. Security tools are generally designed for security professionals. I know; I've built some earlier in my career- AppScan, and AppScan Developer Edition. It had "developer" in the name, but it wasn't really for developers; it was an auditor tool integrated into Eclipse. From a security perspective, we need to understand that integrating a security tool into IntelliJ- at the time it was Eclipse- doesn't make it a developer tool, nor does running it in Jenkins. That doesn't make it a good developer tool.

What does make a good developer tool, from a security perspective or in general? Well, great documentation, like that of Auth0. It's not really a security solution, it's a functionality solution- it's about authentication- but it's a security-conscious company with amazing self-serve documentation. On that note, the ability to run things self-serve, like HashiCorp Vault, which has great open source self-serve tooling. That one is very much a security solution- a secret management system- and if you're not using it, or one of the KMSes, I think it's a very good choice.
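
To show how developer-friendly that self-serve model is, here is a minimal sketch of an application fetching a secret from Vault at runtime using the hvac client. The Vault address, the secret path, and the field name are illustrative assumptions, and it presumes a KV v2 secrets engine:

```python
import os
import hvac  # HashiCorp Vault client for Python (pip install hvac)

def get_db_password() -> str:
    """Fetch a secret at runtime instead of baking it into code or config.

    Assumes a KV v2 secrets engine with a secret at `myapp/db` holding a
    `password` field - path, field, and address here are illustrative.
    """
    client = hvac.Client(
        url="https://vault.example.com:8200",
        token=os.environ["VAULT_TOKEN"],  # in production, prefer an auth
    )                                     # method like AppRole or Kubernetes
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
    return secret["data"]["data"]["password"]
```

Contrast this with the Uber story earlier: the credential lives in Vault, not in a repository, so there is nothing for a leaked repo to give away.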

It's also about educating non-security experts. Take PagerDuty- in its own right not necessarily a security company, but they do a lot of incident response, including being used a lot for security purposes. When you look at the good developer tools, they have a lot of education out there that caters to people with typical developer knowledge. They don't push content that assumes you're a security expert; they push content that assumes you're a developer, familiar with dev technologies, and they explain how to handle incident response, security included.

Then my favorite is actionability. Generally, a developer's job is not to find issues; it is to fix them. That feels so trivial- except all security solutions just find issues; they just create work. Then you're surprised that people hide under their desks when you walk around with this bulk of issues that they need to fix. That's not the mindset. The mindset should be one that helps fix the issues. You want to find or build- depending on whether you're providing tools to your team or consuming them- security tools that developers will actually use, actually embrace and consume.

The second challenge, which is maybe the biggest, is adoption. The first one put the onus more on the security industry; this one is: how do you get developers to embrace security, and security to embrace dev? Unfortunately, I don't have a one-two-three list here, but what I do have is some advice from people who know better than I do. I have the pleasure of running "The Secure Developer" podcast, and I've had some great guests who run security teams with modern approaches to security, or who work with very modern developers.

I've picked a handful of them to quote a few examples of how they do it- how they get their dev teams to embrace security, or vice versa. I have four. The first is PagerDuty; I had the whole team there- Arup and two other folks. These are long, 30-40 minute podcast episodes; I hope you enjoy them if you check them out. They said, "We have a phrase we like in our security team, which is: we're here to make it easy to do the right thing." Their goal is ease of use. I love that notion. There's how much you care about security, and there's how easy it is to do; you need to make doing it easier than the level at which people care. You can inch up how much people care, but you can really dial up how easy it is. One of the things they do for that is treat security problems as operational problems, and that's true of the products they use as well: they use Chef, Splunk, AWS tooling, and their own PagerDuty tooling for it. So that's good advice there.

Other advice comes from Optimizely. Kyle Randolph, who was, I think, their first security hire, talked about giving out T-shirts: they look for developers who do good security things and award them a "security hero" T-shirt. I also had Zack from One Medical, who talked about hoodie-driven security. They give out hoodies in a similar fashion- very exclusive, high quality- and it makes people want them. A simple social incentive. He also made a comment similar to PagerDuty's: they use Spinnaker a lot as a security tool- not a dedicated security tool, but very useful for them.

The New Relic CSO talks a lot about teams. He talked almost about the negative sentiment you can create here: you can turn off a developer very easily if you give them unactionable information, or something they don't understand or don't know how to fix. Basically, if you just make work for me, I generally don't want to hear from you- that's a natural human response. If you're just creating work for me and you're not helping me, you're not my favorite person. That's just natural sentiment.

The Slack CSO, Geoff, talked about org structure- this is more a lesson for dev teams, maybe. Security at Slack was this delegated IT part of the company, and it actually moved to be a first-class citizen of the engineering organization, where it can effect change much more effectively. The second thing he talked about was the community bit: the Slack team- and a bunch of others now, but I think they started it- sends cakes and cookies to competitors' security teams that have suffered a breach or are in some tough state, just showing some solidarity, which I think is amazing.

So you want to look for ways to engage developers in security and vice versa. I'm kind of running out of time here, so just to wrap this section up: including security in DevOps shared ownership means, one, on the tooling and tech side, finding tools developers will actually use; and two, looking for ways to engage developers in security and vice versa. To summarize the whole talk: DevOps is all about delivering value and adapting to market needs faster and at scale. We do it for speed. If you don't address security, that's going to get in your way and nullify all of this value. What you want to do is secure DevOps technologies, secure DevOps methodologies, and include security in the shared ownership.

To recap what I've shown: technologies imply adapting the existing security tools to these new tech stacks, plus finding the new risks they introduce and doing something about them. Methodologies means, once again, adapting to the new methodologies, but also tapping into the opportunities they present for security. And shared ownership means finding the approaches that actually make developers embrace security, both from a tooling perspective and from an engagement perspective.

One last point before I close: we actually have it backwards. I talked about these three things in this order, but in practice they go the other way. DevOps is first and foremost about people. It's about the changes in how we work, and everything else derives from that. So if you were to do one thing of everything I talked about, it would be the third: embracing DevOps shared ownership of security. If you do that well, everything else will follow. Thank you.


Recorded at: Mar 29, 2019
