Transcript
Brunton-Spall: I am going to talk about the evolving practice of security. I am Michael Brunton-Spall. My pronouns are he, his, and him. If you have any questions or anything, you can email me. I wrote a book a couple of years ago on Agile application security. It's a little dated now because it's two years old and security's moving quite fast, but it contains a lot of the stuff that we're going to talk about today. I also write a weekly newsletter called Cyber Weekly that analyzes the news each week and gives you some interesting links that you might find worth reading.
This talk is going to really give you context for today's track in cyber security, and my aim is to give you three primary things. One is, why is security evolving? Why is it changing at all? Is it not just perfect as it is? Two, where we've come from: why it is that security people behave the way they do today, and why it is that we're asked to do things in certain ways. And then three, where are we going? What is the direction that security is going? How will that change? This is a high level talk. I will not be telling you what to turn on in your AWS account to get better security. I will not be telling you how public key encryption works, etc. Hopefully, I will give you a good overview of security and why the decisions and discussions you'll hear today happen the way they do. You'll hear from other people. Some of the people on this track are brilliant, and you'll hear some really good usable things from them, and hopefully this will give you a good context and good understanding.
Some Context
Really what I want to address is how do we rethink security practice in organizations? Because to me, I think there's some fundamental issues with security as it's practiced today in most organizations. And in order to do that, I'm going to give you a bit of context. So I'm going to rewind about 15 years or so.
Back in 2005, we had one of the first major data breaches that we recorded all information about, which is AOL lost 92 million customer records. One of their systems administrators was bribed by a spammer to just download the account name and the email address of everybody who had an AOL account back in 2005, which turned out there were 92 million of us. I think I may have been about 20 million of those, because I had multiple AOL accounts one after the other every time my trial expired, like many of us. But that was one of the first big data breaches, 92 million accounts went missing. The systems administrator was caught. They had received a small amount of money from the spammer. They received a much bigger fine for losing the data.
But in 2005, this is kind of what breaches looked like. The U.S. Department of Veteran Affairs lost a whole bunch of data. People got in and hacked some of the data. AOL again lost more information. But not that much goes on. We fast forward about five years, and actually what starts happening is breaches become more common, more visible. This, by the way, is Information is Beautiful, a data visualization on breaches. It all bubbles around, moves, and you can hover over any one and it'll tell you more details about it. It's a lovely way to explore the history of data breaches.
But here we have things like the Sony PSN hack, 77 million accounts went missing. That was LulzSec, whose members were given five-year orders preventing them from accessing computers for some of that. We've got the LinkedIn hack up here, from which 117 million records were stolen. The embassy cables. I worked at the Guardian newspaper when we had the embassy cables, so I was the beneficiary of that data breach, but the U.S. Embassy was less happy about the fact that the cables were now in my possession, in the Guardian's possession. We had Apple, we have Tianya, we have the U.S. Military, we have Heartland, we have all kinds of people, and this is in 2010.
Of course, security is really good, so in 2013 things were significantly better. What we actually have is increasing numbers of breaches. Target lost 70 million credit card details. OVH, a popular hosting firm. Adobe, 36 million, which was big because the way they stored passwords was actually quite poor, and it meant that somebody created the Adobe password crossword, which lets you go and look at somebody's password hint and then try to guess whether the password is correct. There are, I think, 250 crosswords you can do with increasing difficulty, and they're quite fun to do. It turns out many of them are quite easy. When people put their hint as, "My password - password1," and it's seven letters with a number, you kind of know what the password might be. The name of my dog, things like that. We have Ubisoft. We have JP Morgan Chase, 76 million. Deep Root Analytics, which was 198 million.
And of course we move into 2018, and it gets even better. So what we start seeing is Aadhaar, the Indian biometric database, one billion citizens' details which were available for purchase on the dark web if you knew where to go. Twitter, 330 million accounts in which the passwords were stored in log files by accident and potentially accessible to third parties. River City Media, Friend Finder Network, Myspace, MyHeritage, Firebase. One you can't actually quite see, because it's a tiny, tiny, tiny one at the very top, was the Hong Kong electoral database. So Hong Kong decided to have an election for city officials. They had backup laptops with the entire electoral database on them in a hotel room, just in case there was a hack of the electoral system and people had to go to the hotel to register. They put the laptops in a locked room. They locked the room. They went off to have the election. They came back at the end of the day and all the laptops were gone.
Maginot Line
That's what we call bad in cyber security. But this is what it's like in 2018. People, we do all these security activities, things we're told to do, but this is the background against which we're working. I'm going to go even further back in time. I'm going to go back to something called the Maginot Line. Does anybody know what the Maginot Line is? Hands up if you've heard of it. Quite a few people. It's a nice European audience. Essentially, the Maginot Line is a line of defenses named after André Maginot, the French Minister of War. And essentially, to sum up 1930s French military strategy: "We'd really like the Germans not to invade." That was what they thought. They had just been through World War I. It was very unpleasant for everybody involved. Nobody was sure, but growing tensions across Europe meant that there was a suspicion that Germany would increase its military presence and would attempt to invade. So they decided to build this line to defend France from Germany. This was the line that they drew coming up past Strasbourg, up the edge of Germany, and then up through Belgium.
Now, it turns out their view was the Germans, if they're going to invade, would do the same thing as they did in World War I. They would come in over land through the border between Germany and France, because they aren't going to invade a neutral country like Belgium. Nobody's going to try and do that kind of thing. Luxembourg, well, nobody really cares. We've got a line along the edge of Luxembourg just in case, because it's quite small. But more importantly, in World War I, the armies advanced very slowly. They were infantry armies. They would crawl forwards, they would dig new trenches, and then they would sit in trenches, and the two trenches would face each other, and things didn't move very fast. They didn't move very far.
And so the Maginot Line was made of these. Now, these are amazing defensive bunkers. These things are made from masses of concrete. They are essentially immune to attack by infantry. None of them were overrun from the front by infantry. The concrete is thick enough that they could resist pretty much every bomb that existed in 1939. Almost none of them were blown up. The Germans did develop some weapons towards the end of the war that actually did take out a few, but very few of these actually fell. They had underground train networks between the fortresses that ensured they could bring supplies in even if a fortress was entirely surrounded, which was an outstanding solution to the problem of a big infantry army marching over land at you.
Unfortunately, in the meantime the Germans had invented what they called Blitzkrieg, or lightning war, which is mechanized infantry. They loaded their infantry into small tanks and vehicles, and they drove them. When we go back to that Maginot Line, the intention was they thought the Germans would come in from the right, over land. Nobody would come in over the Alps. That would be ridiculous. But north of that is the Ardennes forest. If you're bringing infantry through a forest, it's incredibly expensive, it's incredibly slow. It takes a lot of time to chop down the forest to build your trench, and then the forest gives the opposing side lots of cover to shoot at you while you're digging your trench.
But for mechanized infantry, it's much easier to get through the forest, and they got through it significantly faster. In fact, they came through round into France towards Paris, and the French surrendered even while all of the Maginot fortresses were still well-defended and capable of resisting for several more years. They didn't want to surrender. They were informed their government had surrendered on their behalf and they had to give themselves up.
The Evolution of Compute
The problem here was the French were fighting a war from 1920 against an adversary who had developed new tactics in 1939. This, to me, is something really relevant for us today. In order to talk about that I'm going to talk about the evolution of compute. Simon Wardley is speaking tomorrow at this time actually, in this slot, and I recommend you go. He does an outstanding thing around how evolution affects computers and so forth, and I'm going to address a lot of this. But this is a diagram he came up with. He says that products go through genesis. That is the very beginning of a product: there is only one of it in the world. Your company might be the only company in the world that builds a thing. The Lyons Electronic Office, LEO, was one of the first computers made, and it existed. It was the only one at the time, and then we started getting another only one, and another only one. That's the genesis of a new thing.
After a bit, you go to custom-built. We have five of them in the world. We have 10 of them in the world. Nobody is ever going to need more than 20 computers in the world. We're going to custom-build every single one of them. They'll be similar to each other. And then they start to move into product. Back in the day, with the custom-built computers, when the internet was invented in 1960-mumble, they had to build something called IMPs, Interface Message Processors. These were translators, because every individual computer had been custom built in every university. Each was different from the others and they couldn't talk to each other. The IMPs provided a standardized way for the computers to talk to each other.
But then we start to see the productionization of it, and then eventually things move in towards commodity. Now, this is an arguable point. People will all agree that a gold bar is a commodity, but it is not really ubiquitous. Not all of us have gold bars, but we all look at it as a commodity. iPhones, probably arguable whether it's a commodity or not. Very ubiquitous, but is it actually a commodity? Well, actually there were lots of different mobile phones. But this border here between product and commodity is where things get really interesting. Technologies change. And the reason technologies change is we can start to rely on the ubiquity of that technology.
From on-Premise to Cloud
In computing, we move from on premise to cloud. It turns out I'm getting old. I'm still in denial. In my head I'm still 18. But about 20 years ago is when I started my career. When I started my career, somebody literally turned up with a physical server and was like, "Can you put your software on this? And then we're going to drive it up to Glasgow, install it in the place we're using your software, and it'll run there." It was a physical machine. And it turns out that was how people did this for a long time. People had physical machines. If you've ever worked on a server where you physically had to modify it and then send it somewhere, you'll know what it was like. It turns out it was a pain. I once accidentally ran a program on the server that renamed it, which when it's a domain controller turns out is a really bad idea. And somebody had to drive four hours north to Glasgow to physically log into the machine to rename it back so that the remote management software would work again.
It's a bad idea. So companies started saying, "Well, actually, why are you having to host a data center? Why are you having to host all this stuff?" So we moved to what we called at the time COLO, colocation. That is, I can go find someone who will host a server for me. They will physically buy it. They'll send me the invoice and photos if I want. They'll rack it for me and then they'll connect it to my premises so I can manage it remotely, which is really cool. It's really nice. But then you start saying, "Why am I paying for a whole server if I'm not using it all the time?" So the product evolved again, and we started seeing virtual machines in the data center. Did anybody ever buy a VPS, a virtual private server? A few of you. I wanted to buy a COLO but I couldn't afford it, so what I bought was a bit of a server. And somebody would run a hypervisor underneath with 10 virtual machines on it. If I was really lucky, then one of the other people on the machine would not be doing blockchain mining, or something like that, and actually my server would run reliably.
So these things evolve, and then they evolve to virtual machines at scale. I talked about Simon Wardley and Wardley mapping. Wardley talks about this growth curve. But the interesting thing about this is we can start to think about how things evolve over time. Wardley mapping allows us to think about the value chain. Simon talks about a company he ran to do online photo editing. At the top we have things that are of high value to the organization, the things closest to the customer. Customers don't care about your data center. Sorry to tell you this, but they really don't care what data center you use, whether you use AWS, or Azure, or whatever. They care about online image manipulation, photo storage. That relies on a website, which relies on a platform, which relies on you having compute.
And what we can do is we can start to think about, "How do these things change over time?" Simon comes up with a strategy that we can understand, which is: as things move to the right and get more efficient and become more easily replaceable, I can't tell the difference between a Dell server and an IBM server, because they're running the same operating system, they're running all the same stuff. People move up the value chain and they start saying, "I'll just sell you a server. You don't have to care about it. I'll sell you a DOS compatible server. I'll sell you whatever." These Wardley maps allow us to see these changing landscapes. They allow us to discuss strategies. And one thing that I think is really important is that the map isn't reality. There are issues and errors around it and so forth. It is an abstraction, but it helps us talk about those abstractions. So it helps us understand what's going on.
As Servers Move from Physical to Virtual, Single to Multiple, Practice Evolves
In essence what we're saying is things evolve, things change over time. And if you think about your career over the last 20 years, you'll think this is probably true. But the interesting thing from my perspective and from a security perspective is as servers move from physical to virtual, single to multiple, the practice of how we manage them also evolves. It's not just as simple as my servers become AWS servers, but I do exactly the same things I used to do on the physical servers on these AWS servers. My first time with using AWS, that was exactly what we did. We tried to manage them like physical servers. It turns out that's really hard. Simon calls this the coevolution of product and practice. It looks a little bit like this.
If I have something down here, compute, which has a high mean time to recovery, I build a set of practices that help me scale that thing. We call it scaling up individual things. When I had physical servers, the way to scale the website that ran on them was to put more RAM, more hard drives, more CPUs in the physical server. As we move that behavior across and we start saying, "Well, actually we get commodity servers. I can now buy lots and lots of servers really cheaply, really easily," suddenly I start using a different architectural practice. We use scaling out. This for some of you was back in 2005, some 2010, some of you last week. I apologize. What we get is a new set of practices for how we manage scaling out. Suddenly we need to think of different things. We don't just scale by adding RAM. We don't just scale by making the machine faster. We're now having to manage multiple machines. We've turned our problem into a distributed problem, which everyone will tell you is a much easier problem, until you've done it, and then they'll say, "Ha-ha, gotcha."
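The scaling-out idea can be sketched in a few lines. This is an illustrative toy, not any real load balancer; the names in it (Worker, round_robin) are invented for the sketch:

```python
# Scale out: instead of making one server bigger, spread requests across
# many small, identical, replaceable ones.
from itertools import cycle

class Worker:
    """One small, identical, replaceable server."""
    def __init__(self, name):
        self.name = name
        self.handled = 0

    def handle(self, request):
        self.handled += 1
        return f"{self.name} served {request}"

def round_robin(workers, requests):
    """Dispatch each request to the next worker in turn."""
    pool = cycle(workers)
    return [next(pool).handle(r) for r in requests]

workers = [Worker(f"web-{i}") for i in range(3)]
results = round_robin(workers, [f"req-{i}" for i in range(9)])

# Load spreads evenly: losing one worker costs a third of capacity,
# not the whole site. That property is what forces the new practice.
assert all(w.handled == 3 for w in workers)
```

The point of the toy is the last assertion: no single machine matters, which is exactly the shift that demands a new set of management practices.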
We develop a new set of architectural practices that help us do that. And what's interesting is this is essentially the birth of DevOps we're seeing here. What we then see is people build products on top of those practices, products that codify those practices. We see the growth of things like Puppet, of Chef, of tools that help us manage the practices for managing servers. So we go from pets to cattle. How do we administer the servers? We stop worrying about hard drives, CPUs, power, etc. Those cloud providers give us abstractions. The cloud providers move up the value chain and let us stop worrying about what hard drive is actually in the machine, and start saying, "Well, actually all I need is 20 gigabytes of elastic block storage, and I don't care whether it's network attached storage, or on the machine, or whatever." I might care about properties, "I want it high-I/O," "I want it low-I/O," "I want it reliable," "I want it unreliable," but I don't actually care where it is physically located.
We stop worrying about whether a hard drive fails in a server. We stop worrying about some of those things, and it results in changing practice. It results in the growth of DevOps, of SRE. And importantly, it results in a change in the way that developers consume that operational practice. I worked in a team where we had a set of systems admins and I wasn't allowed to touch the servers. I had to hand them a pristine JAR file and they would install it into the web application server. And it would run, and I was never allowed to know the name of the server, whether it was running, how much RAM was in it.
As DevOps changed, actually, my understanding of how I consumed it changed. I no longer cared about the brand of the server. I still don't care about the brand of the server, but I do now want more control over it. That drives us towards things like Kubernetes, it drives us towards Serverless. This is what's changing now in operations, is the fact that we're moving to these new models.
What Does This Mean for Security?
Going back to the original thing, what does this mean for security? Well, most security people haven't really been involved in any of these conversations. But if you think about it logically, the way we think about security just has to change, because the coevolution of the practice and the product means that they have to change together. If we keep trying to use the old practices on the new products, it won't work. And security practices are evolving. They're going through this phase. They are moving up the value chain. They're moving upstream.
Traditional security is all about assurance. It's all about, "Where will my data sit?" I had a lot of conversations with people across the government recently about whether your data is sitting inside the UK or in Europe, because it turns out in four weeks that might matter an awful lot to a bunch of people. But it turns out that an awful lot of traditional security people are very worried about where the hard drive is physically located. If it's sat in somebody's desktop in a London flat, they're much happier about it. But it turns out if you're buying that hard drive to be managed by a company run out of Luxembourg, is it still insourced or is it outsourced? Which one is it? If the hard drive's in the UK but the people who manage it are based in Hong Kong, how do you worry about it? A lot of security people are like, "But I care where the hard drive is. That's the thing that matters." Because they've missed that, with the adoption of cloud techniques, that's no longer the thing that matters.
The other question is, "Where does the data go?" How does it transit? Where does it move? Is it encrypted when it goes there? Who has access to it? Where does it get put into logs? All those kinds of things. These are concepts that really matter when you're worrying about computers, when you're worrying about data centers. It works when you have individual servers, but it doesn't work when you have modern cloud. Or rather, to be more precise, it doesn't work the same way when you're on modern cloud. The concern is actually still pretty much the same one. You're worried about data and where it's going, but you're no longer worried about which physical server it's on. Is it on Rabbit, or is it on Tortoise, or is it on Hare? Those of you who remember, we used to name our servers after things. One organization I worked at would name them after things in the kitchen, and Tap was the last thing to die. But we had Tap, we had Fridge, we had Cooker, etc. People had cute naming schemes, because their servers lived for a long time.
At the Open Security Summit in 2018, some people took some time to apply Wardley mapping to security practices. They said, "What is happening with security practices? Where do they sit at the moment?" You can go find this on GitHub. It was a really good bit of work. The Open Security Summit will run again this year, and hopefully I'll be going, and we'll add more to it. I think it's a really interesting piece of work. I'm going to quote another great philosopher, Wayne Gretzky, the ice hockey player: "Skate to where the puck is going, not where it's been." He's actually quoting his dad, but people tend to forget that and think it was Wayne Gretzky, who is clearly a cooler ice hockey player than his dad. So, that's what they think. But his thing was, how do you be such a good ice hockey player? And he's like, "I don't skate to where the puck is. I don't skate to where everyone is. I skate to where it's going to be. And then when I'm there, so is the puck, and then I can take it and use it."
Let's think about it. For security, where was the puck yesterday? In essence what I'm saying is, what are the solved problems? What are the things that we already know are commonly solved the same way in every organization, in every place, pretty much? You might have unique and special requirements. Some people do have special requirements. But generally, we're all solving certain problems the same way. These are productionized processes. These are processes that have moved to the right and are in the componentized, productionized stage. I can buy these processes. I can get other people who understand these processes. These are things like secure software development life cycles, the assurance of suppliers, network assurance, hardware assurance. To be honest, pretty much everyone is doing these things in about the same way. They're fairly consistent.
In fact, all cloud customers have mostly the same concerns about their cloud supplier. It doesn't matter whether you're the UK government, whether you're the tax office, whether you are running a startup, whether you're working in Ocado, in Sainsbury's, whether you're working in a publisher. You still care that somebody who works for a cloud company can't just come in drunk one night and delete all your data. You care, potentially, that those providers have audit controls that check that their people can't just do anything they want to do, that they can't look at your data willy-nilly because they're interested in doing so. You care that the cloud providers provide you with a level of assurance that they can't do that.
If we were in software development, we would often repeat the mantra, "Buy, don't build." Security doesn't say this very often for some reason, or at least not in the same way. But actually, for a lot of these things, compliance by certificate is probably the right way to do it. You should care that your cloud security provider has an ISO 27001 certificate. It might be, and I have opinions on it, that that's an entirely useless certification process to go through, because all it proves is that you do changes the same way every time. It doesn't mean your changes are good, and it doesn't mean they were well-intentioned. It doesn't mean that you have the right motivation driving them.
But, it does give you some confidence that random people don't turn up and just change things on the servers. It gives you some confidence about the change process. You might care about the cloud security alliance certifications. Far more important if you're buying cloud servers. You might care about SOC, FISMA, HIPAA, the things that ensure that data is maintained and separated from customers, that you can't leak individual patient data in health providers. If a cloud provider has these certificates, they're demonstrating a level of basic assurance that we should just be able to buy. And there is a process by how we buy that. If I go and buy a car, I might want to buy a Ford, but it turns out there's a big difference between a Ford Escort and a Ford Transit. One is a big van, one is a small car. I don't want to buy the wrong one. But I can have some confidence if I go to a Ford supplier and they give me a warranted Ford that I know that I'm getting a car that is going to go, and it isn't going to break down five miles down the road.
Where’s the Puck Today?
That's where we were yesterday. How about where's the puck today? The puck today is in this middle stuff. It is custom-built. It is early products. It is when you're buying a product from a startup that is not the same in every organization, but similar. It is where you have to build your own security process because you don't know how other people have done it. You go to a conference and people say, "We did our thing this way." And you go, "Oh, that's a good idea. We're going to do it that way as well."
Over time, our industry will productionize these and we'll find ones that win. If you came to QCon five years ago you would've heard cloud talks about whether you should be running OpenStack, whether you should be running OpenShift; the number of PaaS providers was huge. Today, everyone talks about Kubernetes, because essentially it won the war. It's become the standardized one. That is what happens as things drift to the right. Right now a bunch of security practices are in this place. They are custom per organization. They've been custom built. And they are things like continuous integration, continuous deployment, and DevOps. Those practices are not standard. I cannot find a DevOps certificate that I will trust that tells me that you do DevOps the same way another organization does DevOps.
These things matter because they give us security properties that we can rely on, and those security properties are the same. If you do continuous integration and continuous deployment, I guarantee that patching is a conversation that you can have in a really happy way, in a way that people who don't do this can't. The question is, "How quickly can you patch?" Because it turns out that almost all of the 48% of breaches caused by remote attackers in the last year involved out-of-date software. Patching is the number one thing that will prevent attacks on your service today. If you can patch straight away, then you will be in a significantly better place.
The NSA announced a statistic last year, I'd have to double check the date, that on the unclassified and classified military networks that the NSA protects, in two years there had been no zero-days used. Every single one of the vulnerabilities used on military networks against the U.S. by a whole variety of attackers was something that could have been patched, but wasn't. If you can patch within hours, you are in a position where you are better than the U.S. military. Admittedly, that's not necessarily a high bar, but you are in a better place.
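The core of "how quickly can you patch?" is knowing what you run versus the latest patched release. Here is a hedged sketch of that comparison with invented data; a real pipeline would consume a vulnerability advisory feed rather than a hand-written dictionary:

```python
# Illustrative only: flag packages running a version older than the
# patched release. Package names and version tuples are made up.
def outdated(installed, patched):
    """Return packages whose installed version is below the patched one."""
    return {pkg: (ver, patched[pkg])
            for pkg, ver in installed.items()
            if pkg in patched and ver < patched[pkg]}

installed = {"openssl": (1, 0, 2), "nginx": (1, 17, 0)}
patched = {"openssl": (1, 1, 1), "nginx": (1, 17, 0)}

# Only the package behind its patched release gets flagged.
assert outdated(installed, patched) == {"openssl": ((1, 0, 2), (1, 1, 1))}
```

If this check runs on every build in a CI/CD pipeline, "can we patch?" stops being an annual audit question and becomes an hourly, automatable one.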
DevOps gives you a whole bunch of capabilities that give you security benefits around the management of your service. Do your servers live very long? Because it turns out if I'm an attacker and I can drop some malware on your server, and it lives for three years because that server's never been turned off and nobody dares to turn it off, there's not much you can do about it. If on the other hand, your servers are recycled every hour because you do deploys, and actually that server's going to be destroyed and a brand new one's created, it turns out getting malware onto them is much harder to do. DevOps gives you a set of security properties and security benefits.
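That property can be made concrete with a toy calculation. This is purely illustrative arithmetic, with an invented function name and numbers: if hosts are rebuilt on a fixed cycle, the deploy cadence caps how long anything dropped on a host can survive.

```python
# Illustrative sketch, not a real tool: malware dropped on a host lives
# only until the next rebuild of that host.
def max_dwell_hours(rebuild_cycle_hours, infection_hour):
    """Worst-case hours malware survives on a host recycled on a fixed cycle."""
    return rebuild_cycle_hours - (infection_hour % rebuild_cycle_hours)

# Hosts recycled every hour: at most one hour of dwell time.
assert max_dwell_hours(1, 0) == 1
# A pet server left running for three years: up to three years of dwell time.
assert max_dwell_hours(24 * 365 * 3, 0) == 26280
```

The arithmetic is trivial, but that is the point: the security benefit falls out of the operational practice for free, without any dedicated anti-malware effort.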
How secure is your code? If you're doing continuous integration and continuous deployment, you have tests, you have code reviews and pull requests, you have people checking the code. If you talk to security specialists who worked in the old days, they will tell you, "Oh, the thing you can do is get somebody to do a security review of your code." It's 100,000 lines of code you're dropping this year in the single release that you're going to do, and we'd like somebody to review the whole lot and make sure there are no security vulnerabilities.
When it's a seven line pull request, it's much easier to review it for security vulnerabilities. You get a second pair of eyes on every piece of code released. I worked at GDS on Gov.uk, and every single piece of code we released over six years had a second pair of eyes reviewing it. It doesn't mean it had no security vulnerabilities, but it means the chances that we caught them beforehand went up. It's not 100% foolproof, but we're much less likely to ship a vulnerability than an organization that just pushes code regularly with no code review.
Things like staff identity and single sign on are important problems that can be solved, and every organization has solved them in different ways. You can buy solutions, but often they're slightly shonky, or they all rely on SAML, which is probably one of the worst protocols ever created. Many of them rely on single sign on systems that are shonky and don't integrate with your software-as-a-service solutions. You might have Active Directory on premise, but can you integrate it with Trello? Oh, I'm not sure. Can you integrate it with your GitHub sign on? Maybe if you pay twice the price for every GitHub account. Can you integrate it with Confluence, with Jira, with everything that you use? Single sign on is a problem that every organization is solving slightly differently, and we're learning what good looks like. Some organizations have done it really well, and I think you'll hear a talk later about how to do it really well. BuzzFeed built an outstanding single sign on proxy. But everyone's doing it differently. We should be learning from the people who've done it well, because they've done something that can be productionized, a set of behaviors that can be productionized.
Zero trust networking. Has anybody here read the BeyondCorp paper by Google? Has anyone heard of zero trust networking? Interesting. Those of you who've heard of zero trust networking but haven't read it, Google the Google BeyondCorp paper. That's a really weird sentence; you can't really use Google as a phrase, and a verb, and a noun in the same sentence. The BeyondCorp paper is an outstanding paper that talks about why it is that Google went to this model of zero trust networking, of saying, "Just because you're on the network doesn't mean we trust you in the slightest." In fact, in many cases it probably means we should trust you less, I would argue. Most of the networks I've seen across government have been compromised at some point in the past. Being on the network is not a good indicator of trust. Owning an assured device is. Having an identity that can sign onto single sign on is. Having access to certificates that are on a device that you know was issued to your staff is a good sign of trust. Zero trust networking allows us to do that. But again, we know Google have done BeyondCorp. We know other organizations are doing zero trust networking, but there's no standard for that.
This is the stuff that we see. It's in early custom-built, early product positions. You can buy zero trust networking, I'm sure. I've not looked, but I'm sure if you went to the right company and said, "I'd like to buy one," they will. In the same way, you can go buy DevOps in a box. I'm sure it'll give you about as much value as DevOps in the box will. But you can understand from people who've done this, custom-built it. If you want to custom build your own, you should follow the same things they do. Go to conferences, learn from people who do it.
Where is the Puck Going?
But more importantly, what I kind of want to tell you, and the thing I guess you are all really interested in is, where's the puck going? What's the interesting stuff that is actually happening in security, and how does that change stuff? And it's all this stuff on the far left. It's the stuff that is genesis, where there are only one or two examples in the world at the moment of people doing it. It's the stuff that is custom built in every organization, but there aren't many organizations that have the resources, time, and energy to build it. This is things like adversarial thinking. And adversarial thinking is probably the most productionized of these I can think of. This is MITRE's ATT&CK (it's pronounced "attack," but it's spelled ATT&CK, because somebody thinks that ampersands sound like an A, I don't know). This is a framework for understanding how malware operates: how do adversaries find your network? How do they get on there? How do they hide the malware they're using? And it gives you a whole set of standardized language so you can say, "We know that North Korean actor X does this thing, and we know that Russian actor Y does this thing, and we know that British actor Zed does this thing." But actually, they're all doing the same kind of thing, and we can start tracking it.
We can also start tracking what we call tactics, techniques, and procedures. We can say, "We know that this is the same actor doing the same thing, the same adversary, because they behave in the same way each time." They might change their software slightly, they might encrypt it slightly differently, but they always have an in-memory encrypter that sticks stuff into the registry, and that's a sign of which adversary it is. And the important thing with adversarial thinking is that adversaries have goals. They care about doing something. They also have restrictions. All of the marketing firms in security would have you believe that nation states will turn billions of dollars of effort at you as a small startup because they're mean and horrible, and that if you buy their magic black box and stick it on your network you'll be protected against that. They all want you to believe that. The sad truth is no adversary has unlimited funds, time, and energy. Even those who work in governments have to work out what the cost is of carrying out an attack.
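As a rough sketch of what tracking adversaries by their techniques can look like in code, here is a minimal example. The technique IDs follow MITRE ATT&CK's naming style (T1055 is Process Injection, T1112 is Modify Registry, and so on), but the actor profiles and mappings are invented for illustration:

```python
# Illustrative sketch of TTP-based tracking: match observed behaviours
# against known adversary profiles. The actor names and their technique
# sets are made up for this example.
PROFILES = {
    "ActorX": {"T1055", "T1112"},  # process injection, registry modification
    "ActorY": {"T1566", "T1059"},  # phishing, command/scripting interpreter
}

def likely_actors(observed):
    """Rank known actors by how many observed techniques they share."""
    scores = {actor: len(ttps & observed) for actor, ttps in PROFILES.items()}
    return sorted(scores, key=scores.get, reverse=True)

# An in-memory encrypter that writes to the registry looks like ActorX:
print(likely_actors({"T1055", "T1112"}))  # ['ActorX', 'ActorY']
```

Real attribution is far messier than set intersection, but the value of the shared vocabulary is exactly this: it turns "they behave the same way each time" into something you can record and compare.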
One of the ways that you can think about this as an organization that we're seeing in organizations, is anti-personas. Who here has seen personas when you're building software, personas of your users? Hands up if you've done personas? Not as many as I thought. So personas are an outstanding design artifact. When you're thinking of designing your service, your system, your tool, you think of, "Who are the people who are going to use it, and what kinds of people there'll be?" There will be Jeff the sales manager who's short on time and just wants to get the invoices sent out. There will be Gary who's the accountant and wants to make sure that the invoices all add up and so forth. These personas help you work out who wants to use your software.
We use anti-personas in some organizations. I've worked with Universal Credit, which is a big program with lots of money; it meant that we could do all kinds of fun things. Anti-personas help us work out who wants to break into the system. For something like Universal Credit, a benefits program, it's: who wants to commit fraud? It turns out everybody, from little old Ethel who hasn't told you that she's got five more hours a week, and it turns out that's fraud, but we call it error because we're polite and we're nice. We're like, "We think you made an error in submitting your benefit claim." All the way up to people from foreign nations who actually want to steal hundreds of millions, billions of dollars. Organized criminal gangs.
And we build these anti-personas. This is the example I normally use for people, because it normally gets a laugh. But it's fine, you don't have to laugh. Han Solo. He's motivated primarily by money but also works for the Rebel Alliance. He's capable of using common tools and modifying tools on the fly, doesn't want to be caught, and takes effort to avoid head on confrontations. It gives you something to talk about when you're building your systems, when you're building your services, to think about, "Who is likely to attack me?" And the ATT&CK framework helps you understand what kinds of options are open to them.
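An anti-persona like this can be written down as a lightweight structured artifact that sits alongside your ordinary personas. A minimal sketch in Python (the fields and the Han Solo entry just restate the example above; this is not any standard format):

```python
from dataclasses import dataclass, field

@dataclass
class AntiPersona:
    """A lightweight design artifact describing a likely adversary."""
    name: str
    motivation: str
    capabilities: list = field(default_factory=list)
    constraints: list = field(default_factory=list)

han_solo = AntiPersona(
    name="Han Solo",
    motivation="money, plus loyalty to the Rebel Alliance",
    capabilities=["uses common tools", "modifies tools on the fly"],
    constraints=["doesn't want to be caught", "avoids head-on confrontation"],
)

# A threat-modelling session can walk through each anti-persona in turn:
print(f"{han_solo.name} is motivated by {han_solo.motivation}")
```

The point is not the code; it's that the constraints field exists at all, because as noted above, no adversary has unlimited funds, time, and energy.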
Red teams. These again are more common in bigger organizations, but they're expensive and so not everyone is doing them, and they're all done differently. For those of you who haven't seen red teams or done them, it is essentially internal pentesting. Although red team people will get quite upset when you call it that, because they have a dim view of pentesters. But red teams are people inside your organization who fake attacks on your organization, or conduct real attacks on your organization, to find out if your defenses actually work. They're constantly probing your organization to find out, how good are you at defending your organization?
We see threat hunters and threat hunting. This is the buzzword of the year. RSA is going on, I think this week, possibly next week. Almost certainly you can buy threat hunting in a box, threat hunting as a service. Threat hunting is essentially assuming that somebody's already breached you and going to try and find out where they are, rather than waiting for them to attack you. You just go, "Well, they're probably here already. Let's go look at various things." They work very similarly to red teams in that they come up with scenarios they think may have happened, and then they go look for evidence of those scenarios having happened. Again, using the ATT&CK methodology and stuff like that.
DevSecOps, I think, is the worst named thing in the world. Partly because I was a big fan of DevOps, and to me it ends up becoming DevQASecBA-etc.-Ops. Adding things into DevOps doesn't make it better. "We should all work together" is less catchy, but it's probably what we mean by it. But DevSecOps is beginning to bring the DevOps methodology, the way that we think about doing DevOps in organizations, into the security world, which we cover as security as code. I know very few people who are actually doing security as code: their security appliances and security systems are written by them, as things they can encode. Their code is visible, they can audit it, they can check it, and because they deploy their security appliances through a build process, they have confidence that security is being applied in the way they intend. Very few people are doing this, as far as I can tell. I've found individuals around the world who are doing this, and they are fascinating individuals and will tell you at length how valuable they are. But this is not a thing you can buy. This is not a thing that is easy or simple to do. This is very much the bleeding edge.
The one that's slightly more common is compliance as code. There's a reason for this being slightly more advanced, which is it turns out cloud platform providers have a lot of advantage here and want you to do compliance as code, because it means you'll use their platforms more. So this is an example from 2015 from AWS. Every time you adjust your IAM policy, you can get an email alert to your security team. Because adjusting your IAM policy should be quite unusual, and actually getting an alert that somebody adjusted it can tell you whether that IAM policy is correct, whether the person who adjusted it is the right person, and make sure you know it.
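The kind of alert described here can be sketched as a simple filter over CloudTrail-style event records. The event names below are real IAM API actions, but the sample events are hand-written for illustration; in practice you would wire this up with CloudWatch Events/EventBridge and SNS rather than a script:

```python
# Minimal sketch: flag IAM policy changes in a stream of CloudTrail-style
# event records, so the security team hears about something that should
# be quite unusual. The sample events are invented for this example.
IAM_POLICY_EVENTS = {
    "PutUserPolicy", "PutRolePolicy", "AttachRolePolicy",
    "AttachUserPolicy", "CreatePolicyVersion", "DeleteRolePolicy",
}

def alerts_for(events):
    """Return one human-readable alert line per IAM policy change."""
    return [
        f"ALERT: {event['userIdentity']} called {event['eventName']}"
        for event in events
        if event["eventName"] in IAM_POLICY_EVENTS
    ]

sample = [
    {"eventName": "RunInstances", "userIdentity": "ci-deployer"},
    {"eventName": "AttachRolePolicy", "userIdentity": "alice"},
]
print(alerts_for(sample))  # ['ALERT: alice called AttachRolePolicy']
```

The useful property is exactly what the talk describes: routine activity passes silently, and the unusual event surfaces with enough context to ask whether the right person made the change.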
Cloud configuration as code. The ability to say, "I'm going to use Terraform, I'm going to use CloudFormation," whatever it is, "stored as code." Which means your pull requests exist and act as an audit trail. Why did you decide that you wanted to spin up those machines in China? Well, it turns out you have a pull request that tells you exactly why you did it. Again, that is the development of this emerging practice. But more importantly, the cloud providers really want to build these platforms, these things above it that encode those practices as standard practices, and sell you something that you will pay money for. So you'll find things like AWS Systems Manager. You'll find things like Azure Policy. You'll find things that are trying to encode this compliance as code, but it's still fairly new. It's still quite hard for security teams and organizations to get used to and understand.
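A compliance-as-code check of the kind described ("why are there machines in China?") can be as small as this sketch. It assumes you have already flattened your Terraform state or cloud inventory into a list of dicts; the resource names and regions are made up:

```python
# Hypothetical compliance check: fail the build if any resource sits
# outside an allow-listed set of regions. Everything here is illustrative;
# a real check would read Terraform state or query the cloud provider.
ALLOWED_REGIONS = {"eu-west-1", "eu-west-2"}

def region_violations(resources):
    """Return the names of resources deployed outside the allowed regions."""
    return [r["name"] for r in resources if r["region"] not in ALLOWED_REGIONS]

estate = [
    {"name": "web-server", "region": "eu-west-1"},
    {"name": "mystery-vm", "region": "cn-north-1"},
]
print(region_violations(estate))  # ['mystery-vm']
```

Run in a pipeline, a non-empty result fails the build, which is precisely the "dashboard that shows the estate is where it should be" that makes security people happier.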
Final Thoughts
That is, I think, probably one of the fastest whiz-throughs of quite a lot of technology that you're going to get all week. But I've got a couple of final thoughts, or at least one final thought I'd like to leave you with. Because what I've shown you is a changing framework that, to many people, should be quite scary. If you are used to managing servers, if you say, "I want to make sure I have two firewalls back to back of different brands, because that way no two of them will have the same vulnerability," or, "I want to make sure everyone's on a VPN because then everything will be secure," this world is terrifying to you, because everything is moving faster than you can. And I have yet to meet a security team who does not essentially think their job is to hold the fire extinguisher. Their job is to put out fires nonstop, all the time.
You remember the resistance to DevOps? From systems administrators saying, "I would do that Puppet thing if we had time, but we're too busy patching your servers and making sure they're staying up under all the load to do that stuff." It took a long time and it scared a lot of people. A lot of people got hurt in that process. Security, I would say, is in the same place as operations was about 10 years ago when DevOps started coming around. I know from personal experience some of the fights, some of the pain, some of the difficulty that caused for systems administrators at the time, who had to learn that their role was not necessarily systems administrator any longer. Their role was changing. They could be a DevOp, and nobody knew what that was, but they could definitely be one. They could become a site reliability engineer. Nobody was entirely sure what that was if you weren't in Google, but you could become one. You could get all kinds of new job titles, but would that mean you wouldn't get the same pay? Would it mean people would make you talk to customers? Would it mean that you'd have to write code? You didn't want to write code because it wasn't Perl. It wasn't as good as it could be.
How do you get value in these organizations? The thing I learned from DevOps, the thing that I think is still incredibly important in DevOps, is empathy first. The thing that you need to do when you are talking to your security teams is have empathy first. I would encourage you to remember the Prime Directive from Agile Retrospectives: "Regardless of what we discover, we understand and believe that everyone did the best job they could, given what they knew at the time, their skills, their abilities, the resources available, and the situation at hand." Because all of those security people are in very difficult situations, and we need to have empathy with them and come to them not just with a, "You're no good and I need more." But to come to them and go, "I understand things are difficult. Here's how I think we can save you time. Here's how I think we can make your life easier." Compliance as code enables you to go, "I can give you a dashboard that shows the AWS estate is not running VMs anywhere else in the world." That's a thing that makes security people feel happier. I mean, it'll take them some time to adopt it. But if you go with empathy, instead of going argumentatively and saying, "Why aren't you in the future?", you'll have a much better response.
I am Michael Brunton-Spall. If you have any questions, I think we have some time left.
Questions & Answers
Moderator: Excellent talk. Just to plug a couple of the talks that teed up from here. We're going to have Shraya from BuzzFeed talk about SSO, which is a zero trust platform open source solution, later in the track. And then we have Gareth Rushgrove to talk about policy as code. So great tee ups for it. We have a few minutes here for questions. With this speed, we do have time. What questions come up?
Participant 1: Great talk by the way. One thing which always moves me is how do we bring in the security checks early on in the pipeline? Are there tools around it? And if you want to put those checks in place very early on.
Brunton-Spall: If I just repeat the question so everyone can hear, and I'm going to rephrase it and you can confirm whether I'm right or not. The question is, how can I put security checks into my build pipeline? What tooling exists to do that as it stands?
Participant 1: Yes, very early on.
Brunton-Spall: Early on in the build pipeline. There are lots of tools out there to do these kinds of checks and these tests. I'm a big fan of BDD-Security, which is driven through writing Cucumber tests and whole-system tests to check your service runs the way it should. There are some simpler tools too, things like Snyk, things like Dependency Check. These tools will help you improve the quality of your code. Again, I liken it very much to the early days of operations. Or actually, I think QA is a better example in this case.
I worked for The Guardian newspaper quite a long time ago, 12 years ago, and we were going through a revolution in delivering on a regular basis. And the QA tools we had at the time were things like Selenium, back when it first existed, and WebDriver. They created tests that gave us some value but they were buggy, they were hard to maintain. It was incredibly hard to find people who knew how they worked. And actually what they did was they encoded current good practice into the computer. What happened was people who spent a lot of time working on that suddenly started saying, "Oh, I think there's a better practice." And they started building a second generation of tools or changing those tools to make that process better. We're at that tipping point.
So a lot of the tools out there are slightly clunky. Take ZAP, for example. Security people, by and large, don't program. The tools they use aren't designed to be driven by programs. The ZAP API is not fun to program against. Trying to build something that can automatically drive it is incredibly hard, and you are fighting the tool the whole time. That will get better over time. And what we're seeing is that's coming out of teams who are doing this at scale, so Netflix, Intuit, Amazon, Google. Organizations that are doing this stuff are releasing small tools as part of their build pipeline. But the starters, the easy ones to put in, are dependency checking tools like Snyk, Dependabot, stuff like that, because they're trivially easy to add. Linting tools that will check your code for common errors. There are some plugins for IntelliJ and Eclipse and things that will do that as well. They'll look for common security patterns. That's very simple, early stuff. In the build itself, I'd say I like BDD-Security. There's a tool called Gauntlt. There are some others. They're good to try and use. They're hard work. You have to really invest in them, and I think that's still difficult for organizations.
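As a toy illustration of what dependency checkers like Snyk or Dependabot do conceptually, here is a sketch that compares pinned requirements against an advisory list. The KNOWN_BAD entries are invented for the example; real tools consult live vulnerability databases and understand version ranges, not just exact pins:

```python
# Toy dependency check: flag pinned requirements that match a known
# advisory. The advisory list below is hand-written for illustration.
KNOWN_BAD = {("requests", "2.19.0"), ("pyyaml", "3.12")}

def vulnerable(requirements):
    """Return the 'name==version' lines that match a known advisory."""
    findings = []
    for line in requirements:
        name, _, version = line.partition("==")
        if (name.lower(), version) in KNOWN_BAD:
            findings.append(line)
    return findings

print(vulnerable(["requests==2.19.0", "flask==1.1.2"]))
# ['requests==2.19.0']
```

A build step that fails the pipeline on any findings is the "trivially easy to add" starting point: no fighting a clunky tool API, just a list comparison early in the build.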
Participant 2: I'm quite happy to see, as a software engineer, that we are moving more and more software engineering practices into a wider range of things that were outside the realm. The question that came up earlier - I mean, an experience I had recently with some outsourced developers is that there's still a lack of awareness of security among developers. Things like everybody had access to a database. There were no roles introduced to limit access to databases. Very simple things. Defense-in-depth approaches. Introducing roles to code parts and thereby limiting access - not every bit of code needs to be able to write to the database. Very often it just needs to read. So these kinds of practices are just lacking all over the place. A bit of a comment there about the lack of people seeing where we are or where we should be. This was a cloud development thing.
Brunton-Spall: Sorry to rush you. Is there a question in there?
Participant 2: No. But it's just a slight extension of what you said there. We virtually had no VMs in the application; we used largely PaaS components. But the IT team was very concerned about securing any VM that was in there, and spent nearly 90% or more of their effort on just securing the VMs and not the rest of the application. This awareness of security is still lacking. I think it was quite useful to have you talk here.
Brunton-Spall: On security awareness, I absolutely agree. Security awareness is really hard. I would also say, you can attend any talk today. Cultural awareness on how to manage teams is very poor for software engineers. Operational awareness and how distributed systems actually work and what CAP theorem means is quite poor across organizations. I don't think security is particularly special. I think we like to hold up security as being special for a whole bunch of reasons. But it is no different to quality, performance, operability, observability, and all of the other things that come with it. As an industry, we have to just get better as software engineers, and we have to look at how we make poor software engineers balance 400 different concerns at once, when all they want to do is write code.
Participant 3: Something I've noticed with DevOps is that where I work at the moment, it's mostly been ops people sitting with the devs, rather than everyone doing DevOps. And I could see that security people sitting with the teams would be a better place than where we are at the moment. What do you think the possibility is of us moving towards the world where actually DevSecOps is the develops understanding and building with the security in mind?
Brunton-Spall: One of the reasons I used the Wardley mapping is this view that things do evolve, and practices evolve over time. It's the guy who wrote "Neuromancer". Bruce - no, not Bruce Schneier. Who wrote "Neuromancer"?
Participant 4: William Gibson.
Brunton-Spall: William Gibson said, "The future is here. It's just unevenly distributed." And actually that's true of the evolution of lots of products. I've worked in DevOps teams who are amazingly brilliant, and I've worked in DevOps teams where it is a systems administrator who just got told, "Tomorrow, your title is DevOp. And nothing is going to change, but you are now a DevOp," which is very exciting for them I'm sure, and they love that experience. But it's here and it's unevenly distributed, and that's true of a movement that's been going 15 years. Security is only just starting in this place.
What I find interesting with security is I think there's a parallel growth of interest in security. You see it in the work Jessie's been doing with Docker, and in fact the work Justin's been doing. Docker as an organization has invested a lot in security in a way that's leaving a lot of traditional security behind. That's something that, again, I think we need to have empathy about the fact that it's here but it's very unevenly distributed. Shannon Lietz who's the CSO, I think, for Intuit had a thing. She called it a 1, 2, 10 ratio. She said, "If you take a security person and try to teach them dev and ops, they'll take a certain amount of time. If you take an operations person and try to teach them the dev and the security side, they'll learn it twice as fast. If you take a developer and try to teach them the same stuff, they'll learn both ops and security 10 times as fast."
I think there is something about, we train developers in a world of constantly changing environments. That's what Agile kind of makes our developers work in, constantly changing requirements. How do you learn new technologies? I started my career 20 years ago. I started programming in C, and then I was in C++, and then I had a job programming Java, and then I programmed Scala, and I've programmed Python, I've programmed Ruby, and I've programmed JavaScript. And that's in a 20 year career. Most operations people have not been through seven iterations of that in the same time. Developers are forced to learn new technology all the time, which means they're generally in a better position to learn security principles and operations principles faster than people who aren't in that position. I'm slightly nervous about suggesting that developers are better than those people. They're not at all. It's just that the way that we train them encourages them to learn that stuff. So that hopefully answers your question.
Moderator: I think we have time for one more question actually. We're a little late for the five minute mark. I've got one actually. What are the role models that you have, that you see? Which companies are doing it, you think?
Brunton-Spall: I think it's a really hard question. There are companies that do the PR of security really, really well, versus companies that are actually doing it well. Companies that sit and do it quietly well are often the ones you don't actually hear about as much. Google has had very few breaches publicly, Google Plus aside to a certain degree. But actually considering how many people's data is in there, how many people access Gmail, the work they've done on advanced threat protection for your Gmail is outstanding. It is world leading in that area.
I met the team who build Azure. And what's interesting is Microsoft have a published software development life cycle, their secure software development life cycle. The team who built Azure don't follow it at all. They worked out very quickly that it doesn't work for building things at speed, at pace. Lots of people who use Microsoft products, who are enterprise shops, want to operate at a slower pace, but internally they're building a new version of that. And actually, the work that came out of Microsoft with threat modeling is absolutely outstanding.
But Docker is, I think, one of my favorites for talking publicly about security at the moment, for recognizing, possibly not early enough, no offense, that Docker images were now a massive security issue for people. Actually, lots of people got obsessed with some very odd bits of Docker. They wanted to know, "Can I break out of the Docker container and infect other Docker containers?" Which is a risk and a worry, but it's old thinking. The much bigger worry is, "Is somebody going to poison your Docker container by uploading a bad base image?" And actually, Docker have done a bunch of work on how you assure base images, how you have some trust that the base image hasn't changed since you downloaded it from Docker Hub. And that stuff is really good. But also I know Gareth, I know Justin, I know other people in Docker have done a lot of work to try and think through the security properties that Docker gives you, and I think those are really good.
In the end, Intuit do amazing work. They're not very public about it, but Shannon is a genius as far as I'm concerned and has done a lot of the red team exercises and, "How do you build security teams?" and so forth. Government is peculiar. There are patches of brilliance. Often they won't talk about it. It's very hard to get people from GCHQ to come give public talks. They don't like being identified as a spy and people knowing their name. It's really weird. But actually they do some really interesting stuff and they use a lot of open source technologies. They use a lot of tools and stuff that we use as well, and they face a lot of the same problems. And so there's a bunch of interesting stuff.
There's an outstanding paper by a guy called Reed, that is called "Boiling Frogs," that talks about transformational change inside GCHQ, and how to take a set of people who are not allowed to see the internet on a daily basis. They sit in a special room with no phone, no internet, no Facebook, no Twitter. And how do you change that culture to become one of openness and so forth? It's an excellent paper and it's on the GCHQ GitHub repo. It's worth reading for culture change as well.