


Programming Your Policies: Exploring Open Policy Agent and More


Summary

Justin Cormack discusses how to deal with policies, what the business drivers are, how it affects developers, compliance and security departments, and the cultural and communication changes there.

Bio

Justin Cormack is the CTO at Docker, working on unikernels.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Justin Cormack: I'm Justin Cormack. I'm the CTO at Docker. I'm also on the Cloud Native Computing Foundation Technical Oversight Committee. I'm going to be talking to you about policies and making policies programmable. It's great to be in this track about configuration beyond YAML. We're going to talk about a bunch of things that you can do when you have some YAML and you can go beyond just using the YAML. It's good to be talking about this.

What Does Policy Mean?

What do we actually mean by policy? I think this is a really important question, so that we're all clear. The simplest policy is something like, who can run this program? Another policy might be, who can make this API call with these parameters? A more complex case is, who can make this database query, and are they allowed to see all the results? These are the kinds of things we mean: it's really about access control. Who can do things, rather than defining what we do; who can do it, and who's allowed to see the results. The "who" can be a person, but effectively it's generally another computer program, perhaps calling on behalf of a person. There are also important complexities around how we know who it is.
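A policy decision like those reduces to a function from subject, action, and resource to allow or deny. A minimal imperative sketch (all names here are invented for illustration, not from the talk):

```python
# Policy as a decision function: subject, action, resource -> allowed?
# Hard-coded conditionals: exactly the imperative style discussed later,
# which becomes hard to audit as the rules grow.
def is_allowed(subject: str, action: str, resource: str) -> bool:
    if resource.startswith("db/") and action == "query":
        # Only these subjects may query the database and see the results.
        return subject in {"analytics-service", "admin"}
    if action == "run":
        return subject == "admin"
    return False  # default deny
```

Even at this size, answering "who can do what?" means reading every branch, which is the maintenance problem declarative policy languages aim to remove.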

History of Security in Computing

I'm going to start with a little bit of history. I like to ground what we're doing in history; it helps me think about where we've come from and where we're going. We're talking about policy and security, and authorization, authentication, those things. We have to start back in 1972, with the first real computer worm, a program called Creeper that sent a scary message onto your terminal.

The first antivirus program, called Reaper, was created to get rid of it, because it was spreading everywhere and being annoying. This was the beginning of the world we're in today, where we have strong access control. There was actually quite a big backlash against access control, and against security at all. One example is Richard Stallman, who famously didn't like passwords and thought computers should be available to anyone. For a long time, he didn't have a password. Then everyone knew that his username was RMS and his password was also RMS, so if you wanted to access lots of computers, you could just log in as RMS. This backlash was quite a common thing for a while, I think; there was a sense of openness. Originally, you had to walk up to a computer to get access to it, and it was kept in a locked room. Once things got connected to the internet, people's views really started changing.

Probably the biggest stake in the ground for what the future was going to look like was the so-called Orange Book, from 1983, written by the U.S. Department of Defense. It was all about how access control should work. It was very much involved with a thing called multi-level security, which comes back with revivals every now and again. A lot of it has to do with classified documents: you have levels of document classification, from anyone-can-see-this up to top secret, and so on.

There's a complicated scheme where people have clearance to see documents at different levels, and documents sit at different levels. Then there are processes whereby someone who can see a document at a higher level can change its classification downwards, with certain controls. There were all sorts of worries about things like leaking documents between levels, side channel attacks, and so on. It's really the basis of a lot of our security.

SELinux eventually comes from this world as well, by a roundabout route. It really has influenced those things. It particularly influenced the Multics operating system, which had a lot of complexity because of this. As we know, Multics didn't take off and Unix did; the name Unix was a joke about Multics, we've only got one of them, not many of them. Unix really took off. It had a very simplified model of permissions compared to Multics, and it worked for a long time for most people. Eventually, I think, we saw it didn't really scale.

The Unix permission model is really quite simple: read, write, and execute access for the owning user, the group, and everyone else. When you scale to trillions of files, how do you make sure they've all got the right permissions and ownership? It's just really hard. You really want sets of policies that you can check globally, to make sure you've actually set the right permissions on things. It becomes a real scalability problem, and it was never going to cope with the size of data that we have.
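The model packs those permissions into nine mode bits: read, write, and execute, once each for owner, group, and everyone else. A quick sketch using Python's standard stat module:

```python
import stat

# Render the nine Unix permission bits as the familiar ls-style string.
def rwx_string(mode: int) -> str:
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),  # owner
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),  # group
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),  # others
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)
```

`rwx_string(0o640)` gives `"rw-r-----"`: simple to reason about per file, but with no built-in way to ask a global question like "which of my trillions of files are group-writable when they shouldn't be?"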

If we look at the OWASP Top 10 vulnerabilities for web applications, we can see that access control related issues are really significant, and they've been rising in significance. These problems have become harder to deal with, and attackers have found very good ways to get into systems, because you can't manage all your access control correctly. I've seen a lot of policy code written as imperative code. It's very difficult to maintain, understand, and change. Often it's a lot of conditionals, and it's very difficult to get a big picture of what they really mean.

Policy frameworks have really gravitated towards declarative configuration rather than imperative code, just to have an easier-to-understand way of thinking about things. Logic programming is often the basis of this. Logic programming is really about taking a series of statements about the world, facts, and combining them with a set of rules, ways to infer new things from them. The classic: Mog is a cat; cats are allowed through the cat flap; therefore, Mog is allowed through the cat flap. That kind of straightforward inference.
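The Mog example is a single inference step: facts plus a rule derive a new fact. A toy forward-chaining evaluator in Python (the representation is invented for illustration; real Datalog engines are far more general):

```python
# Facts are (predicate, subject) pairs; a rule maps one predicate to another.
facts = {("cat", "mog")}
rules = [("cat", "allowed_through_cat_flap")]  # cats may use the cat flap

def infer(facts, rules):
    # Keep applying rules until no new facts appear (forward chaining).
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived
```

`infer(facts, rules)` derives `("allowed_through_cat_flap", "mog")`: Mog is allowed through the cat flap.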

Open Policy Agent

Datalog is a programming language that's often used for this; it's what Open Policy Agent's Rego language is based on. Datalog came out of work on extending relational query languages like SQL. It's not a general-purpose Turing complete language, but it supports recursive queries, which is really useful for the policy queries we want. We're going to talk about Open Policy Agent in particular. This is a CNCF graduated project that's pretty mature, and one of the most commonly used projects for policy management in the cloud native world. Let me show you how it works first. Let's start with a demo.

We're just going to do this demo in the Rego Playground, which is where you can try out Open Policy Agent yourself. We're just going to write a really simple example. We're going to make a QCon Auth package. We're going to start with default deny, which is a good state of the world. We're going to be authorizing HTTP requests. We're going to allow GET requests and nothing else. Let's try.
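Reconstructed from the narration (the package name and exact spelling are my guesses, not the playground's actual code), the policy looks roughly like this in Rego:

```rego
package qcon.authz

# Default deny: anything not explicitly allowed is rejected.
default allow := false

# Allow GET requests and nothing else.
allow {
    input.method == "GET"
}
```

Evaluated against an input document like `{"method": "GET"}`, `allow` is true; for any other method, the default applies and it is false.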

Let's make an example. This is where we're going to pass in the input to evaluate. It's just a JSON document; we can expand it later. We might have more parameters that we want to pass later: as well as the type of request, we might want to pass some parameters or inputs that we want to authorize on. We'll just start with a really simple case for now. We're going to turn on coverage. We're going to pass in a GET request, and this is the happy case, it should succeed. Yes, it returns allow true. In the coverage you can see that it evaluated this bit here; it didn't evaluate the default deny condition. If we change this to something different, like a POST, and evaluate again, obviously we get false, as we'd expect.

It's a really simple, really straightforward thing, and we can add more rules as we want. We can also publish this as something we can integrate into our application: press the publish button and we get the instructions here, including install instructions if you need to install OPA.

We can run a server. I have a server, let's just run it here; it's updated the server. Then I can just hop over and have a look. First of all, let's break this apart. This is the input that we had last time; we can go and see what the input we used was. It was just our straightforward input, method equals POST. We just curl that into localhost 8081, which is where our server is running. We POST that to our server, and it tells us, as we expected, allow false. Then we can edit this and try other things on the command line: GET, and it's true. It's all very straightforward.

We can just integrate this directly into our code, just like any other way we would integrate an HTTP server. We can run the Open Policy Agent maybe in a container, or locally, or whatever, as part of our application. Then we can just do HTTP requests into this to get success or failure and integrate that into our code as we want. Very simple.
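As a sketch of that integration (the URL path and input shape are assumptions based on the demo, and the port here is OPA's default rather than the one shown), an application can POST the input document to OPA's data API and fail closed when the agent is unreachable:

```python
import json
from urllib import request

# Hypothetical policy path; matches a package like qcon.authz with an allow rule.
OPA_URL = "http://localhost:8181/v1/data/qcon/authz/allow"

def build_input(method: str) -> bytes:
    # OPA's data API expects the policy input wrapped in an {"input": ...} envelope.
    return json.dumps({"input": {"method": method}}).encode()

def opa_allows(method: str, url: str = OPA_URL) -> bool:
    req = request.Request(url, data=build_input(method),
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=2) as resp:
            return bool(json.loads(resp.read()).get("result", False))
    except OSError:
        # Fail closed: if we can't get a policy decision, deny.
        return False
```

Failing closed is a design choice: an unreachable policy agent denies everything rather than silently allowing it.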

What did we learn from that demo? I think Kelsey Hightower the other day summarized it really well. Open Policy Agent is really easy to integrate into your code. That's one of the reasons it's popular, and any kind of code. It's not just easy to integrate it in one place, but you can integrate it across a whole ecosystem. One of the reasons it's easy is it takes your existing JSON or YAML, because JSON and YAML have the same data model, and you can use this as the basis for making your policy decisions.

That means that often you've already got suitable documents to make decisions from, such as your Kubernetes configs. As well as integrating with the YAML or JSON configuration files you've already got, Open Policy Agent also comes with a range of integrations across the whole ecosystem, which is really powerful. There are already off-the-shelf integrations for everything from Kubernetes to SSH, and everything in between.

Another thing is it enables you to easily share policies that you've created. I work at Docker, and the Build, Share, Run workflow that we created for Docker is a really powerful model for all sorts of other things. Being able to build a policy, share it, and reuse it is really important. One of the things I think that we'll see more of is that for sharing, it really helps if you use the same data model, so that we can all share the same code. I think we'll see more work on standardization of data models in more areas so that we can share more policy related code than we can now.

I think that's why we're seeing use in Kubernetes, where there's already a standard config model for Kubernetes. You can easily write rules on that. Other areas don't have so many standard models yet. There's more work to do to increase standardization in other areas so that we can share our own policies. If you have a non-standard data model in your organization, you can't use someone else's policies because they don't immediately apply and you have to modify them, rewrite them to work with your data model. That's much more work. Being able to share is really convenient. It saves a lot of time, makes it consistent. It's easy to update. Makes it easier to reuse code. That's going to be a really important trend.

The Big Vision, Going Forward

What's the big vision going forward? Looking forward, software is just going to eat compliance. Compliance is not going to be a thing that people do by looking at a bunch of printed out documents, or PDFs, or asking questions. It's starting already. It's really going to accelerate into something that's consumed via testing policies. We're already seeing little bits of this moving towards this in smaller areas, but this trend is really going to accelerate.

One of the screenshots there is of the NIST 800-53 documents, which are still written as manual controls that you should check; more people are working out how to automate these controls. The other screenshot is from Drata, a startup that has a system for checking some sorts of controls against common frameworks like PCI compliance, and so on. These kinds of frameworks for managing controls mean the checks can become more automated tests, which is working towards this future vision.

What We Have to Do to Get There

What do we have to do to get there? I think one of the things that people sometimes misunderstand and underestimate is the amount of work we actually have to do on observability. People are quite good at thinking of controls they would like to put in an organization, but they don't necessarily realize what's really going on in the real world. I think we all know cases where controls are bypassed, and people have access to things that, in theory, according to the theoretical policies, they're not supposed to have. There's often a big gap in organizations, as they mature, between what people think is going on and what really happens.

I think we probably all have lots of stories about organizations where everyone has the root password, or people bypass things. There are usually systems on the common path that have good controls; often, there are other systems that bypass the common path, to deal with things that haven't been worked on yet. There's a lot of work to do. Observability is a really key piece: what's really happening in your organization? Can I see what's happening, in order to work out what controls are appropriate?

One of the things that people find where they have controls is the first thing that happens when you have a control is, someone's going to ask you, how do I bypass the control? Bypassing controls is often reality. Sometimes it's like, how do I bypass the control if there's an emergency? Then sometimes it turns out, how do I bypass the control on a day-to-day basis? How do I get an exemption for the next six months for this? There's a whole lot of nuance. I think that understanding what's going on in your organization in a really detailed level is actually really vital to extend the scope of policy beyond the easy cases.

We mentioned before standards and reusability, I think there's a lot more work to do there. We have standards for small areas of organizations, but we will need standard data models for everything about how people work in organizations. That's really hard. It will take time. It will require a lot of software to be written, a lot of work to be done, and a lot of community work, and standards bodies, and tests. We're going to talk about tests.

One way to view security controls is to view them as tests. I think it's a really fruitful way of thinking about them. Once you have written your policy as code, once you've got this blob that you can test against, you can start to use it in lots of places. You don't have to run it only in one place at the end, at the final gate to production, to check all the tests have passed. One of the things that we do in the real world is publish policies so that people know what they are and can plan for them in advance. In software we call this shift left.

Before you travel, you know that you're not going to be able to carry large amounts of liquid, so you prepare yourself by buying things in small little bottles, for travel, and so on. Mostly people conform to the tests upfront. Ideally, with software, the good thing about tests is you can run them everywhere. You can make sure you're compliant, as a developer working on your desktop long before your code actually gets to production. That makes things much smoother. Rather than you having to go all the way to production and be told it's no good, and then go back and start again. This is really a great thing about having these policies in a reusable, portable form that you can reuse everywhere.

It's really important that you can ship updates separately. You might be going to revise your policies; you can ship the revised versions out up front, so people are not surprised by them. It's really important to think about division of responsibility in the organization. Your developers are not necessarily the people who should be writing the policy tests. You might be bringing them in externally, using common tests that the community builds. You might have a compliance team building tests, but you can ship these to developers ahead of time, and you can modify them on a different schedule, all these things.

You think about them as tests that they've got to pass. You can update them independently of code, which is really helpful. Division of responsibility in an organization is incredibly key to getting things done, effectively. If the developer has to go and rewrite their code every time as you want to change the policies, it makes things more difficult. Whereas if the compliance team can do that, then that lets things move faster, and gives you the domain expertise in the right place.

Another thing you can do is you can reverse the direction of testing. I think, again, like treating things just as a blocker at production time, it's annoying, but you can reverse the flow and treat it as a promotion piece, so that you promote something to production when it meets policy automatically. This is the merge when green model that more people are using with their tests, generally. Once you have comprehensive tests, if it's passing the tests, it must be good, so you can promote it. If you have a policy on staging and a policy on production, then once the staging policy is met, things can get automatically shipped to staging.

Once the production policy is met, which might involve, has it been tested in staging, for example? Soon as it's passed the test in staging, perhaps it can be automatically promoted to production, perhaps behind gates and so on, still. Once you think of it just as a set of tests to pass, you can be really much more flexible in how you work with it in the same way as you do with tests. Another way to manage promotion is to use signed attestations. There's another CNCF project called in-toto that is really about how to manage signed attestations about things.

The model here is that, basically, you go and get a pre-declaration that you passed the tests, so that you have a way that you can validate this. You have a statement from someone that you passed the test. It's like coming along with a driving license to say that you're allowed to drive. It's difficult to forge. You can just show it, and people will believe that you can drive. Attestations are really a way of, again, turning around the model and saying, I've got the approval already up front, you don't need to actually run the test again. This saves a lot of time and lets you move around when tests take place. That's a really powerful way of thinking about policy.
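The flavor of an attestation can be sketched with a plain HMAC (in-toto itself uses public-key signatures over structured link metadata; this toy only shows the sign-once, verify-later shape):

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # illustrative only; real attestations use asymmetric keys

def attest(statement: dict) -> dict:
    # Canonicalize the statement and sign it.
    payload = json.dumps(statement, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"statement": statement, "signature": sig}

def verify(attestation: dict) -> bool:
    # Recompute the signature; any tampering with the statement breaks it.
    payload = json.dumps(attestation["statement"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])
```

A consumer that trusts the key can accept a statement like `{"artifact": "app:1.0", "tests_passed": true}` without re-running the tests, which is exactly the time saving described above.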

As well as Open Policy Agent, there are a couple other frameworks that you might come across for policy. Very recently, there's been a bunch of interest in Google's Zanzibar project. The Zanzibar paper was published a few years ago, but a couple of open source projects have started implementing this. OpenFGA just joined the CNCF sandbox. They're exploring this. Zanzibar is really designed to operate at a gigantic scale. It's what Google uses internally for things like access to things like Google Docs. It's got a real scalability record behind it. It runs a slightly simpler model perhaps than Open Policy Agent. It's something that there's definitely interest in in the community.

Kyverno is another CNCF sandbox project. It is very focused just on making the Kubernetes use case easier, but not covering any other use cases. It's designed to feel more Kubernetes native.

Conclusion

There's a huge amount going on in this space. It's a really exciting space; it's early and interesting, and it's really going to change the way we work. One thing you can see is that we've got these configuration documents that we made for other reasons. People may complain about all the YAML configuration documents they're generating, but it turns out this stuff is really useful: you can process the configuration with declarative code to do other things with it, draw inferences from it, and enforce policy with it.

That is actually a really useful outcome of a world in which we've been generating a lot of configuration documents. I think it shows us an interesting path, in line with this beyond-YAML track: what can we do once we have all this YAML? However you generate your YAML, or your config documents generally, you can still process them through these pretty powerful declarative policy systems, which is a really exciting move.

Barriers to Adopting Policy

Carmen Andoh: What are some of the barriers, in your view, to adopting policy in organizations, regardless of the implementation?

Cormack: I think the first thing that people do is [inaudible 00:26:33] without understanding what the impact of that is going to be. That'll stop people actually being able to [inaudible 00:26:46]. You have to start off by understanding what people do and why. As developers, we're often in situations where people try to stop developers doing things, in different ways, and things like that. It's a common thing. Understanding what people are actually doing, as a starting point, is really important.

Then, I think, trying to express the thing you want to do in policy is often difficult, because the domains are really complicated. If you look at Kubernetes policy, for example, and you want to block privilege escalation attacks, there isn't a single thing that defines a privilege escalation attack in Kubernetes. If there were just a config option to turn it off, it would be easy, but it's not like that. There's a combination of things that are bad, and you really have to understand the domain well to work out what they are.
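As a sketch of that "combination of things" (field names follow the Kubernetes container securityContext; the list of checks is deliberately incomplete):

```python
# Gather reasons a container spec could enable privilege escalation.
# Real policies check many more fields (capabilities, hostPID, volumes, ...).
def container_violations(container: dict) -> list[str]:
    sc = container.get("securityContext", {})
    problems = []
    if sc.get("privileged"):
        problems.append("privileged container")
    if sc.get("allowPrivilegeEscalation", True):
        # Note: this field effectively defaults to true if left unset.
        problems.append("allowPrivilegeEscalation not disabled")
    if sc.get("runAsUser") == 0:
        problems.append("runs as root")
    return problems
```

No single field says "privilege escalation"; the policy author has to know which combinations matter, which is the domain expertise being described.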

I think just converting a thought of what you want to do into a really good policy, is often quite difficult in the domains that we work with, which tend to be quite complex. You can make policy that's not actually effective, or it's just a lot of work. I think that's why you see quite a lot of people interested in sharing and reusing other people's policies, because all these areas, it's like, I don't really want to spend all my time working on that myself. The domain expertise versus the program expertise is often quite mixed. Your security department sets policies, but they're not usually used to writing formal policies, they're used to audits and checks rather than writing rules.

Policy Sharing and Software Reuse

Andoh: You mentioned a little bit about policy sharing, or reuse. Is this the future of policy in the way that software and code has evolved over time, where we had these standard libraries and then different libraries, and then this trajectory of software reuse. Then we get to supply chain security and other aspects of that, and we're coming to the other end of the ease of reuse and flexibility. Do you see what is that trajectory for policy? I'm going in a second direction for this question with the reuse, which is, do you see that there's a possibility to reuse when things are proprietary, or your company has very specific, very high niche, specific context? Can policy be shared in the same way as code? How so, how not?

Cormack: Reuse is something that has to be designed for. I think there's always these big tradeoffs between how reusable something is versus how specific it is to your use case. If you look at any artifact reuse, there's always some form of customization, and there's some form of commonality of reuse, and there's always a balance in every ecosystem about how those things work. I think that we haven't necessarily found exactly where that is, but I think that usually there's a bunch of things where if you look at domains, like configuration management, where it's worth making your things more similar to other people's things in order to reuse their stuff, because it saves you a lot of time.

There's a convergence to, let's all run this piece of software in the same way in this ecosystem, because running it differently is not valuable. Let's use the same recipe. Then that extreme becomes too extreme, and you see things like, now we need to add a lot of hooks in this, so you can configure it and make overrides, and do your own piece of thing. Then that becomes a move back towards extensibility. You see they're mirrored in all the ecosystems that share objects and share code.

Open-source code, often, is very unopinionated. It lets you do a lot of things. It's rare to see open source projects that don't accept more features and hooks and additional options, because that tends to be the way that they approach that tradeoff. There's going to be a big spectrum of those things, and it will have to evolve. I think that with some kinds of sets of policies, where the policies are driven by outside people, like SOC 2, or things like that, there's probably more of a convergence of, these are the kinds of patterns that people use because they're trying to achieve the same outcomes.

I think we've seen a bunch of these in like Kubernetes admission controllers and things like that, because people are trying to do the same things, which are mostly things that just Kubernetes doesn't give you any controls over itself. There will always be organization specific things or models. Unless they're giving you real competitive advantage, it's worth thinking like, should we make our organization more like other organizations? Should we reuse more that's not differentiated? That commoditization argument always comes, like, only differentiate where you really have to.

Learning About Policy, Beyond OPA

Andoh: In your talk, you used OPA as the implementation, but you mentioned a couple others. Large organizations have lots of different constraints they have to put on various entities in their distributed systems and whatnot, and they want to codify that, they want to audit that. Beyond OPA, where would you go for various different contexts for people to learn more about policy in your opinion?

Cormack: I think understanding a little bit about what these things are built around is helpful, and understanding the types of decisions they're optimized for is useful. The difference between, say, Zanzibar and OPA and Datalog as models is useful to understand. It's still a new area. There's not a lot of published experience with using these things at scale yet. I remember, when we were planning this track, talking about who's using these things at scale, who's written about it, talked about it.

It's still very early days for really people talking about this at scale. I think there are some examples, but they don't tend to be public yet. It's still relatively in the early adopter phase. I think conference talks from people with practical experience is still really valuable. You talk to people and find out their experiences, they're still important. It still feels early to me in terms of like, there isn't a set of standard practices and principles for implementing this yet.

There's still a bit of experimentation, and really trying to prioritize what your organization is trying to get out of this, what's important for you and what's valuable. I think there are different things that are valuable to different people. Even just separating authorization code from the normal code of ordinary pieces of software is actually valuable. I've worked with bits of software that have very complicated sets of RBAC rules that are very hard to read in imperative code, for example, and just pulling those out and turning them into a refactored library, using something that's designed for this, is worthwhile.
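That refactor can be sketched like this (roles and actions are invented for illustration): the nested conditionals become a data table consulted by one generic check, so the rules can be read and audited in one place.

```python
# Declarative RBAC: the rules are data, not branches.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def can(role: str, action: str) -> bool:
    # Unknown roles get the empty permission set, i.e. default deny.
    return action in ROLE_PERMISSIONS.get(role, set())
```

Changing policy now means editing the table, not rewriting control flow scattered through the codebase.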

That's one of the things, I think it's valuable, regardless of like a big vision of how policy sits in your organization. Just thinking about, is this code doing policy? Should I think about this thing differently? The questions I was trying to ask about like, where should I run this code? Do I want to run it in multiple places? Should it be maintained differently from the rest of my code? Those questions are good things to think about, even if you're not thinking about as a big picture of what policy might do in the long run.

Andoh: The outcome of a policy isn't going to be the way that traditional code would be. Simply, it could be just a report that gets mailed to the higher-ups or to some regulatory person, or a team that now needs to go do remediation on that, depending on it.

Why Logic Programming Languages Have Emerged as Good Policy Languages

Andoh: You mentioned various papers and research, and Datalog is one of them. OPA has the language Rego, which is a Datalog derivative. This is me putting on my language nerd hat, but I've noticed many logic programming languages have emerged as good policy languages. Would you care to talk about why that is happening, or what the deal is with that?

Cormack: I think it's really about the declarative versus imperative thing. I think, partly it's just a matter of readability. It's like, I want to list the six things that mustn't happen. Then, I just make sure these things never happen. You don't want to say which order you check these six things in or whatever, it's just like, this is just the list of things that don't happen. It's actually interesting. I think there's a long history of using logic programming for this kind of thing about trying to understand complex sets of rules and their interactions.

Once you have a set of rules, you can actually, in principle, do other things with them, like, make sure that there is a solution to them, and things like that, and what it looks like. I think just being able to write something down in that concise form of, these things mustn't happen, these things must happen, is actually kind of [inaudible 00:40:54], and it works as a pattern matching thing. It's mostly to do with that kind of readability and the way of trying to solve those kinds of constrained problems that this fit. It's interesting, though, for example, that logic programming for testing hasn't turned up as a thing or anything like that. It's been limited to databases, which is a weird set of things.

Andoh: I've started to hear the merging of policy and the people in the BDD camps of software, behavior driven development, because the concepts are somewhat similar: I have this behavior that must happen, or these constraints that must hold. It's just very interesting in the policy space.

Cormack: I think logic programming is definitely an interesting thing, if you're interested in programming languages. For a long time, it's been a forgotten set of programming languages. Functional programming became almost mainstream among programming language geeks, whereas logic programming didn't in the same way, or hasn't so far. It's sitting around at the fringes of a lot of things that people do, in an interesting way.

I think there's some interesting things you can do. Because I think functional programming was also about more declarative things, less imperative. Logic programming historically was very much coupled with that movement towards this make programming languages more declarative. It got forgotten about in some ways. I think it was very tied to the old implementations of AI before ML and things like that.

Andoh: I'm starting to hear this idea that the future is declarative. I'm not just talking about infrastructures. This is, of course, the track that is infrastructure, or languages of infrastructure. If infrastructure includes instantiation, it is very much tied to a declarative model. The logic programming, probably, we might see a rise as policy is thrown into that mix as well.

Additional Resources on Policy Evaluation and Policy Learning Journeys

Cormack: One of the things I like about OPA is just that it's very accessible as a thing to just try out, and it has great documentation. I think it's a good place to start learning is just to play around with it. It's a good introduction to thinking about the space. It's a good, friendly project, and a good playground.


Recorded at:

Aug 17, 2023
