Todd Montgomery on Protocol Design, Security, Formal Verification Tools

1. We are here at GOTOCon Aarhus 2014 and I am sitting here with Todd Montgomery. Todd, you are a self-confessed protocol geek, what does that mean?

It means I have been designing and implementing protocols for way too long, probably. I just like it. Communication has always fascinated me, and being able to design and implement protocols is something I was attracted to.

2. Aren’t there already enough protocols? Why do we need more?

Actually it’s a really good question, “Do we need more?”, and I don’t know, actually. There are a lot of protocols, and just like languages they have a lot of pluses and minuses, a lot of different subtleties, and different aspects that lend themselves to non-functional requirements like performance and usability, things like that. A classic example is HTTP 1.1; most people think that it’s a horrible protocol. Talk to anyone who’s done protocol design, and they look at it and go “Wow, that’s a lot of anti-patterns put in one place”. But it’s extremely successful. It’s very simple, but it also has very poor performance; it’s hard to optimize, it’s hard to get really great efficiency out of the protocol. And that’s why things like HTTP/2 are coming out, which address a lot of that. So, just like languages, taste matters, opinions matter, things like that. Do we have enough? Probably, but needs change over time as well.

3. Since you brought it up: to you, what are well-designed protocols? What is your gold standard?

I look for a couple of different things. The first is that it should be a binary encoded protocol, or in other words it shouldn’t require me to parse strings; strings are time-consuming and CPU-intensive to really parse, and there are much better binary encodings. So that’s the first thing I look for: was the protocol designed by someone who actually understands that binary encoding is much more efficient, much better to do, and actually easier to do overall? The second thing is variable-length fields versus fixed-length fields, and that’s really about layout, stuff like that. Another thing that’s extremely important is: has the protocol been thought of as truly asynchronous? Has the ordering of different message types, and how they would interact with one another, been thought through sufficiently? It’s very easy to determine that when looking at an implementation or at a specification of a protocol.
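
To make the binary-encoding point concrete, here is a minimal sketch of a fixed-layout header codec in Java; the field names, sizes, and byte order are invented for illustration and are not taken from any protocol Montgomery designed:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical fixed-layout header: version (1 byte), msgType (1 byte),
// flags (2 bytes), length (4 bytes) -- 8 bytes total, decoded by offset
// arithmetic rather than by scanning text for delimiters.
public final class HeaderCodec {
    public static final int HEADER_LENGTH = 8;
    private static final int LENGTH_OFFSET = 4;

    public static void encode(ByteBuffer buf, byte version, byte msgType,
                              short flags, int length) {
        buf.order(ByteOrder.LITTLE_ENDIAN);
        buf.put(version).put(msgType).putShort(flags).putInt(length);
    }

    public static int decodeLength(ByteBuffer buf) {
        // Fixed offsets make decoding O(1): no allocation, no string parsing.
        return buf.order(ByteOrder.LITTLE_ENDIAN).getInt(LENGTH_OFFSET);
    }
}
```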

When ordering hasn’t been thought through, you’ll notice things like “We’ll just assume that A has happened before B”. And what happens if it doesn’t? When that’s not specified, it could be a potential security vulnerability, it could be just incorrect behavior, it could be all kinds of different things. Protocols are actually very, very important in security. It’s one of the things that is not talked about much, but a secure protocol is actually very difficult to do. Security, encryption, and the protocols around encryption are one of the most important attack vectors. Most of my interest in security comes from the protocol side. So things like how key exchanges work, and certificate authorities, and things like that are something that I have known for a long time, and I got into it because understanding security is so prevalent in networking. You can’t really design a really secure protocol unless you understand the problems in security. It’s also hard to design a security system without understanding protocols at some level.
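
As a hedged sketch of the “assume A happened before B” trap, the receiver below makes the ordering requirement explicit and rejects an out-of-order message instead of silently acting on it; the message types and states are invented for illustration:

```java
// Hypothetical receiver: DATA is only meaningful after a HANDSHAKE.
// The check is explicit, so nothing relies on an unstated ordering assumption.
public final class OrderedReceiver {
    private enum State { AWAITING_HANDSHAKE, ESTABLISHED }

    private State state = State.AWAITING_HANDSHAKE;

    public void onHandshake() {
        state = State.ESTABLISHED;
    }

    public void onData(byte[] payload) {
        if (state != State.ESTABLISHED) {
            // The unspecified case: instead of undefined behavior (a classic
            // source of vulnerabilities), fail loudly and visibly.
            throw new IllegalStateException("DATA received before HANDSHAKE");
        }
        process(payload);
    }

    private void process(byte[] payload) {
        // Application logic would go here.
    }
}
```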

Werner: So to get back to your point about encoding: you like binary encoding, so you must be in hell right now, because everything is textual.

No, you kind of make do with what is there; a standard is a standard. HTTP 1.1 and a lot of other early protocols were based on ASCII or various forms of UTF-8 or different types of character encoding. In many ways you just have to deal with those protocols. It is the case that when you are dealing with protocols you get really, really good at parsing: how to parse efficiently, catch edge conditions, and things like that. If you don’t, then your implementations have issues with those particular protocols. But if I design protocols I always make them binary encoded, because it’s actually easier from an implementation standpoint many times, unless the data being used is entirely textual, which is very, very rare. It’s actually better from a space perspective, easier to implement, and easier to reason about a lot of the time too. And it also has a lot fewer errors to deal with.
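
On “parsing efficiently and catching edge conditions”: here is a small defensive sketch for a hypothetical HTTP/1.1-style “Name: value” header line; the length cap and error handling are invented illustrations of the kind of edge conditions a careful parser checks:

```java
// Defensive parse of a "Name: value" header line: bound the input size and
// reject malformed lines rather than assuming well-formed text.
public final class HeaderLineParser {
    private static final int MAX_LINE_LENGTH = 8192; // invented cap

    public static String[] parse(String line) {
        if (line == null || line.length() > MAX_LINE_LENGTH) {
            throw new IllegalArgumentException("line missing or too long");
        }
        int colon = line.indexOf(':');
        if (colon <= 0) { // no separator, or nothing before it
            throw new IllegalArgumentException("malformed header line");
        }
        String name = line.substring(0, colon).trim();
        String value = line.substring(colon + 1).trim();
        if (name.isEmpty()) {
            throw new IllegalArgumentException("empty header name");
        }
        return new String[] { name, value };
    }
}
```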

Werner: Oh yes, definitely. It seems to be a trend for people to say that text-based is better because you can just look at it with netcat or a network sniffer, but it’s not really much of an argument, because you can just write a parser for it.

Yes, and that’s kind of the case, right? If you have a machine on one side sending data and a machine on the other side receiving data, that doesn’t really need to be textual. It may be convenient to have it textual, but in general that is simply a waste; you are not going to interrogate every single byte and look at it in a textual representation. It’s sort of like a French gentleman and a French lady communicating whose chaperone doesn’t understand French, so they have to speak in English. That seems like a waste.

Werner: A few years ago, I think, Sun came out with this idea of having an XML protocol but with an ASN.1 translator. What do you think of that? I thought it was a great idea but nobody else liked it.

Well, ASN.1 has its own kind of issues.

Werner: Designed by committee.

There are other things too. I am not a fan of those types of encodings in general. But I think that is actually a very interesting way to look at it. XML is potentially one of the worst-designed parsing technologies I have ever seen, but it became very popular because it was simple and familiar and the rules were very clear. So from a design perspective it’s great, but from a parsing-technology perspective I am not so sure; there are way too many things that are bad about XML. So finding a way that it could be binary encoded instead, so that you wouldn’t actually have to send XML, saves a lot of bandwidth and a lot of processing; it’s a very good thing. And in making that trade-off, another question is: why would you need to generate XML in the first place? I could see it if it were generated by a human, but a human doesn’t want that much redundancy, to have to type that much, for example.

4. Security in protocols, let’s get back to that because that’s a big topic. First off, how do you design a secure protocol? Is it simply a matter of designing a proper state machine and figuring it out completely, rather than just throwing stuff at the wall and hoping something sticks, or what’s your view about it?

That definitely has a side to it: actually understanding what the different states are, what the different transitions can be, and the things that can be received, and making it very clear what the actions are for every potential change. A lot of the time that’s not specifying everything, it’s giving defaults; it’s thinking things through in terms of how they are to be handled and making it very clear. TCP reset is a classic behavior, right? It’s sort of a catch-all category, just to give an idea, but it may not be that secure; TCP has had some security vulnerabilities over the years.

Usually when you think about what a secure protocol means, it can mean quite a few different things. It can be “Does the protocol work?”, or does the protocol leak some sort of information, which means there is a lot more around the protocol. A classic example: it used to be possible, and this is going back a little, to get arbitrary data exfiltrated from a kernel by requesting certain sizes of messages in a TCP connect. It was part of the SYN and SYN-ACK, and that was an implementation limitation, not really the protocol, but it was a vulnerability. That’s going back a long way, but there are other things around protocols: protocols may be providing certain types of information, or getting things into a bad state. And then there is a whole area of using protocols for DDoS attacks, whether it be NTP or a whole bunch of other protocols; you can get them to do bad things which can lead to DDoS situations. It’s a very interesting set of things that you have to think about.
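
One way to read “making it very clear what the actions are for every potential change, and giving defaults” is an exhaustive transition table with a catch-all action, loosely analogous to TCP’s reset. The sketch below is an invented toy, not TCP:

```java
import java.util.EnumMap;
import java.util.Map;

// Toy connection state machine: every (state, event) pair either has an
// explicit transition or falls through to a default "reset to CLOSED",
// so no input sequence is left with unspecified behavior.
public final class ToyConnection {
    enum State { CLOSED, SYN_SENT, ESTABLISHED }
    enum Event { SYN, SYN_ACK, ACK, FIN }

    private final Map<State, Map<Event, State>> transitions = new EnumMap<>(State.class);
    private State state = State.CLOSED;

    public ToyConnection() {
        for (State s : State.values()) {
            transitions.put(s, new EnumMap<>(Event.class));
        }
        transitions.get(State.CLOSED).put(Event.SYN, State.SYN_SENT);
        transitions.get(State.SYN_SENT).put(Event.SYN_ACK, State.ESTABLISHED);
        transitions.get(State.ESTABLISHED).put(Event.FIN, State.CLOSED);
    }

    public State onEvent(Event event) {
        State next = transitions.get(state).get(event);
        // Default action for anything not explicitly specified: reset,
        // rather than leaving the behavior undefined.
        state = (next != null) ? next : State.CLOSED;
        return state;
    }
}
```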


5. Related to this, you mentioned earlier your interest in verification techniques. Can you use these on protocols?

Absolutely, yes. I spent time doing formal methods at NASA, working on Shuttle, Station, Cassini and other systems. Protocols are involved in a lot of different aspects, and I’ve done formal verification of protocols I have designed, too. There are very good techniques for verifying protocols, and those techniques don’t only apply to protocols; they also apply to other things that might use protocol techniques. State machine exploration and model checking are classic examples. Formally verifying that something works is sort of an exhaustive exercise; it’s also an exhausting exercise. It takes a lot of foresight and a lot of really distilling things down, and usually by the time you have figured out how to verify it, you would have designed it differently.

Werner: Because you figured out all the problems.

Well, you figured out that it would have been easier to verify if it had been slightly different, and that actually would have made things cleaner. This is something I have noticed: as you refine and look at something in different ways, a lot of the time you can make it simpler, smaller, whatever, and verification is just another activity for looking at it differently.

6. Can you give us technology names for verification techniques?

Yes, model checking is one, if you are interested. There is a model checker that won an award back in the mid-2000s, I believe: the SPIN model checker, which was done from some work at JPL; I have worked with it. Model checking is basically state exploration over state machines, but it can be applied to a lot of different problems, and it’s very handy. What you can do is say “Here is my behavior, I specify it as a set of transitions, and then I want to look at different invariants over the states and different interaction patterns”, things like that; it can explode that out and look at it. So it’s very handy from that perspective.

CSP in the traditional sense can sort of be verified with language constructs. Those are useful, but they are very difficult to apply in real-world systems, and I’ve always had questions about whether they apply to the type of systems I’ve usually dealt with, which have a lot more temporal properties than CSP is really used to dealing with. But I haven’t looked recently; maybe there is work there. There has also been work done on Petri nets, specifically for systems like networks, and how you can put reliability models on those; I consider those to be very good kinds of designs, and you can do an awful lot with them. It depends on how much time you want to spend on that. And there’s also the fact that a lot of these techniques don’t rely on actual code; they rely on models of the system. So the fidelity between the code and the model, how things work, and the environment, these are all questions that are actually pretty difficult to answer.

But I think the best verification technique is a smart person looking at the code a lot and just saying “What if?”. That’s the best verification technique I have ever seen. A domain expert can look at the code and just say “What happens when that happens?” or “Is that really going to happen or not?”, and that is still invaluable. I don’t think that will ever change. We may get better at certain things, but I don’t think we are going to find a replacement for that, because it’s the way we find really nasty bugs and fix them, whether they be in design, implementation, whatever.
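
To make “specify behavior as a set of transitions, then look at invariants over the states” concrete, here is a tiny explicit-state exploration sketch in Java. It is a toy stand-in for what a model checker like SPIN does far more capably, and the example model is invented:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.IntFunction;
import java.util.function.IntPredicate;

// Breadth-first exploration of a toy model's reachable states, reporting
// whether a given invariant holds in every reachable state.
public final class TinyChecker {
    public static boolean check(int initial,
                                IntFunction<List<Integer>> successors,
                                IntPredicate invariant) {
        Set<Integer> seen = new HashSet<>();
        ArrayDeque<Integer> frontier = new ArrayDeque<>();
        seen.add(initial);
        frontier.add(initial);
        while (!frontier.isEmpty()) {
            int s = frontier.poll();
            if (!invariant.test(s)) {
                return false; // found a reachable counterexample state
            }
            for (int next : successors.apply(s)) {
                if (seen.add(next)) {
                    frontier.add(next);
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Toy model: a counter that may increment (up to a bound) or reset.
        // Invariant to check: the counter never exceeds 10.
        boolean holds = check(0,
                s -> s < 10 ? List.of(s + 1, 0) : List.of(0),
                s -> s <= 10);
        System.out.println("invariant holds in all reachable states: " + holds);
    }
}
```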

Werner: So we need to automate nitpickers.

Actually, I think expert systems that capture domain experts’ knowledge, and that can point out, maybe not problems, but hotspots that people may want to look at, have a lot of promise.

7. I guess as a protocol geek you really like state machines. Are they your favorite tool or how do you think about protocols in that sense?

I think in state machines; my background is in digital design. I started out as an electrical and computer engineer, so designing flip-flops and gates and logic is really where I started, before I even went to college. So thinking in state machines is a natural thing for me. I tend to go back to state machines when a problem gets to a certain point of intractability; when the logic gets to a point where it seems too complex, one of the tools I pull out is “can I make this simpler by thinking of it as a state machine?” And not everyone is like that, I fully understand; I don’t think there’s one size fits all for a lot of that. I have noticed other engineers, when they are really mulling a problem and they can’t figure out how to make it simpler but they know that there is something there that can: sometimes I have seen state machines work for other developers, and sometimes I have seen state machines just explode out and not be a really tractable solution. So I don’t think it fits everyone, but it is another tool in the bag. Just like functional composition is the right tool for certain sets of things, state machines are another great tool for certain sets of problems; you’ve got to have a big tool chest and be able to pull out the right one for the right situation.

Werner: I think that is a good statement to end on. Everybody in the audience, look up your Petri nets and your state machines and fill up your toolbox. Thank you, Todd.

Thank you.

Dec 19, 2014
