





Jessica Frazelle on Working at Docker


1. Hello, I'm Manuel Pais and I'm here at QCon London 2015 with Jessica Frazelle, software engineer at Docker. Thank you for accepting our invitation, Jessica. Can you briefly introduce yourself to our readers?

Yes, I work at Docker on the core team in San Francisco, so that basically entails reviewing PRs (Pull Requests), fixing issues, some of the less glamorous sides of maintaining an open source project, but it’s fun.

   

2. Why did you decide to join Docker in the first place?

Before that I was working at a startup in New York and we used Docker for our infrastructure, and I just fell in love with it, because before that we used LXC containers and Docker just made things so much easier. We used it back when technically you shouldn’t use it in production, so things were a little bit quirky in the beginning, but it was still better than what was available at the time, and now it’s gotten so much better. So I decided to help out and switch.

   

3. I'm curious, because right now you basically have the whole world paying attention to what’s going on at Docker. How does it feel to work in that kind of environment?

It’s cool. It’s actually weird coming from New York, where it was never like everyone knows about tech, because almost everyone there is in finance, I guess; tech is more like just the little guy on the street. But in San Francisco everyone knows about tech, so just wearing a Docker shirt in places kind of blows up. That was kind of hard to get used to, but it’s cool that everybody loves containers.

   

4. With the level of attention that Docker has now from the community, I wonder how the decision-making process works inside Docker. In particular, how do you balance working on your own roadmap, things you want to improve, with, I guess, a lot of incoming requests from customers and just regular users?

So, the part that I work on is only open source; the company Docker Inc. itself is completely separate. We actually have a person who acts as a firewall. So my team only works on open source; we don’t have roadmaps. We can set vague goals for a release, but since releases are timed rather than feature-based, we don’t hold them back if something is not ready. And on the Inc. side, if they have a roadmap and they want to get something in, they have to go through the process just like everyone else, and it can wait if something is not ready.

   

5. On the open source side, how do you prioritize between issues and improvement requests?

It depends. If it’s a simple bug fix, we can get those PRs turned around and merged really fast. If it’s a bigger bug fix that touches different parts of Docker and different subsystems, it gets a little more complicated because more people need to review the code, so that can take a few days. Features take a very long time, because they usually are like patch bombs, but they will eventually get reviewed and merged.

   

6. Can you share with us what’s in the pipeline for Docker Inc., in terms of overall features you are looking at implementing? In particular I would like to ask about security: are there any plans to incorporate, for example, hardware-assisted isolation, like in virtual machines? Is that in the pipeline or not?

Actually, it was just announced a couple of days ago that we got two security engineers from Square, so that’s really exciting. They are super cool, and it’s nice that we now have more members on that team, because that’s obviously the priority, but it’s been hard to get everything worked on when the team is so small. But it’s so great that now we can pair with them and add a lot more things. I'm not sure specifically about virtualization kinds of things, but if it comes in as a pull request for a feature, it would probably get reviewed and get in. As far as other things, I'm not exactly sure about goals on the Inc. side, but I know that at least on the open source side we really want plugins: an easy way for people to basically run their plugins in a container, so that you could code them in any language you wanted and have them be a network plugin or a hook based off the container. Then we also want a whole networking plugin system, so that people can use Weave or anything like that super easily.

   

7. Changing the topic a bit: at the moment microservices are also very trendy, let’s say. Would you say that Docker is the perfect match for microservices, since you can have each service running in its own container?

Yes, that’s the way I use it, but to play devil’s advocate, people do love their init systems. People tend to think that we don’t want you to run more than one thing, but you can do both; it just depends on your preferences. But I do think Docker is great for microservices. It’s just one thing that is going to enter the container at that point and just run.
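As a rough illustration of that one-process-per-container pattern, each service gets its own container and the pieces are wired together with links and published ports (the image names, link alias, and ports below are hypothetical placeholders, not anything from the interview):

    # One process per container: a database container and a web container,
    # linked together; "example/web" stands in for your own service image.
    docker run -d --name db postgres:9.3
    docker run -d --name web --link db:db -p 8080:8080 example/web:latest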

   

8. And if we're talking about more monolithic architectures or even SOA style, but with large services, do you believe Docker is still an interesting option in that case? What are the potential benefits and drawbacks in these specific use cases?

Maybe something like Hadoop, or something similarly large scale? I think Docker can help with that a lot. Just as an example: with Mesos, you can scale your app to 100 containers and it will just go across the cluster. You can do the same thing now with Swarm and Fig, though they are still building it out. But I do think it will be almost a one-click/one-command kind of thing in the future, which is great for any sort of system that you are running, be it Mesos or Docker.
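A sketch of what that one-command scaling looked like with Fig at the time, assuming a fig.yml that already defines a service called web (Fig later evolved into Docker Compose, and the exact syntax has shifted since):

    # Bring the services defined in fig.yml up, then fan the hypothetical
    # "web" service out to 100 containers.
    fig up -d
    fig scale web=100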

   

9. [...] Do you think Docker clustering can work in conjunction with these tools or not and what is the rationale for creating a new alternative in Docker?

Manuel's full question: So the size of the architecture is not a constraint for using Docker, you think; it can be a monolith inside one container that you then scale, or you can have different services, one in each container. Both are possibilities. And speaking of scale, when you go to that level of high scalability you need to have effective service discovery and orchestration in place, and there are already a number of tools available, such as Google’s Kubernetes, for example. Do you think Docker clustering can work in conjunction with these tools or not, and what is the rationale for creating a new alternative in Docker?

I think the rationale behind it was that, the way Swarm works, you get to use Docker like you already know it; it just goes across a cluster. And to integrate with the other people who are already doing this, right now they are working on the Mesos backend, so you could have a backend in Mesos. I’m sure eventually there will be a Kubernetes one, and you can already switch out etcd and Consul as your discovery. Because there are already these things in place that people know and love, we are not trying to take that away from them. It’s more like we will make you comfortable with everything, so you are not tied in.
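A minimal sketch of that "use Docker like you know it, just across a cluster" idea, with etcd swapped in as the discovery backend (all addresses are placeholders, and the exact flags varied across early Swarm releases):

    # On the manager: serve the ordinary Docker API for the whole cluster,
    # using an etcd store for node discovery.
    swarm manage -H tcp://0.0.0.0:3375 etcd://10.0.0.2:4001/swarm

    # On each node: register the local Docker engine with the same etcd store.
    swarm join --addr=10.0.0.11:2375 etcd://10.0.0.2:4001/swarm

    # From a client machine: the plain docker CLI, just pointed at the manager.
    DOCKER_HOST=tcp://10.0.0.5:3375 docker ps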

   

10. What was the impact of developing Docker clustering on both the existing Docker architecture and the way that you use it?

Since it’s a separate binary (I mostly worked on the engine), there weren’t that many changes to Docker itself; they use the raw API just like everyone else. It seems like, in the PRs that come in for features, the Swarm team has almost the same needs as someone from Rancher or someone from Kubernetes, because they want these things that we technically didn’t think of, because it’s not something that comes to mind. But it’s nice, because it then pushes the project to incorporate these things that everyone wants. Yes, I think it’s been good.
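That raw API is the same remote API any client can talk to directly; as a small illustration (this assumes a local daemon listening on the default Unix socket and a curl build with --unix-socket support, 7.40 or newer):

    # List running containers by hitting the Docker remote API directly,
    # the same kind of endpoint a tool like Swarm builds on.
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json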

   

11. In terms of usage, as you said, it’s a different binary so you wanted to have it completely decoupled so that people could keep using Docker without clustering, I suppose?

Yes, because some people like things the way they are and if you want to do that you can. And if not, it’s very easy to get the Swarm binary too.
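Getting that binary was deliberately low-friction; Swarm was also published as an image on the Docker Hub, so one way to pick it up, assuming a working Docker install, is simply:

    # Pull the official swarm image and run it; the image's entrypoint is the
    # swarm binary itself, so this just prints its help text.
    docker pull swarm
    docker run --rm swarm --help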

   

12. In your talk you also mentioned that Docker clustering is a batteries-included solution. Can you explain a bit what that means?

Yes. The whole Docker theme is “batteries included but not required”. Primarily it started in the engine, with the backends being LXC or the native libcontainer. So Docker starts by default with the native one, but if you want to use LXC you can; it’s there. Then there are also the storage drivers, so there’s AUFS, BTRFS, Overlay; those are all batteries that you can pop in and out. Then Swarm takes it on in the form of discovery, with etcd, Consul, all those. And then they will be adding Mesos, and those will all be batteries too, and soon even networking will be a battery in itself, which is cool.
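As a rough illustration of swapping those engine-level batteries with Docker 1.x-era daemon flags (the flag names have changed in later releases, and the two commands below are alternatives, not meant to run together):

    # Start the daemon with the LXC execution driver instead of the native
    # libcontainer one.
    docker -d --exec-driver=lxc

    # Or start it with the overlay storage driver instead of AUFS/BTRFS/devicemapper.
    docker -d --storage-driver=overlay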

   

13. How do you see Docker clustering and orchestration playing together with infrastructure configuration management tools like Chef and Puppet? Do you see it leading to a separation between infrastructure provisioning and app configuration, or not? How do you see that?

I don’t think it will. I mean, I'm more friends with DevOps people than I am with developers, so I almost think that’s good, because a lot of them will either really love Chef or really love Ansible, but each of those has ways to spin up containers. And they are even making patches to make it better, so I think things will evolve over time, and it will be this thing where you can choose what you want and spin it up.

   

14. But would you recommend it? Inside a Docker container, would it still make sense to configure the application using one of these tools? Or, once you provision the infrastructure and you have Docker available, would you just use the Docker tools to configure the application?

I honestly think it’s up to you. Technically I like minimal containers, so I am not going to download an extra dependency. But if you are more comfortable with Chef, then go ahead and use it, because there is no point in learning another config tool just to make your workflow worse; you might as well stick with what you know.
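A sketch of the minimal-container approach described here, configuring at run time with plain Docker options instead of baking a configuration-management agent into the image (the image name, environment variable, and paths are hypothetical):

    # Pass configuration in from the outside: an environment variable plus a
    # read-only config mount, keeping the image itself dependency-free.
    docker run -d --name app \
      -e APP_ENV=production \
      -v /etc/myapp:/etc/myapp:ro \
      example/myapp:1.0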

   

15. [...] What’s your view on that and does Docker have any goal to fulfill that spec or define your own spec?

Manuel's full question: Recently there was some friction about Docker moving from being just a tool for containerization to a full-fledged platform with clustering and other orchestration capabilities and networking. Specifically there seems to be a fear of losing composability of services and becoming locked in to Docker the company. In particular CoreOS published a spec for app container that tries to promote that composability and other properties. What’s your view on that and does Docker have any goal to fulfill that spec or define your own spec?

So, we do have a spec, and libcontainer, for containers, because that’s actually where the execution driver is, and then we have a spec for images and Docker itself. But honestly I think we both have the same goals, CoreOS and us; we both love containers and want containers to succeed. So I think working together is the best thing here, and I know that at least Brandon has been helping out on our imaging stuff now. But I know there was that weird thing where everybody freaked out and thought it was like a war; at the end of the day we have more in common than with the rest of the tech world, so we might as well work toward the best solution there is.

   

16. So there is ongoing collaboration between the two. For a final question: I know you are a hardcore Linux fan, right? Do you have any pet projects you would like to share with our readers, besides Docker, in this area?

That’s hard. I really love Linux; sometimes I just mess around with the kernel on my own, and that can get dangerous. But I would just say, if anybody else is scared to get into something like that, it’s really cool and fun, there is no need to be scared, and it’s legit.

Manuel: Well, thank you very much, Jessica. It was nice to have you with us.

Thanks.

Mar 29, 2015
