Interview with Erin Schnabel on the Liberty Profile
Interview with Erin Schnabel by Alex Blewitt, on Oct 24, 2014.

Bio: Erin Schnabel is the development lead of the Liberty profile for WebSphere at IBM.



1. I am with Erin Schnabel at QCon New York 2014. Erin is the development lead of the Liberty Profile for WebSphere at IBM. Erin, I wonder if you could just start off with telling what the Liberty Profile is.

The Liberty profile is intended to be an approachable profile for WebSphere Application Server. It's focused on being friendly for development, being quick to start, being very composable, and being production-ready, especially for the web profile. We have a full end-to-end story, from development all the way to production, and we have very good tools integration. That has really been our focus with the Liberty profile as we add additional capabilities to it — to ensure that there is a really good integrated experience.


2. The Liberty profile is based on OSGi, which is a modular architecture. How easy is it to move from the original implementation into a more modular one?

We actually had a really good time making the Liberty profile. We got to, in a way, start over with the kernel, the lowest level of the Liberty profile, and we got to do a lot of really cool things to enable the kind of behavior that we wanted. The Liberty profile allows you — and this is one of those things that is really nice from a development point of view — to change the composition of your server based on the kind of application that you're running. While I am developing my application, I decide I need this, so I add some more features and the server grows; then I decide maybe that wasn't such a good idea, I back them out again and it shrinks. To enable that kind of dynamic behavior for the shape of the runtime, we really use and abuse OSGi services quite a bit. I gave a talk this week about the services that we use at the kernel layer: Configuration Admin, Metatype and Declarative Services. They are the power-house at the core of our kernel. We do a lot of really, really nice things to build a runtime that reacts and interacts well with users at all stages of the life cycle of the product, from development to production.


3. Did moving to an OSGi runtime affect the startup at all?

We didn't really move because there was OSGi in our full profile as well, but we did — when we were working with the kernel in Liberty — we did focus much more on creating an environment that was focused on OSGi services, using and abusing that service life cycle and that absolutely changed the start-up behavior and that was the goal. That was what we were trying to do — it was to make sure that things could start in parallel, start and stop, come and go, all the dynamic things that you can do when you embrace and love OSGi services. So, we would not have the dynamic runtime that we do if we weren't fully committed to the OSGi model.

Alex: So there is a lot of dynamism that you can use once it's up and running to be able to add new components or remove them. I was thinking about the start-up time.

No, no, no. The start-up time is just as important. At start-up we are reading from configuration to determine which features we're starting, which determines which bundles we're starting, which determines which services we're starting — and all of that is happening in parallel, based on dynamic injection of dependencies and all of the characteristics that you have with OSGi services, especially the combination of Declarative Services, Configuration Admin and Metatype. That combination of things is just incredibly powerful, and we have been able to do some really insanely awesome things with it.


4. The declarative services allow you to build up components and assemble the application in terms of components. How did the meta-type and the configuration admin fit in with this as well?

Declarative Services does allow you to define your services, start your services, and have rules about how many instances you get. What we found really powerful about Declarative Services is that you create your service instances based on input from the configuration. So when you enable a feature in the Liberty runtime, that installs a set of bundles, and those bundles bring in their configuration descriptions — so Config Admin now knows that these bundles define this configuration. When you have Config Admin and Metatype together, and the metatype says that a given configuration is a factory PID — I am getting really detailed here — then each factory configuration causes DS to create a service instance. So it's all very related. There is no difference between server start time and reorganizing my features later; it's the same process in both cases. We're reading the config, reacting to the config, services are coming and going in response to those configuration changes, and the entire runtime recomposes itself based on what's in your configuration.
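To illustrate the interplay she describes, here is a minimal sketch using the standard OSGi Metatype and Declarative Services XML formats (the `com.example.channel` names and attributes are purely illustrative, not from Liberty itself). The factory PID declared in the metatype tells Config Admin that each matching configuration element should produce a new configuration, and the DS component with `configuration-policy="require"` produces one service instance per such configuration:

```xml
<!-- OSGI-INF/metatype/metatype.xml: describes the configuration to Config Admin -->
<metatype:MetaData xmlns:metatype="http://www.osgi.org/xmlns/metatype/v1.2.0">
  <OCD id="com.example.channel" name="Channel">
    <AD id="name" type="String" required="true"/>
    <AD id="port" type="Integer" default="6667"/>
  </OCD>
  <!-- factoryPid: each matching configuration yields a new configuration instance -->
  <Designate factoryPid="com.example.channel">
    <Object ocdref="com.example.channel"/>
  </Designate>
</metatype:MetaData>
```

```xml
<!-- OSGI-INF/channel.xml: DS creates one component instance per factory configuration -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="com.example.channel"
               configuration-policy="require">
  <implementation class="com.example.ChannelComponent"/>
</scr:component>
```

Removing the configuration again would cause DS to deactivate the instance, which is the "services coming and going" behavior described above.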


5. Are these configuration changes done by administrators or end users or by applications?

We made a simplifying assumption that we have yet to break. It's a design choice on our part: all changes to configuration come from server.xml, and server.xml is owned by the user. The user could be a developer working locally in their workspace, and so they might tweak server.xml. If they are working with the WebSphere developer tools, the tools will tweak server.xml for you. For example, if you are writing an application in the tools, they will change the file monitoring settings, because when you are working in the tools you don't want the server to notice every file change. The tools are smart enough to know when you have written enough of your app that it is actually worth refreshing the server, so they'll change a setting like that.

And if you deploy your app to the server from the tools, the tools will add into your server.xml the application element you need to run your application directly out of your workspace, which is also really nice. At production time, it's then usually the ops team that will tweak server.xml and do that kind of configuration. So the core policy that we have is that your configuration comes from one place — server.xml — and/or its includes, so you can compose your configuration if that makes more sense. The configuration is intended to be human usable. It's not complicated; it's configuration by exception. We are very stringent about making sure any component that we define works well out of the box, so that you don't end up with a yucky experience and you don't have to configure anything unless it's something that you really need to define, which keeps things small.
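A minimal server.xml conveys the configuration-by-exception idea (the feature names here are real Liberty EE6-era features, but the port and application are illustrative): enabling a feature pulls in its bundles along with their metatype defaults, and only deviations from those defaults need to be written down.

```xml
<server description="sample server">
  <!-- Enabling a feature installs its bundles; their defaults apply automatically -->
  <featureManager>
    <feature>jsp-2.2</feature>
    <feature>jdbc-4.0</feature>
  </featureManager>

  <!-- Only settings that differ from the defaults appear in the file -->
  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080"/>

  <application location="myApp.war"/>
</server>
```

Adding or removing a `<feature>` element while the server is running is exactly the "grow and shrink" behavior described earlier: the runtime recomposes itself from the file.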

Alex: So the server.xml drives config admin and config admin uses the meta-type to create the services.

Exactly. And it's an insanely awesome thing. As a development lead, I have really enjoyed watching different parts of our development team, because they are coming from a very different model. When they first see it they are like, “What just happened? I am not understanding this”, and then you start to see the gears turn as they figure out, “OK, so if I re-arrange my config this way, then the services get injected that way and the services happen this way”, and then you see things start to click. We have some teams that are just pushing now. They get it and they are pushing that envelope, pushing every boundary we have figured out, and we are doing some really awesome things to make sure the pattern always works. You change your server.xml, or you configure the minimum amount in a server.xml, and the rest just happens or cleans up — that kind of gymnastics has been very fun.

Alex: So, the Liberty profile and the Web Profile look after creating things like Servlets or database connections.

Yes. The Web Profile is a Java EE spec. Right now it's EE6 — it's the EE6 Web Profile.


6. Does the Liberty profile allow developers to inject their own OSGi services?

It absolutely does. A lot of the time we focus on the EE profile, because WebSphere is an application server, so JEE applications are kind of what we are known for. But you can write your own extensions, and people do that very often — we've actually seen two primary use cases. People can extend the runtime to provide their own features, say a custom user registry. Or you have the case where there is some foundation of APIs that the ops team, for example, wants to make sure all of their application developers use. They can package that foundation as a feature, and then all of their developers build on it — they are effectively supplying their base API layer as a feature on top of Liberty, and that is what all their devs use, which makes things much easier later.
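As a sketch of what such an extension looks like, a Liberty user feature is described by a feature manifest — an OSGi subsystem manifest listing the bundles the feature contributes. The headers below follow the general shape of Liberty feature manifests, but the symbolic names, version ranges, and bundle name are illustrative assumptions, not a feature that ships with the product:

```
Subsystem-ManifestVersion: 1
Subsystem-SymbolicName: example.customRegistry; visibility:=public
Subsystem-Type: osgi.subsystem.feature
Subsystem-Version: 1.0.0
Subsystem-Content: com.example.user.registry; version="[1.0,1.1)"
IBM-Feature-Version: 2
```

Installing the manifest and its bundles into the user extension location makes `example.customRegistry` available to any server's featureManager, just like a built-in feature.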

My favorite is — I am a geek, I use IRC, so I wrote an IRC bot that sits on top of the Liberty kernel, and it does not use any of the JEE stuff at all, because it's an IRC bot, so we can minify things, which is really fun. It's my favorite thing. I can write my little server; I have my little extension that makes this IRC bot into a proper bundle, with its metatype, the whole thing. I can put my channel configuration in my server.xml, it starts up, and it gets my little bot running. So I test all that locally and then I use the server package command. We have a special syntax, include=minify — which is the best thing ever — and it gives me a little zip that tosses all the stuff that I don't need. So that little tiny zip that I built, with my stuff, is what I put out on my server.
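The packaging step she describes looks roughly like this from the Liberty install directory (the server name and archive name here are illustrative):

```shell
# Package only this server's config plus the features its server.xml actually uses
bin/server package ircbot --include=minify --archive=ircbot.zip
```

The resulting archive contains a runnable runtime trimmed to just the enabled features, which is what makes the IRC-bot-sized deployment possible.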


7. Is that minified zip essentially a standalone runtime package, with the bits of the Liberty profile that you use but not the bits that you don't?

Yes. Exactly. It's awesome. We have cases, for example, where people are using Liberty on embedded devices, they are using it on appliances, so they are using it in places that have constrained storage, so being able to trim out all of the stuff that you don't want is really important for those kinds of environments. So this allows you to test locally, you can have more stuff so you can add and remove and it's convenient for development as you figure out what it is that you want to use or change your mind six times, whatever the thing is. Then you can just package what you want and push it to the places that you need it.


8. Is it possible to run, say, Liberty on a Raspberry Pi?

It is, and we have cars that run it. They are at Devoxx actually, I think — Devoxx UK is running right now, and I know on Twitter there are pictures of the cars that are running Liberty. We have had several demos, and I know there are YouTube clips out there of Liberty running on a Raspberry Pi. So, yeah, it's absolutely possible.


9. And presumably, the fact that you can shrink these runtimes down means that it's possible to have custom configurations for small devices while still being able to run on the big metal, the big servers, without any changes?

Yes. That is the point. The other thing, from a WebSphere Application Server family point of view, is that we have the guarantee that if you write your application on the WebSphere Liberty profile, test it locally and you are happy with it, that application can be promoted without change to the full profile — WebSphere ND, for example.


10. Do you think that we are seeing a resurgence of DevOps being able to take these sorts of packages and then deploy them onto the big iron servers without seeing these changes?

I think so. I think the big driver for some of this, in what we see, is a lot of new patterns around situational applications. You still have more traditional EE applications that are being used to run and interact with the back end, with a lot of the big data. But then you have these little tiny situational applications that are much smaller, and running them on a little tiny runtime — a micro-service kind of approach, where you have server definitions and configuration targeted to these individual situational apps — is the pattern that we are seeing a lot of.


11. Do you think that the ability to have additional servers being plugged in adds to the dynamism of the system as well? So that you can have multiple applications co-hosted together on one server or running as separate servers?

Well, that is very interesting, because we are seeing an interesting shift there. There are cases where you have a lot of applications deployed to a single server, and that is certainly still possible, but what I am seeing is that people are actually bringing it down so that you have a Liberty server focused on one application. The nice thing that we have is that with the Liberty runtime you can configure multiple servers against the same installation, and each one of those can have a different combination of features, but they can all be co-hosted on the same set of binaries, basically. If you use the IBM JDK, that has shared classes, so you can share classes between all of those servers, which improves your density and your start time, because you are not loading all the classes again. That can be a pretty potent combination: you get your servers targeted and focused, which gives you really good isolation in the case where that's important and you don't want your apps to interfere with each other. You can set them up as separate things and you still get shared classes and all that stuff.
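Concretely, this mirrors the standard Liberty install layout, where one copy of the binaries hosts many server definitions (the server names here are illustrative):

```
wlp/                      # one set of binaries, shared by every server
├── bin/
├── lib/
└── usr/
    └── servers/
        ├── appA/         # each server has its own features and config
        │   └── server.xml
        └── appB/
            └── server.xml
```

Each directory under `usr/servers` starts as its own process, so the servers are isolated while still sharing binaries (and, on the IBM JDK, shared classes).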

Alex: So they are separate processes at runtime, but class loading and JIT compilation are done sort of centrally for that box.



12. Do you use any of the remote services to allow scaling between different systems or do you see scaling as being something like an HTTP layer where a proxy does the forwarding?

The pattern that Liberty uses is fronting with an HTTP proxy, that kind of thing. We don't use Remote Services at the moment, mostly because the patterns that we are focused on aren't so much about remotable OSGi services. We might go there, but we're not there yet. The primary use case for Liberty right now is really the back end for web applications. Where's your REST back end? You can put that on Liberty. Where is all of your JEE Servlet kind of behavior? That can all run on Liberty. And actually, what we are seeing with OSGi applications is that people want to use OSGi for their applications for the same reasons that we wanted to use OSGi for our runtime — they want separate teams working on separate modules, being a little more independent in what they are doing, and they want to compose their application using OSGi services and bundles and all that stuff. But at the end of the day, they are still providing a web endpoint via a Servlet or that kind of thing. So for us, so far, exposing remote services in that way hasn't been something we've pursued.

Alex: So the main point of the HTTP interface, I guess, is to talk to JavaScript web clients or other services, and then the implementation behind the scenes uses OSGi to get the benefits of dynamism and modularity.

Yes, at the kernel layer. With WebSphere — and again it has to do with that application guarantee — we are not using the OSGi HTTP Service, for example. Not because we don't like it, but because the application server that we are providing, which provides the Servlet spec support, is the same engine that runs in the full profile. So there are some OSGi services that we don't use, and it's not because we don't like them; it's just because they don't fit the use case that we are trying to satisfy, at least not right now.


13. Are you seeing more people looking at building, if you like, vanilla OSGi applications, as opposed to the traditional JEE stack?

What we are seeing is that people have their web application, and what they want to be able to do is allow their teams to work independently — that is when they start, at the application layer, wanting to break their apps up into bundles. It's because it gives them isolation: “Here's your API, here are your exports, here are your imports, and here is how you stay out of each other's hair” — that kind of thing. That is where we see new OSGi applications being created; it's usually to accommodate more distinct development teams that are pursuing different parts of an application separately.


14. What advice would you give to a team that currently has a monolithic application and they are thinking about breaking it down for modularity or for an OSGi runtime?

You know, that is interesting, because I would answer differently now, I think, than I would have a few years ago. It used to be that the best practice was to try to focus on your imports and your exports and separate your bundles out. What I would say from our experience — from watching our development team come over and think about this in a new way — is that I would start with the configuration. For us it's about the user experience, which might not apply for certain applications, but go with me here for a second. When you think about how the user interacts with configuration, it gives you the top of what is going to be your dependency-injection chain. The way it used to work, in an imperative model, you start with the master object that goes and asks for all the services and cobbles everything together, and you go down from there. You usually get the inversion of control when you think, “OK, how is the user going to configure this? That maps to these services, which means I need these services first.” Then you start to see people realize that they have basically taken the old model and turned it all upside down, and things start to click.

So if I was going to say, “Here is a monolithic web application and you want to break it down into parts”, there are a couple of different considerations. One is the functional units: this part of the application is focused on this kind of function and that part is focused on that function. But when you think first about how they have to interact, how they should be related to each other, that starts giving you guidelines for how you should break it up. I would definitely think about the service dependencies first. Think about how you would use the service registry to build your relationships.

That is the hardest leap to make. Going with your package imports and exports is “easy” — you just build the metadata, you say what it is, and “that part's done”. But you did not gain anything there. What you really need to focus on is how your pieces are going to interact, and how you make sure that they can find each other when they need each other but are otherwise independent. Sometimes, when you are dealing with something that is really big and monolithic, that is the biggest mental hurdle, the biggest jump you have to make. But once people make the jump, everything gets much easier.
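The mental shift from "a master object that cobbles everything together" to lookup through a registry can be sketched in plain Java. This toy registry is purely illustrative — the OSGi service registry adds dynamics, versioning, and lifecycle on top — but it shows the key point: consumers depend only on a contract, and implementations can be swapped without touching them.

```java
import java.util.HashMap;
import java.util.Map;

public class ToyRegistry {
    // Map a service interface to its current implementation,
    // like a very static version of the OSGi service registry.
    private final Map<Class<?>, Object> services = new HashMap<>();

    public <T> void register(Class<T> type, T impl) {
        services.put(type, impl);
    }

    public <T> T lookup(Class<T> type) {
        return type.cast(services.get(type));
    }

    // A tiny service contract with swappable implementations
    interface Greeter { String greet(String name); }

    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();
        // Consumers only ever see the Greeter interface, never a concrete class
        registry.register(Greeter.class, name -> "Hello, " + name);
        System.out.println(registry.lookup(Greeter.class).greet("Liberty"));
        // Swapping the implementation requires no change in the consumer
        registry.register(Greeter.class, name -> "Hi, " + name);
        System.out.println(registry.lookup(Greeter.class).greet("Liberty"));
    }
}
```

In a real OSGi runtime the registration and lookup happen declaratively (via DS), and services can appear and disappear at any time, which is what forces the decoupled design described above.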

Alex: And once you start thinking in services, then you can start designing in services and it allows you to innovate faster.

It does. It really does. That is what we are seeing. I can remember some of our teams coming over and really struggling to get it; once it clicked, they started doing some insanely awesome things, and it's because they are free to do it. They understand this approach, they can try new things, they can figure out how to reconnect stuff or how to replace things, and it has been very liberating.

Alex: Hence the Liberty profile.

Yes. On many levels, yes. But it has been interesting as a lead to watch people hit the wall, watch it hurt and then they get it and then all the lights go on and they run and it's really cool to see that happen.

Alex: That's great. Erin Schnabel, thanks for your time.

