
Programming the Cloud with TypeScript


Summary

Luke Hoban reviews the unique benefits of applying programming languages in general, and TypeScript in particular, to the cloud infrastructure domain; highlights a few of the projects that are leading the industry shift in this direction; and shows examples of using TypeScript and Pulumi to build everything from serverless applications on AWS to Kubernetes applications on Google Cloud.

Bio

Luke Hoban is the CTO at Pulumi where he is re-imagining how developers program the cloud. Prior to Pulumi, he held product and engineering roles at AWS and Microsoft. He is passionate about building tools and platforms to enable and empower developers, and is a deep believer in the transformative potential of the cloud.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Hoban: Today I'm going to be talking about "Programming the Cloud with TypeScript." I'll go into detail on all the parts of that, but first I want to talk a little bit about why I in particular am interested in this intersection of programming languages and developer tools on one side, and cloud infrastructure and deployment on the other.

Part of that's my personal history. As Richard mentioned, I worked on programming languages at Microsoft for many years: I worked on C# and F#, worked on the standards body for JavaScript, and then helped to start the team that built TypeScript. So a lot of time spent thinking about how we can use programming languages to enhance developer productivity and to make developers, application developers in particular, enjoy what they're doing and be more productive generally. Then a bit more recently in my career, I went and spent time working on the pure cloud side of things, at AWS on some of the instance platforms in EC2. Most recently, I have been part of the founding team for a startup called Pulumi, which is trying to bring a lot of the ideas I'm going to talk about today into the world: ideas about combining everything we've learned about developer tools and programming languages, and how we use those to make application developers productive, and bringing all of that into the cloud infrastructure space to make cloud developers productive as well.

The Evolution of JavaScript

To set some context, I wanted to give a biased evolution-of-JavaScript story that sets up how I think this cloud piece fits into the JavaScript world. Of course, JavaScript started out only available inside browsers as part of the web platform. When you think about JavaScript inside the web, it's really focused on a set of APIs that are quite constrained, the APIs of the web browser, the DOM APIs, but those APIs have over time evolved to become richer. Along with them becoming richer, we've seen frameworks grow up on top of them that enable a much richer user experience, the kind that today's modern frameworks like React support.

Over the 20 years since JavaScript was first introduced to browsers, it's evolved into many other areas of software development and has become probably the world's most-used programming language today. Probably one of the biggest additional areas it's grown into is, of course, the server with Node.js. When you look at that, the key difference is we have the same language, but it's a very different execution environment for that code. We're talking about the operating system APIs and some abstractions over those for working with the file system or working with networking.

Of course, that's not enough; we need that ecosystem of libraries to uplevel the experience so that we're not just working at the raw operating system level, and this is where we bring in HTTP API frameworks, app frameworks, and many other frameworks available through NPM. In both of these cases, it's that combination of JavaScript as the language, a nice flexible language that can be applied to many domains, put in a context where it has some APIs available, and an ecosystem that grows on top of it to make rich frameworks available for that.

I really view the next step in that evolution as going into the cloud. One of the reasons I view that as the next step is that when you look at the evolution from web to server, that was really going from just being in the client experience, where you're only working with the code that runs inside the web browser, to the developer getting to own more of that stack. They're owning the API server that's serving that application and backing its APIs. They're owning more of the end-to-end experience for that application.

The piece that developers doing this Node-plus-browser JavaScript don't have today is control over the cloud side of things. They still have to provision whatever compute environment they need inside the cloud to support that. They still need to provision and manage data stores to back those APIs, their SQL Server databases or whatever, and they need to figure out how to deploy that code into those cloud environments. This last piece is really the piece where, A, JavaScript isn't really available yet today, and B, software in general is really not available there today. You can't really program this piece; you have to do a variety of things very manually in this environment. This is really what I want to talk about: what it looks like to bring JavaScript into this world of cloud and extend that experience for the developer all the way through to the delivery and the cloud infrastructure that they run on.

Infrastructure as Code

Folks who have worked in the cloud space may be saying, "Well, this is a solved problem. We have infrastructure as code, we have tools like CloudFormation, and Terraform, and Kubernetes YAML, and these sorts of things." These give the ability to write some code and describe my infrastructure. The problem with those is really that infrastructure as code is not infrastructure as code, it's infrastructure as text, and I think this is an important distinction. This infrastructure as text does have many benefits. By writing it down in a file, it becomes a repeatable thing; I can version it, I can put it into my source control. This is really important, but I don't get all the benefits of software. When I think code, having come from the programming language world, I really think of all the benefits you get from applying software and software engineering practices to that.

We really don't have those today in that world, and so I often call this instead infrastructure as software. What does it look like to bring software engineering into my cloud infrastructure? By way of an analogy, a thing I always think about when I think about this space is where application-level software development was, say, 30 or 35 years ago, when you had some assembly code. This is a program, a piece of software that can run, but it's missing a bunch of things that I would expect from a productive development environment.

I don't have variables, and this is a big missing piece. I don't have loops; while I do have loops via jumps or whatever, they're very low-level and very hard to work with. I don't have functions, and this is one of the first things that becomes really significant: the fact that I can't define functions means I'm copying and pasting a lot of code when I'm in an assembly-style environment. I'm very low-level, I'm just grabbing the thing that worked, copying it, and then changing a few things. Going further with that, I don't have abstraction; I don't have classes and the ability to define the interfaces to components in my system. I don't have the ability to go higher level and describe what a new abstraction looks like that I can work to, instead of working at this level that I see on the left. Finally, I don't have the C standard library; I'm working at this very low level of assembly, and I don't have that standard library that's built on top of those abstractions, and I don't have the types to describe what those abstractions would look like. If you think about this in the context of the assembly code, the solution is, "Hey, I'm going to invent a programming language. I'm going to invent C, and it's going to give me all these capabilities and it's going to allow me to up-level the way I think about my software. I don't have to think about it at this low level of granularity, these individual atoms here. I can think about my software, in this case, as ways of composing together functions and components on top of a standard library, which is rich and has many capabilities."

C was just the beginning of this; we then had C++, and Java, and Go, and Python, and Node.js, and what have you. We keep moving that abstraction level up, but the key is to move from this assembly level, where everything is at a very low level of granularity, up to these higher levels of abstraction. That's exactly where I see infrastructure as code today. It's very much at that assembly level; there's a ton of copying and pasting, that's the norm for working with infrastructure as code. There aren't these tools like variables, and loops, and functions, and classes, and components.

Folks who have worked with CloudFormation or something similar will be familiar with what it takes to do very simple tasks. In this case, it's just to host a single REST endpoint in AWS with a lambda backing it. We're talking five pages of CloudFormation, and this is typically something that folks are going to copy and paste around every time they need to do this sort of thing. It's very cumbersome; we're not enabling the kind of developer productivity that we expect from software-based solutions.

Demo: Infrastructure as Software

To give you a sense of some alternatives to that and what it can look like with JavaScript and TypeScript, let me do a quick demo over here. I'm going to do this demo using Pulumi, which is one of the tools that I work on. In a second, after the demo, I'll talk about some of the other tools in this space, but I'll focus on Pulumi just because it's obviously a tool I know well, and because it's one that can highlight some of these patterns of what we can do.

The first thing you notice is I'm just in a TypeScript file here; I can use TypeScript to describe my infrastructure. I have some imports here, so I can import, for instance, AWS, and this gives me access to AWS libraries. Let me just say const bucket = new aws.s3.Bucket. One of the first things you notice is that I have access to the entire API surface area of AWS inside this environment. Everything that's available, all the resources that are in AWS, I can write code that manipulates those things. In this case, I can write code that says, "I want to have a bucket in my environment," and I'll just call it "my-bucket." Now I'll say, export const bucketName = bucket.id.
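In code, that first program is roughly the following (a sketch assuming the @pulumi/aws package; the names match those used in the demo):

    import * as aws from "@pulumi/aws";

    // Declare an S3 bucket as part of the desired state of this stack.
    const bucket = new aws.s3.Bucket("my-bucket");

    // Export the bucket's ID so it shows up as a stack output after deployment.
    export const bucketName = bucket.id;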

This describes the desired state of my infrastructure, which is that I have a bucket, and I'm going to say "pulumi up." One of the key things with Pulumi, and with many of these systems, is that instead of me just running this program, instead of me saying "node" and running this program, I say that I want to update my environment to match the description in this program. In this case, I want to say, "I want the environment that I'm running in to have a bucket." Because I'm updating a live environment, Pulumi will show me a preview of what changes are going to happen. In this case, you see it's going to create an AWS S3 bucket. I'll go ahead and say, "Yes," and I'll deploy that.

We see it says it's creating this bucket; I've actually created a bucket in S3, in this case, using my TypeScript program. I can just come over here, see that bucket, go over into AWS, and we see I have a bucket in AWS. The next thing I might want to do is make changes to this bucket. I want to say here that I want to set versioning, and I want to set "enabled" to "true." I can make that change to the bucket, and now I want to deploy that change into my environment. I can come over here and just say, "pulumi up." The key thing here is that I've changed my program description, and I'm now redeploying this application. This is going to show me what difference it wants to apply to my live environment.

We notice a couple of things. One is that the bucket itself is being updated, so I don't have to replace it; I can update it in place to add this versioning, so it says it's updated and the stack itself was unchanged. I can even go and look at the details and see what was added in here. I can say "Yes" to go ahead and deploy that change. This will actually enable versioning on the bucket in AWS. A very simple example of deploying infrastructure to the cloud using TypeScript.
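The change being described amounts to adding a versioning argument to the bucket declaration (same sketch as before, with the new argument):

    const bucket = new aws.s3.Bucket("my-bucket", {
        versioning: {
            enabled: true,
        },
    });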

TypeScript, of course, and JavaScript in general, give me many benefits here. The fact that I was able to just say "aws.s3." and get a completion list that gives me access to everything inside AWS, this alone is actually a very powerful capability, the fact that I can discover and understand the entire API surface area. These cloud providers have API surface areas that are significantly larger than the Node.js API or the web browser APIs. They're absolutely enormous collections of services and capabilities. Being able to have these available, to browse them, to see examples of how to use them, like we can see over here, it's a really powerful thing to bring this experience inside the IDE.

Of course, I can also see things like, if I make a mistake, I get that feedback right away. This is a problem if you're doing CloudFormation, for example; that error is something you might not catch until you're partway through deploying the template into AWS. We're able to bring a lot of that error-checking experience of TypeScript to bear and use it to catch errors in my infrastructure really early in the process.

That's a very simple example, but so far it's really just deploying infrastructure; I'm not really taking advantage of a programming language. Let me show how we can use some more JavaScript and Node.js things in this environment. I'll say const folder = "./files", which is a folder that I have on my disk, and I'll say for const file of fs.readdirSync, so I'll read that folder. Then for each file, I want to say, "I want that file to be synced with S3." I want to put an object in my S3 bucket, so I'm just going to say new aws.s3.BucketObject, I'll name it after the file, and I want to put it in that bucket. I want its key to be the file name and I want the content of the object in the bucket to just be the contents of reading the file, so I'll just join the folder and the file.
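The loop being described looks roughly like this (a sketch; it assumes Node's fs and path modules, a local ./files folder, and the bucket declared earlier):

    import * as aws from "@pulumi/aws";
    import * as fs from "fs";
    import * as path from "path";

    const folder = "./files";

    // Create one bucket object per file on disk, keyed by the file name.
    for (const file of fs.readdirSync(folder)) {
        new aws.s3.BucketObject(file, {
            bucket: bucket,
            key: file,
            content: fs.readFileSync(path.join(folder, file)).toString(),
        });
    }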

This is a very simple example, but it shows some key things: I'm actually using some Node.js APIs as part of how I build this up. You could imagine, if I needed to parse some file format or read in a CSV file to drive this process, or what have you, I could do all that inside code, but ultimately I use it to describe what set of bucket objects I want to create.

When I deploy this, it should go and say, "I want to take all the files in that folder and create bucket objects for them." In this case, I think it should be two objects, and I see it's going to create those two files. It's not going to change the bucket; that's the same bucket as before. I can see the details of those objects that I'm going to deploy, and then I'll just say, "Yes." This will go ahead and deploy those, but while that's going, I'll just keep going here. We saw here we're using variables and for loops as part of building up my infrastructure, but now I may want to take this and give it a name. I may want to call this function syncFilesToS3, and it'll take a bucket and it'll take a folder. I've created a function for that and now I can just call it, I can say syncFilesToS3.

I can pass it the bucket and "./files". Now that I've moved that into a function, if I do a pulumi preview to just look at the preview of this, we'll see that even though I changed my code and refactored it to have this reusable component, the preview will say, "No changes need to be made," because I didn't actually change the set of resources I need in my cloud environment. It will say that no changes need to be made as part of my deployment here, so four resources unchanged.
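Factored out, the same logic is roughly (a sketch; the function name is the one used in the talk):

    function syncFilesToS3(bucket: aws.s3.Bucket, folder: string) {
        for (const file of fs.readdirSync(folder)) {
            new aws.s3.BucketObject(file, {
                bucket: bucket,
                key: file,
                content: fs.readFileSync(path.join(folder, file)).toString(),
            });
        }
    }

    syncFilesToS3(bucket, "./files");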

That's a very simple way to factor that kind of thing out, and this kind of factoring is a very simple example, but I could then go and take this, put it in an NPM package, and share it with others. Now I've got an ecosystem for managing these cloud resource components, and I can take that whole software engineering approach all the way into the package manager with NPM. We have several libraries that we've built, and many folks in the Pulumi community are building libraries on top of that as well.

For example, in a totally different domain, let's say I wanted an AWS VPC, a virtual private cloud. That's something that takes a couple of pages of CloudFormation to set up a standard, typical kind of VPC. We can say, "Hey, there are a few very common patterns for that and we want to make those easy to use," so I can say const vpc = new awsx.ec2.Vpc. This awsx is a library we have that has some of these higher-level components. This will actually go ahead and create all the stuff I need to make my VPC.
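That higher-level component is roughly a one-liner (a sketch assuming the @pulumi/awsx package):

    import * as awsx from "@pulumi/awsx";

    // One component; behind it, awsx creates subnets, route tables, NAT gateways,
    // and the other pieces of a standard VPC.
    const vpc = new awsx.ec2.Vpc("my-vpc");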

Let me do that pulumi preview one more time to go and see what this will try to build. This component actually hides, behind that simple new VPC, quite a lot of stuff being built up. In this case, it's going to build 30 resources by default. It's going to take care of building public and private subnets, putting route tables in those, and putting NAT gateways in the private subnets, really setting up all the infrastructure you need to have a properly functioning VPC, but making it really simple for the developer to do that. They can just say new Vpc. This is what we're trying to do with that C standard library idea of up-leveling the abstraction and introducing simpler primitives to compose together; that's really what we want to do with these kinds of libraries.

Other Similar Approaches

That's some of the fundamentals; let me go back to the slides. As I mentioned, Pulumi is not the only tool thinking about this problem and trying to attack it. There are two other tools I'll mention: one is Atomist. Atomist has a booth here, so maybe some folks have had a chance to chat with them. The other is the AWS CDK. Atomist is really looking at this problem of bringing software engineering practices in, and bringing JavaScript and TypeScript in, but from a slightly different perspective. Instead of cloud infrastructure, they're looking at software delivery pipelines, so really replacing the Jenkins YAML or Jenkins configuration with code and with software, and for many of the same reasons. All the same points I made in the beginning really apply in the same way for something like Atomist.

Then the AWS CDK is bringing a similar kind of JavaScript and TypeScript experience to authoring CloudFormation templates, so there are similar benefits there. In fact, the Atomist CEO wrote this software-defined delivery manifesto, which I think is an interesting thing. For folks who are interested in this, I definitely suggest checking it out. It really talks about why we should care about treating these systems, which today we manage with markup files, our software delivery pipelines and our cloud infrastructure, as proper software engineering artifacts. These are critical parts of our applications, and we should be treating them with the same level of software engineering rigor as we do the rest of our code.

Process Models

One of the things I mentioned along the way in the demo earlier was that one of the big differences between using JavaScript for this cloud deployment aspect and using it with Node.js is that we have this notion of that Pulumi update I kept doing, which takes some existing infrastructure and updates it to be in a new state. I wanted to dive a little deeper into what's going on there and how it's a bit different from the other environments. I thought a good context for that would be to talk about what the process model looks like in all these different worlds where we execute JavaScript.

In the web, we have the page as our execution environment, and this is a very transient sort of thing. People visit a page, go to another page; if they refresh, they reload all their JavaScript and start from scratch, loaded through script tags, and it's fundamentally stateless. There is access to local storage and some of these things that you can use to externalize a bit of state, but the page itself is this very transient thing that goes away, and you expect your code to have to rebuild the environment every time it loads.

With Node.js, we're running a familiar process model; we're an operating system process. We have a finite lifetime. Every time I want to make a change to my Node.js application, I have to make a change to my JavaScript code, kill my old Node process, and start up a new Node process. That's how we make changes: end the process and start a new one. I have some binary format that I build to execute that thing. These processes are largely stateless; they may communicate with a database or something that's running somewhere else, they may write files to disk as a method of storing things across process lifetimes, but the process itself is fundamentally a stateless thing and expects to be scaled out and rebuilt many different times in its life.

The different thing in this cloud world is that the processes we're talking about are processes that live forever, in a sense. My deployment of that application I was just showing you, I deployed it and then I just kept modifying the existing thing. I never killed the whole stack and brought up a new one; I had an S3 bucket and I don't want to destroy that whole S3 bucket and build a new one. I want to just make tactical changes to it as I evolve it, so the model of execution of these applications is a bit different. I still want to think about them as something I can program, but those programs are things whose processes live effectively infinitely, and I have to hot patch them in a sense.

This is why you see "pulumi up" and why you see some of these commands being a bit different; I'm not just running Node and then a program. It's because I really want to think about this as something where I'm hot patching that whole application state as I go. This is also why we talk about this as a desired state model: my program represents the desired state of my infrastructure, and every time I do an update, I want to drive it towards a new desired state. Because of this, we can deal with fundamentally stateful applications. I can describe my database inside this environment, I can describe my S3 storage inside this environment; I can actually manage things which are fundamentally stateful and expect those to live effectively infinitely as part of the application.

Implications of Managed Services

There's one other idea I want to talk about before I jump back into another demo, and that's some of the implications of managed services and the shift towards more cloud-native architectures, and how those are influencing the way these programs get written and the reasons why it's more important than ever to be able to program this environment. The thing I think about is that in the pre-cloud era, we had a few fixed VMs. We stood them up once, and we didn't really change our infrastructure that often. All the interesting stuff was happening inside those VMs. I was deploying my code in there, and I had pipelines that SSH-ed into the machine and dumped some files in certain places.

The gray part there was all the stuff I owned; I owned the innards of that VM and had to manage it. Then I had these pieces of open source software or my own code that I brought and ran inside that environment. In the cloud-native era, it's changed quite a lot. To one extent, you may look at the right-hand side and say, "Oh my God, that looks so much more complicated, there's so much more going on." It's true, there is a lot more going on, but there are two reasons why it's attractive. One is that the gray box has got a lot smaller; the pieces I am operationally responsible for became smaller and more targeted around the stuff that's my business value. A lot of the operational burden got moved off into these green things, which are owned by somebody else.

The other part is that there are lots more of these edges between components, and a lot of what I think of as my application is no longer just the stuff inside these gray boxes. It's actually the logic about how these things are stitched together; that's really where a lot of my application lives. I might have architecture diagrams where I draw something like this, and the way that everything is hooked up to CloudWatch, or the way that my lambda is triggered off of my API gateway, or any of these sorts of things, is really critical. I need to be able to think about all of the different edges on this graph as part of my software. I need to be able to describe those as part of my software and version them and manage them in a robust way.

This is part of why we think it's increasingly important to use tools like Pulumi that let you program the cloud environment. It's not because we think you need to program that cloud environment on the left; it's because we think applications increasingly look like what's happening on the right. That really demands a more complex and more integrated approach to how you think about describing your software, and not just the software running inside one of these Docker containers or lambdas, but the actual infrastructure that ties it all together.

Demo: Breaking down Barriers between App and Infrastructure

With that, I'll show a quick demo of how we can use some of the Pulumi capabilities to break down some of these boundaries and think about that whole architecture on the right as a single piece of code. Let me come back into this example; I'll just get rid of all of this code here, but I'll leave the bucket there. If you saw in that example on the last slide, one of the things was I had an API gateway that was firing off a lambda. This is the kind of thing where often, in my infrastructure, I want to describe two pieces of infrastructure connected together, and there's some logic in how they're connected. AWS, of course, makes this available through Lambda and through the ability to do serverless functions.

We can use that in something like Pulumi by saying bucket.onObjectCreated. Now I can actually say, "When an object gets created in this bucket, I want to run a piece of code." I'll give it a name and just write a little function, and here I'll just say console.log. I just added a couple of lines of code there; let me see if I can run this update while I talk about what was going on. This is just a couple of lines of code, but it changes the way you might have thought about what we were just doing before. It really brings into context why it's interesting to use JavaScript here, because I'm not just describing my cloud infrastructure, I'm also describing some pieces of logic that are going to run inside my cloud infrastructure, effectively those edges on that graph.
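Those couple of lines are roughly (a sketch; the handler name is illustrative):

    // Run an AWS Lambda function whenever an object is created in the bucket.
    bucket.onObjectCreated("onNewObject", async (ev) => {
        console.log(JSON.stringify(ev));
    });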

In this case, I'm saying that when an object gets created in that bucket, I want to run this code at run time; I want to execute console.log. When I see this deploy, we'll see it creates a few resources in the cloud environment. This is a little bit of a higher-level kind of thing: it's going to create a BucketEventSubscription, which includes a Function, a Role, and a RolePolicyAttachment for that function, and then it's going to hook that up on the bucket itself. We'll see a bucket notification created that hooks up the bucket to fire that event. If I say "aws s3 cp", I'll just copy a file into that bucket. Now that we've uploaded a file, if I run "pulumi logs", it will show me all the logs from all the resources that are part of this deployment. We should see in a second or two that we actually get this log event printed out. That's the log happening inside of Lambda somewhere inside AWS, being written into CloudWatch, and then we're seeing it from our logs here. This was the event that got fired.

Now I can combine both the infrastructure and some of the application code. I'm not going to bring my entire application in here and write it inside the same environment, but these little pieces of glue that connect together various pieces of my infrastructure are key things I might want to be able to program in this environment.

That's a very simple example; I could go and do something more complex, and I won't write through the whole thing here, but aws.dynamodb.Table: I could create a table and then use that table from within here. I could get a document client for working with DynamoDB, and then I could put some item into this thing, and I could say the table name is table.name.get(). I won't finish off this example, but to keep it simple, maybe I'd take ev.Records![0].

I could do something like that and just write that file into DynamoDB; I've tied together two pieces of cloud infrastructure. Every time an object gets put in this bucket, I'm going to write it into a DynamoDB table. Now I'm managing the lifetimes of a DynamoDB table, some data storage, and an S3 bucket from within this program, but I'm also describing the lifetime of that code. If I deploy this, we'll see it'll actually create that table and update the function in place to have this new behavior. Instead of just writing out that log line, it will actually write the key into a DynamoDB table. That's the kind of thing we can do.
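Filled in, that example might look roughly like this (a sketch; the table schema and the aws-sdk calls are my assumptions, since the talk doesn't show them in full):

    const table = new aws.dynamodb.Table("files", {
        attributes: [{ name: "id", type: "S" }],
        hashKey: "id",
        billingMode: "PAY_PER_REQUEST",
    });

    bucket.onObjectCreated("onNewObject", async (ev) => {
        // This body runs inside AWS Lambda when the event fires.
        const awssdk = await import("aws-sdk");
        const client = new awssdk.DynamoDB.DocumentClient();
        const key = ev.Records![0].s3.object.key;
        await client.put({
            TableName: table.name.get(),
            Item: { id: key },
        }).promise();
    });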

To give one example of that, let me jump over and show you a richer example. I won't deploy all of these; I'll just give you a sense of some of the kinds of things you can build on top of this. This is an example that is built on some of the same things: we have a DynamoDB table, and it's a counter. Then I have, in this case, an API gateway; we have this awsx library I showed you for VPCs, and it also has a simple abstraction over API gateways that makes it really easy to set one up. Here, if someone makes a GET request to any route under the root, I want to run this event handler, and that event handler will just increment the counter in that DynamoDB table, a very simple piece of logic.

This entire app is 47 lines of code, and it's something I can go and deploy. I can deploy it as many times as I want; the entire application is described within this one file. The infrastructure it depends on in the cloud and the logic that's going to run inside that infrastructure are all defined in this one file.
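A sketch of what an app with that shape looks like (the route layout and the DynamoDB update expression are illustrative, not the exact code from the talk):

    import * as aws from "@pulumi/aws";
    import * as awsx from "@pulumi/awsx";

    // A DynamoDB table holding a hit counter per route.
    const counterTable = new aws.dynamodb.Table("counterTable", {
        attributes: [{ name: "id", type: "S" }],
        hashKey: "id",
        billingMode: "PAY_PER_REQUEST",
    });

    // An API gateway whose GET handler increments the counter for the requested route.
    const api = new awsx.apigateway.API("counter", {
        routes: [{
            path: "/{route+}",
            method: "GET",
            eventHandler: async (ev) => {
                const route = ev.pathParameters!["route"];
                const client = new (await import("aws-sdk")).DynamoDB.DocumentClient();
                const result = await client.update({
                    TableName: counterTable.name.get(),
                    Key: { id: route },
                    UpdateExpression: "ADD hits :one",
                    ExpressionAttributeValues: { ":one": 1 },
                    ReturnValues: "UPDATED_NEW",
                }).promise();
                return {
                    statusCode: 200,
                    body: JSON.stringify({ route, count: result.Attributes!.hits }),
                };
            },
        }],
    });

    export const url = api.url;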

Programming at the Level of Architecture Diagrams

What that gets to is a higher-level way of thinking about your applications. What this really lets you do is think about programming at the level of architecture diagrams. Instead of programming at the level of raw infrastructure, I can think at the kind of level I would draw on a whiteboard, and I want that to be one-to-one with the code I write in JavaScript and TypeScript. That's what we see here, for example, what I might see on a whiteboard. This is an example where I have a video file and I drop it in a bucket; whenever that bucket sees a new video file uploaded, it fires off a lambda. That lambda launches a Fargate task, a long-running piece of compute that's containerized. That task writes a jpeg, a screenshot out of that video, back into the bucket. That's finally going to fire another lambda that just logs after the process is done.

In a couple of sentences, I was able to describe what this application looks like. For those who have built apps like this before, you're probably aware this is not actually that simple to implement. The JavaScript code for this that you would write, if you're just writing the body of that lambda and maybe the implementation of that Fargate task, is very simple; that's just a few lines of code. Setting up all the cloud infrastructure to enable this architecture is probably 5 to 10 pages of CloudFormation. Even if you're using some domain-specific tool like the Serverless Framework, this is going to be somewhat complicated, particularly because you're using Fargate here and some other technologies that are outside of that. It's actually quite hard to put these things together. I think a lot of the frustration developers have with cloud technologies is that, although these are great building blocks, it's very hard to actually use them. It's very time consuming to put together robust infrastructure that sits on top of all of these building-block pieces.

To give a sense of this, let me show you what this application looks like in something like Pulumi. Here's that example, and I've collapsed a couple of things that I'll dive into in a second, but just to give a sense, that whole architecture diagram is this code right here, about 50 lines of code. I'll walk through the high-level pieces of this, and they match exactly what you saw on the slide. I've got a bucket, and that's going to be my source and my sink for this application. I've got a Fargate task; for folks who aren't familiar with Fargate, it's a serverless container offering in AWS, so this is the ability to run some Fargate-based container compute. Then I've got two kinds of event handlers. I say, "When a new object is created that has an mp4 suffix, I want to run this piece of compute. When a new file is added that has the jpg suffix, I want to run this piece of compute." Then finally I export that bucket name. It's really simple to turn that architecture diagram into this code.
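The shape of that program is roughly the following (a compressed sketch based on the description above and the awsx Fargate APIs of the time; the container environment overrides that tell the task which video to process are elided):

    import * as aws from "@pulumi/aws";
    import * as awsx from "@pulumi/awsx";

    // The bucket is both the source (videos) and the sink (thumbnails).
    const bucket = new aws.s3.Bucket("bucket");

    // A Fargate task whose image is built from a local Dockerfile containing FFmpeg.
    const cluster = new awsx.ecs.Cluster("cluster");
    const ffmpegThumbnailTask = new awsx.ecs.FargateTaskDefinition("ffmpegThumbTask", {
        container: {
            image: awsx.ecs.Image.fromPath("ffmpegThumbTask", "./docker-ffmpeg-thumb"),
            memoryReservation: 512,
        },
    });

    // When a new .mp4 lands in the bucket, kick off an instance of the Fargate task.
    bucket.onObjectCreated("onNewVideo", async (ev) => {
        await ffmpegThumbnailTask.run({ cluster }); // environment overrides elided here
    }, { filterSuffix: ".mp4" });

    // When the task writes a .jpg back into the bucket, just log it.
    bucket.onObjectCreated("onNewThumbnail", async (ev) => {
        console.log("New thumbnail:", ev.Records![0].s3.object.key);
    }, { filterSuffix: ".jpg" });

    export const bucketName = bucket.id;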

I'll show some of the details of what's going on in there to highlight a few of the other things that are interesting about this. For the Fargate task, I just want to provide an image, a Docker image that I want to deploy. In this case, the interesting thing is I could say, "Hey, hello world," or something, if I wanted to run the Docker hello-world image as part of this thing. In this case, I actually have a custom Dockerfile that I want to run inside this environment, and I've provided that in this path here. If I go into my file system and open that up, I've got a Dockerfile in that folder and it's just some custom compute. In this case, I'm installing FFmpeg and then writing some bash which effectively runs the FFmpeg tool inside this environment, but this could be anything.

This could be my Java application, or my Go binary, or my Node.js application. It could be anything I want to run in terms of compute, and I can run it inside this environment. The interesting thing is that because I said I want to take that source path and that Dockerfile and build an image from it, what Pulumi is actually going to do is allocate a registry inside AWS, locally build and push that Docker image up to that registry, and then hook that registry up to this Fargate task; all of that gets done for you. With just this one line of code, you don't have to worry about how to build up that delivery pipeline between those things. It just works: you have your Dockerfile, and it's going to get built and pushed as part of the process of deploying this application. Whenever there are changes to that Dockerfile, that's going to cause a change to that resource, and potentially a change to the Fargate task so that it gets redeployed. That's one interesting thing there.

The next interesting thing is that inside this code we effectively just say ffmpegThumbnailTask.run. Whenever this event comes in, we'll just kick off an instance of that Fargate task. This is very simple; I'm just using the AWS APIs to go ahead and run this thing. I get the full flexibility of Fargate in this case, but I can do it all in a very targeted way where I am just stitching together these pieces from the outside. I also get many of the general benefits of TypeScript here.

One of the things I didn't highlight earlier, but I think is really interesting, is that if you look at this callback here, and I'll give the example of this one down here just because it's a bit simpler, this callback is the one that's on this object-created event. When I came in here, you may have noticed that I was able to say "ev." and get a completion list. I'm actually getting type checking and completion lists and all that tooling for both the cloud infrastructure on the outside and some of my runtime code on the inside. I have the Records array, index zero, and I can dive all the way in and get at this data, so TypeScript is giving me this rich experience all the way through, both on the infrastructure side and on the runtime side as well.

The last thing I wanted to highlight is that in all of what I've just talked about, I've focused on the AWS side of things: how you can program various AWS things, from serverless to some of these container-based pieces, and how you can use those capabilities there. But a lot of these ideas apply to any kind of cloud environment. In theory, everything I talked about could apply to Azure or GCP. It could apply to Kubernetes, or OpenStack, or vSphere, or anything like that. In fact, with Pulumi in particular, we do provide the same experience across all of those different platforms; we expose all those APIs into the environment. In this example, this is one that's doing some things with Azure and Kubernetes.

If you look at this, just like before, I can say new azure.storage.Blob. It's the same thing: I get all of those APIs available, and I can work with anything inside Azure. This is a pretty good example, so I'll highlight a couple of things about it. One is that we're actually splitting our code across multiple files, using the fact that we can just do lightweight NPM modules and that sort of thing. If I go over to this cluster file here, it has another part of my application. This part of the application describes Active Directory things, the service principals that I want to use as part of my application. Then it stands up a Kubernetes cluster; all the clouds, including Azure, have managed Kubernetes services. Here I just want to stand up a Kubernetes cluster so that I can run some compute inside it, so I provide the specification for that and export it.

On the other side of this, I can do a couple of things. I can create a managed data store inside Azure, using CosmosDB to get a MongoDB-compatible API. Then I can start doing things inside Kubernetes itself. Now that I've got a Kubernetes cluster, I can actually deploy things into Kubernetes; I can deploy a secret into Kubernetes. This is where we're using the ability for Pulumi to talk to the Kubernetes API as well as to the Azure API, and to program that in the same first-class way. Instead of me using Kubernetes YAML files and using kubectl to deploy those, I can coordinate that same kind of thing using Pulumi.

In fact, I can use it here where both the Kubernetes cluster that I'm deploying these Kubernetes resources into, and the resources themselves, are defined and managed within the same application environment. I can really write those programs, that software, across both of those pieces. Then I can use things like "deploy a Helm chart." I could deploy any resources I want, but in this case I have a Helm chart that exists for Node that I want to use. I'll just deploy that into the Kubernetes cluster, and it will actually use that secret. Now I've tied these together: access to this CosmosDB account is stored inside a secret. I get the connection string out of that CosmosDB account's connection strings and store it inside the Kubernetes cluster, so that my compute can access that CosmosDB instance from within the Kubernetes cluster.
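Pieced together, the shape of that program is roughly as follows (a sketch; the AKS cluster and its provider live in the separate cluster file mentioned above, and the chart name and values are illustrative):

    import * as azure from "@pulumi/azure";
    import * as k8s from "@pulumi/kubernetes";
    import { k8sProvider } from "./cluster"; // hypothetical module exporting a provider for the AKS cluster

    const resourceGroup = new azure.core.ResourceGroup("rg", { location: "WestUS" });

    // A managed, MongoDB-compatible data store: CosmosDB with the MongoDB API.
    const cosmosdb = new azure.cosmosdb.Account("db", {
        resourceGroupName: resourceGroup.name,
        location: resourceGroup.location,
        kind: "MongoDB",
        offerType: "Standard",
        consistencyPolicy: { consistencyLevel: "Session" },
        geoLocations: [{ location: resourceGroup.location, failoverPriority: 0 }],
    });

    // Store the CosmosDB connection string as a secret inside the Kubernetes cluster.
    const secret = new k8s.core.v1.Secret("mongo-secrets", {
        stringData: { connectionString: cosmosdb.connectionStrings.apply(cs => cs[0]) },
    }, { provider: k8sProvider });

    // Deploy the application itself as a Helm chart that reads that secret.
    const app = new k8s.helm.v2.Chart("node", {
        repo: "bitnami",
        chart: "node",
        values: {
            mongodb: { install: false },
            externaldb: { secretName: secret.metadata.name },
        },
    }, { providers: { kubernetes: k8sProvider } });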

This really highlights that the model here, of being able to program any kind of cloud environment, is a general one about cloud applications that have this kind of process model: a forever-running set of infrastructure that has code being deployed into it. We can use these primitives across any of the different cloud platforms we may be working in. I think it's a very general technique that can be applied in many different kinds of places.

Programming the Cloud

To wrap this up, let me highlight a few of the core things we talked about and why I think they're really interesting. The first is this march of JavaScript to flesh out the full experience of somebody who is building an end-to-end application. It started off with developers being able to build a lot of stuff inside the browser, but they had to go to some other language or some other toolchain to build the server side. Then they were able to do that with JavaScript as well. I think the reality today is that most folks building server-based applications are also having to own more and more of the delivery and the cloud environment in which that code runs, and so they increasingly want to integrate that with the life cycle of their application code. That's why I think we see cloud as the next stage of that process.

The second thing is software engineering practices, and I really think this is one of the key things here: we can really bring everything that we want from a software [inaudible 00:43:45]. We can bring package managers, we can bring the type checking that we saw from TypeScript, we can bring the IDE experience we expect as application developers, and we can bring testing. We can bring all these different things that we know how to do and know how to treat as first-class citizens in the application development world. We can bring that rigor into our cloud infrastructure.

Of course, we saw a lot about moving up and down the stack in terms of where our abstraction layer sits, everything from accessing the raw capabilities of a cloud platform, to accessing some really high-level things like that VPC component which creates 30 resources, or some very high-level serverless things.

Then there's this notion of bridging the gap: we have applications and we have infrastructure today, and those are often managed with very different life cycles and processes for deploying them. We really can simplify a lot of our lives by treating them as a single unified entity, for some use cases, that deploys and versions in a reliable way together. Finally, there's this notion, which I think is a deep idea, about the change in the process model for these applications, and why programming means something slightly different in this environment in terms of how I can live patch my environment and use a desired state model to drive it. I don't think that process model means we have to go back and use YAML files; I think it just means we have to have the right tools for how we bring programming languages and software to bear on this problem.

Questions and Answers

Participant 1: Thanks for this, this was really amazing. In one of your examples, you're saying that when we create all the infrastructure, we can do a preview and we can see all the resources that will be created. The question I have is how those resources are going to be shared with the backend infrastructure, because eventually you'll use this script to create all this infrastructure. Is it going to be stored somewhere in the config, available to access from the other backend services you have, like e-commerce services and a lot of other services which are actually going to deploy to and use that AWS infrastructure you have just created?

Hoban: Your question is about how you access the various things that are available inside this deployment from other pieces of infrastructure you might have. You probably saw I had this export at the top level, which made sure I printed out the name of the bucket when I deployed it. That's one of the key ways: at the top level of that infrastructure deployment, I can export the key endpoints of all of that infrastructure that I need to share with other pieces. I didn't show this, but you can just say "pulumi stack output" and then an output name, and you'll get the value of that output.

It's very easy to then script those into other kinds of environments as well. If you need to go and take the VPC that was built by this piece of infrastructure and use it as an input into something later, or you've got some other application inside your environment that needs to deploy into it, you can use that Pulumi stack output to get it. Also, the resources are all ultimately deployed inside AWS, so however else you would inventory and maintain information about where those things are inside your AWS account, all those existing tools apply as well, but Pulumi does have a layer of tooling on top of that, which you could choose to use.
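Concretely, anything exported from the top level of the program becomes a stack output that other tooling can read (a minimal sketch):

    // index.ts: export the values other systems need.
    export const bucketName = bucket.id;

    // Then, from a script or another pipeline:
    //   pulumi stack output bucketName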

Participant 2: That was very interesting. I'd heard of Pulumi before, but I'd never really looked into it. I've got two very connected questions, and they're both Terraform comparisons; you were probably expecting some of those. There are a couple of things that Terraform has never really delivered, and I wonder what's your take on whether a higher-level programming language, versus essentially just the JSON, YAML, HCL schema, will solve them. One of them is unit testing, like actual unit testing without bringing up infrastructure. The second one is about abstracting cloud providers, which is something that a lot of people were trying to do in Terraform when it originally came out, and very quickly realized was never going to be possible. Do you think that something like this will actually be possible with TypeScript?

Hoban: Those are two great questions that we think a lot about. On the first one, about unit testing, one of the nice things we can do is bring that full testing experience: the ability to use [inaudible 00:48:17] and whatever else, whatever test framework you want inside JavaScript, we can bring that to bear in the infrastructure space. Today we don't have that yet; we have some things that are more internal, where we use some Go test frameworks and things to test this stuff, but we have plans, and there are some issues in our repos that you can go look at if you're interested in how we're thinking about bringing in the full ability to do unit testing.

The one thing I would say is, whenever we go deep on the unit testing versus integration testing thing, my personal belief is that for this cloud infrastructure, it's not that interesting to test without the cloud provider. Really, all of the interesting behavioral characteristics of the infrastructure you've defined are the logic inside the cloud provider, and to the extent you try to mock that, there's nothing left. I think this was a point that was made in the talk by Jim from Gruntwork yesterday: unit testing in this environment is really hard to define in a productive way, in a way that would add a lot of value. We think about it more from the perspective of writing integration tests, tests that really do run against the cloud provider.

The one thing I would say that's a bit different is that in a programming language, you can define components in really rich ways and at a very fine grain. You can build functions, you can build classes, you can build all sorts of different components, and you can think about granularly testing those things, like writing a test matrix of different parameterizations of that VPC class. Now I can integration test that in its own repo; every time I update anything in that repo, I run that battery of tests against it to test it with a whole bunch of different configurations, and then I push my NPM package up into my repository. That means I can disaggregate a lot of that testing into more individual packages that test individual functionality. It's something that feels more like unit testing, even though it is end-to-end validation, but it has a scope that's more like the component. That removes a lot of the burden for the folks using that VPC package; they don't have to retest all the functionality of that piece. They can just test the things at the boundaries of their components. I think that's an important thing, and that's a direction we're investing a lot in right now.

The second one was abstracting cloud providers; this is a fascinating thing and one of the first things we built. Once you have a programming language and the abstraction capability I kept mentioning, you can't help but build abstractions. It's one of these funny things about programming languages: as soon as you have one, you don't want to write the code twice. You naturally go and give it a name and put it in a function, or put it in a class, or put it in a package. It's very unnatural inside a programming language to do that sort of blunt copy-paste task.

That also takes you to this thing about going up to a multi-cloud solution. We have a package called Pulumi/cloud, which is a cloud-agnostic, high-level API that has things like the ability to create an HTTP API endpoint, which will be API Gateway on AWS and Azure Functions with its HTTP routing on Azure. We have support for that right now, for AWS and Azure, for the core set of capabilities. In theory, it's a very high-level cloud abstraction that you could use in multiple places. We definitely see the possibility; there are challenges, of course. The lowest common denominator of the cloud platforms is not a particularly rich thing, but I think there are a few trends going on in the industry which are changing that.

Obviously, there are more and more of these open-source managed services available. MySQL is available anywhere, Kubernetes is available anywhere; Lambda, or functions as a service generally, is not yet standardized, but I think there will be progress there. We're getting to a point where that lowest common denominator could actually be quite rich, and it could actually make sense to abstract it away and build truly cloud-agnostic applications. We have all of the right tools to do it; it's just a matter of the maturity of the cloud platforms. That's another place where we are spending a lot of time building up pieces. Many of our users are also very interested in that, at least the portability aspect, if not the abstraction aspect.

Participant 3: Thank you for your talk. I have a question regarding, say, we want to migrate from CloudFormation to Pulumi. Is there a way to invoke Pulumi scripts from within existing lambda functions?

Hoban: Pulumi is just a CLI tool that drives the deployment, and by default it assumes a set of ambient credentials, just like any other AWS tool that picks up ambient AWS credentials. You can certainly run it inside Lambda. Lambda may be a little tricky because some deployments take a while, depending on how big your deployment is; a Kubernetes cluster takes 15 minutes, and that's not going to work inside a lambda. But certainly inside a Fargate task: we've had many users who have built their own little pipelines that use Fargate tasks to automate running deployments.

Absolutely, that's possible. Folks put Pulumi inside their CI/CD pipeline; that's incredibly common. You can really run the tool, and therefore the deployments, inside any kind of environment you want, either push-button automated using Lambda or Fargate or something, or as a GitOps-style thing through a CI/CD pipeline. I'd say pretty much everyone seriously using Pulumi is using it in some form like that.

Participant 4: Where do you guys store state and how do you generate state?

Hoban: You mean the state of the deployment?

Participant: Yes.

Hoban: By default, we store the state in a backend service that's managed by Pulumi. When you get Pulumi, you can sign in, and then it's stored for you automatically and transparently in the Pulumi backend. I didn't show this, but let me just quickly highlight it here. You may have noticed when I was doing several of these updates that there were some permalinks; those are permanent links to information about that deployment that I can get.

If I open that up, we actually see some of that state and information about what deployments have happened to this stack and what resources are under management by this stack. All of that is just backed by that state file. This is the default experience; we want to make it really simple, and we take good care of managing that for folks by default. But we do give you an option to just log in locally and store it yourself instead, if you want to manage the state file and back it up to cloud storage or something. By default, though, we just manage it all for you as part of a SaaS offering.

Participant 5: Thanks for the talk, it was a great talk. I think there's a lot of overlap with my earlier question on this, but I was wondering about the CLI: a lot of the diffs and commands to AWS are happening from the CLI. It's not going to Pulumi and doing a different...?

Hoban: That's actually a really critical thing: it's all happening in the CLI. Whatever environment you're in, whether it's your local developer machine or the CI/CD environment, we're driving all of that from that local machine so that the credentials never have to leave that context. We only send the state file off to be stored; all of the actual execution and deployment logic happens in whatever context you want to run it in. That's really important for us; we don't want to have any of those credentials in our backend.

 


 

Recorded at:

Jul 24, 2019
