
Tutorial: Writing Microservices in Kotlin with Ktor—a Multiplatform Framework for Connected Systems

Key Takeaways

  • Ktor is an OSS Apache 2 project created and maintained by JetBrains.
  • It can be used for creating asynchronous connected systems.
  • Ktor makes heavy use of Kotlin features, including coroutines and language constructs.
  • It is low-ceremony in that it requires very little code and configuration to get a system up and running.
  • Ktor is multi-platform and can run on a variety of systems and server container technology.

What is Ktor?

Ktor (pronounced Kay-tor) is a framework built from the ground up using Kotlin and coroutines. It gives us the ability to create client and server-side applications that can run on and target multiple platforms. It is a great fit for applications that require HTTP and/or socket connectivity. These can be HTTP backends and RESTful systems, whether or not they’re architected using a microservice approach.
 
Ktor was born out of inspiration from other frameworks, such as Wasabi and Kara, with the aim of leveraging to the maximum extent some of the language features that Kotlin offers, such as DSLs and coroutines. When it comes to creating connected systems, Ktor provides a performant, asynchronous, multi-platform solution.
 
Currently, the Ktor client works on all platforms Kotlin targets, that is, JVM, JavaScript, and Native. Right now, Ktor server-side is restricted to the JVM. In this article, we’re going to take a look at using Ktor for server-side development.

Ktor on the server

The equivalent of a Hello World application with Ktor would be:

import io.ktor.application.*
import io.ktor.http.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*

fun main() {
    // Create a server that uses Netty as the engine, listening on port 8080
    val server = embeddedServer(Netty, port = 8080) {
        routing {
            get("/home") {
                call.respondText("Hello Ktor!", ContentType.Text.Plain)
            }
        }
    }
    // Start the server and block, so the application doesn't terminate immediately
    server.start(wait = true)
}

If you have experience with frameworks such as ExpressJS or Sinatra, this code might seem familiar. First, we create an instance of a server that uses Netty as the underlying engine and listens on port 8080.

The next step is to define an actual route to respond to a request. In this case, we’re saying that when a request is made to the URL /home, the server should respond by sending the text Hello Ktor! in plain text.

Finally, we start the server and tell it to wait, thus preventing our application from immediately terminating.

That’s as simple as it gets when it comes to Ktor. If we want to add more routes, in principle all we’d need to do is define more HTTP verbs, along with the corresponding URLs in the routing function. For instance, if we’d like to respond to POST, we’d simply add another function.

routing {
   get("/") {
       call.respondText("Hello Ktor!", ContentType.Text.Plain)
   }
   post("/home") {
       // Act on request
   }
}

Functions everywhere

In case you’re not familiar with Kotlin, you may be wondering what these constructs are, and where all the magic words, such as call, come from. Let’s break it down a little bit.

routing, get, and post are all higher-order functions (that is, functions that take functions as parameters or return functions). In this case, we’re talking about taking functions as parameters. Kotlin also has a convention that if the last parameter to a function is another function, we can place this outside of the brackets (and if it’s the only parameter, we drop the brackets altogether).
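As a quick illustration of that convention (the configure function below is made up purely for this example and is not part of Ktor), the three calls are all equivalent:

// A higher-order function: its only parameter is another function
fun configure(block: () -> Unit) = block()

fun main() {
    // Conventional call: the lambda is passed inside the brackets
    configure({ println("configured") })

    // Trailing lambda: the last function argument moves outside the brackets
    configure() { println("configured") }

    // Only parameter: the brackets are dropped altogether
    configure { println("configured") }
}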

In our case, routing is not just a higher-order function, but what’s known in Kotlin as a lambda with receiver. This is a higher-order function that takes an extension function as a parameter, which essentially means that anything enclosed within routing has access to members of the type named Routing.

This type in turn has functions, such as get and post, which in turn are also lambdas with receivers, with their own members, such as call. This simple combination of functions and conventions in Kotlin allows us to create elegant DSLs, and in the case of Ktor, this is used for defining routes.
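To make that concrete, here is a minimal sketch of a lambda with receiver, loosely mirroring the shape of a routing DSL. The RouteTable and routingTable names are invented for this example and are not Ktor types:

class RouteTable {
    val routes = mutableListOf<String>()
    fun get(path: String) { routes += "GET $path" }
    fun post(path: String) { routes += "POST $path" }
}

// The parameter is an extension function on RouteTable, so the lambda body
// can call get and post directly on the implicit receiver
fun routingTable(configuration: RouteTable.() -> Unit): RouteTable =
    RouteTable().apply(configuration)

fun main() {
    val table = routingTable {
        get("/home")
        post("/customer")
    }
    println(table.routes) // [GET /home, POST /customer]
}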

Features

Features are a mechanism Ktor provides to enable support for certain functionality, such as encoding, compression, logging, authentication, etc.

If we think of the request/response pipeline, we can think of features as interceptors that intervene during the different phases and provide their specific functionality. In some frameworks these are actually known as middleware, or event interceptors.
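As a rough sketch of that idea (the interceptSample module name below is invented for this example; a real feature packages this pattern up more carefully), we can intercept the call pipeline directly:

fun Application.interceptSample() {
    // Runs for every request, before routing gets a chance to handle it
    intercept(ApplicationCallPipeline.Monitoring) {
        println("Incoming request: ${call.request.uri}")
    }
    routing {
        get("/") {
            call.respondText("Hello Ktor!", ContentType.Text.Plain)
        }
    }
}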

A feature consists of two parts:

  • Initialization, which is used for configuring the functionality required. This part is optional.
  • Execution, which handles the actual interception and work being done on the request and response.

To use a feature, we’d generally just install it and optionally configure anything required. Beyond that, the feature itself would handle the execution part. For instance, if we need content negotiation, which not only provides content negotiation itself, but also does the encoding, we’d simply call install(ContentNegotiation) in the set-up of our application.

data class Customer(val id: Int, val name: String, val email: String)

fun Application.jsonSample() {
    install(ContentNegotiation) {
        gson {
            // Configure the underlying Gson instance used for serialization
            setPrettyPrinting()
            serializeNulls()
        }
    }
    routing {
        get("/customer") {
            val model = Customer(1, "Mary Jane", "mary@jane.com")
            call.respond(model)
        }
    }
}

In this case, the feature also has an initialization part, which configures the Gson library and sets some of its properties. With that call alone, the application now supports content negotiation and encoding to JSON. As such, a call to /customer would send back the Customer object in JSON format.

Routing, in fact, is also a feature in Ktor and, much like any other feature, must be installed. However, instead of calling install(Routing), we would usually use the higher-order function routing, which is what we’ve been using. In fact, if we look at the implementation of this function, we see that it calls install(Routing).

fun Application.routing(configuration: Routing.() -> Unit): Routing =
   featureOrNull(Routing)?.apply(configuration) ?: install(Routing, configuration)

Content negotiation and routing are just two examples of features. The Ktor web site lists dozens of other features that ship out of the box with the framework. However, it’s also very simple to implement new features: essentially, all that’s needed is to implement one class where we define the initialization and execution phases. To learn more, check out the source code for the Content Negotiation feature covered above.
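To give a feel for what such a class looks like, here is a hedged sketch of a trivial logging feature written against the Ktor 1.x ApplicationFeature API; the RequestLogging name and its configuration are invented for this example:

class RequestLogging(configuration: Configuration) {
    private val prefix = configuration.prefix

    // Initialization part: the values a user can tweak inside install { }
    class Configuration {
        var prefix: String = "Request"
    }

    // Execution part: intercept the pipeline and do the actual work
    companion object Feature :
        ApplicationFeature<ApplicationCallPipeline, Configuration, RequestLogging> {

        override val key = AttributeKey<RequestLogging>("RequestLogging")

        override fun install(
            pipeline: ApplicationCallPipeline,
            configure: Configuration.() -> Unit
        ): RequestLogging {
            val feature = RequestLogging(Configuration().apply(configure))
            pipeline.intercept(ApplicationCallPipeline.Monitoring) {
                println("${feature.prefix}: ${call.request.uri}")
            }
            return feature
        }
    }
}

Installing it then looks the same as for any built-in feature, for instance install(RequestLogging) { prefix = "Incoming" }.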

Structuring applications

When developing applications, we usually have a series of endpoints that are responsible for different areas of the system. For instance, take a regular CRM — we can have Customer, Sales, Proforma endpoints. With many MVC frameworks, these are usually grouped in different classes commonly suffixed with Controller. We’d have CustomerController, ProformaController, each of them responding to their /customer and /proforma endpoints respectively.

How would we define and group these in Ktor? What definitely wouldn’t work is setting up all the routes in a single application initialization block, let alone in a single file.

Ktor provides us with the flexibility to define our routes in any way we want, and organize them as we wish. While this obviously has the benefit of giving us complete freedom, it also raises the question, especially for newcomers, of what the best way is.

As any developer/consultant/IT expert would know, the answer, of course, is that it depends. We can organize cohesive routes in a single file. We can create folders and then have each endpoint in its own file. We can group by feature. It really is up to us.

Another aspect is how to define the routes themselves, i.e., do we just create top-level functions? A good approach is to make route definitions extension functions on the Route class, as seen below:

fun Route.home() {
    get("/") {
        call.respondText("Index Page")
    }
}

fun Route.about() {
    get("/about") {
        call.respondText("About Page")
    }
}

This gives immediate access to the different verbs, such as get, post, put, delete, options, and head. To then use these route definitions, we can simply call each function in the application initialization code.

fun Application.structureSample() {
   routing {
       home()
       about()
   }
}

Route hierarchies

Ktor also allows us to define routes hierarchically. This means that instead of having to do something along the lines of

get("/customer/") {
 
}
post("/customer/") {
 
}

we could do

route("customer") {
   get {

   }
   post {

   }
}

In fact, each nested route can also define its own path segment if needed.

route("customer") {
   get("/list") {

   }
   post {

   }
}

Again, this demonstrates the flexibility of the framework.

Rendering Data

We’ve already seen how to send back text as well as JSON. What about when we want something more sophisticated, whether that is HTML or using a view engine?

Server-Side Rendering

With Ktor we can render data directly from the server using many approaches. One of these is kotlinx.html, a DSL for creating statically typed HTML. This allows us to leverage the full power of Kotlin, combining data with control flow. The example below demonstrates iteration over a series of elements:

fun Application.htmlSample() {
   routing {
       get("/html-dsl") {
           call.respondHtml {
               body {
                   h1 { +"HTML" }
                   ul {
                       for (n in 1..10) {
                           li { +"$n" }
                       }
                   }
               }
           }
       }
   }
}

Templating Engines

Many applications today, whether they are Single Page Applications or not, make use of templating engines. Out of the box, Ktor supports a variety of these, including FreeMarker, Thymeleaf, Velocity, and Mustache, amongst others. They are implemented as features, so in order to use them, all we’d need to do is install them as part of the initialization phase of the application.
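As a hedged sketch of what this looks like with FreeMarker (the templateSample module name, the templates resource folder, and the profile.ftl template are assumptions made for this example):

fun Application.templateSample() {
    install(FreeMarker) {
        // Load .ftl templates from the "templates" folder on the classpath
        templateLoader = ClassTemplateLoader(this::class.java.classLoader, "templates")
    }
    routing {
        get("/profile") {
            // Render templates/profile.ftl with the given model
            call.respond(FreeMarkerContent("profile.ftl", mapOf("name" to "Mary Jane")))
        }
    }
}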

Working with route parameters and fields

So far we’ve seen how to define simple routes and respond with text. A web application, however, also needs to receive information as part of the request. This can come as part of the URL (route parameters), as query fields (everything following the ?), or as part of the body (for instance, in the case of POST and PUT). How would we process these with Ktor?

Routing Parameters

When it comes to route parameters, we can access these using the call.parameters property.

get("/customer/{id}") {
       call.respondText(call.parameters["id"].toString())
}

Query Fields

In the case of query fields, these can be accessed using the call.request.queryParameters property.

get {
   call.respondText(call.request.queryParameters["id"].toString())
}

Post Fields

When it comes to post fields, Ktor has support for multipart form-data built in. We can access the individual entries by calling receiveMultipart and iterating over each part.

post("/form") {
   val multipart = call.receiveMultipart()
   multipart.forEachPart { part ->
               when (part) {
                   is PartData.FormItem -> appendln("Form field: $part = ${part.value}")
                   is PartData.FileItem -> appendln("File field: $part -> ${part.originalFileName} of ${part.contentType}")
               }
               part.dispose()
           }
}

Leveraging static typing with Location

One thing you may have noticed in the examples that work with route parameters, as with all the routing definitions, is the use of strings. While generally this is absolutely fine, and tools such as IntelliJ IDEA can refactor strings if needed, another approach that Ktor provides is to use strongly typed route definitions.

In Ktor, this route definition is called Location. We can use classes to define them, using the class name as the name of the location and the class properties as the route parameters. By convention, the class names are defined in lowercase.

@Location("/") class index()
@Location("/employee/{id}") class employee(val id: String)

fun Application.locations() {
   install(Locations)
   routing {
       get<index> {
           call.respondText("Routing Demo")
       }
       get<employee> { employee ->
           call.respondText(employee.id)
       }
   }
}

In addition to providing type safety when defining routes, it also allows for strongly typed access to the actual route parameters, as can be seen in the case of employee.id. This ultimately means that the compiler will catch any typos in the names of route parameters.

Configuration

In the very first example, we saw how we started the application using server.start, creating an embedded server using Netty. While this works great for demos, usually we’d want to externalize the configuration of our server, allowing us to define parameters, such as the port, without needing to recompile.

This is how we’d usually deploy and configure applications, and in fact, all of the other examples in this article use this approach. For instance, if we look at the JSON example again:

fun Application.jsonSample() {
   routing {
       get("/customer") {
           val model = Customer(1, "Mary Jane", "mary@jane.com")
           call.respond(model)
       }
   }
}

We notice that there is no embeddedServer or server.start call, i.e., the main entry point of the application seems to be missing. It’s not in fact missing, but defined elsewhere:

fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)

This single line essentially tells our application to start using the Netty engine, but to read its configuration from the command-line arguments and, when these are not provided, to fall back to a file named application.conf. This file is written in HOCON (Human-Optimized Config Object Notation, a superset of JSON). A typical configuration file would look like so:

ktor {
    deployment {
        port = 8080
        port = ${?PORT}
    }
    application {
        modules = [ jsonSample ]
    }
}

where modules indicates the actual application modules to load (in the previous case, jsonSample). Ktor applications can actually load multiple modules, where each module could represent an area of functionality, as sketched below.
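As a small sketch of what that could look like (the customerModule and healthModule names are invented for this example), each module is just another extension function on Application, and the configuration file would list them all, e.g. modules = [ customerModule, healthModule ] (in a real project these references are typically fully qualified):

fun Application.customerModule() {
    routing {
        get("/customer") {
            call.respondText("Customer endpoints")
        }
    }
}

fun Application.healthModule() {
    routing {
        get("/health") {
            call.respondText("OK")
        }
    }
}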

Roadmap and next steps

As mentioned early on, one of Ktor’s goals is to make both client and server available on all platforms. The client already offers this functionality. At JetBrains, we’re working on providing the missing pieces to make this a possibility for the server. One of these is to offer an alternative to engines such as Netty and Jetty, which are currently used to run Ktor server applications. Work is already underway to provide a fully coroutine-based solution, currently offered as experimental under the name CIO.
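For instance, assuming the ktor-server-cio artifact is on the classpath, switching the earlier embedded server over to the coroutine-based CIO engine is just a matter of passing a different engine factory:

fun main() {
    // Same API as before, but backed by the experimental coroutine-based CIO engine
    embeddedServer(CIO, port = 8080) {
        routing {
            get("/home") {
                call.respondText("Hello Ktor!", ContentType.Text.Plain)
            }
        }
    }.start(wait = true)
}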

At JetBrains, we are fully committed to Ktor. Not only do we continue to work on it, but we also have skin in the game, if you will, as our recently announced product JetBrains Space is built on Ktor.

If you want to find out more about Ktor, make sure you check out ktor.io.

About the Author

Hadi Hariri is a developer and creator of many things OSS. His passions include Web Development and Software Architecture. He has authored a couple of books, a few courses, and has been speaking at industry events for nearly 20 years. Hariri is currently at JetBrains leading the Developer Advocacy team and he spends as much time as he can writing code.
