.NET Core and DevOps


Key Takeaways

  • DevOps is a worthy and rewarding pursuit no matter what technology stack you currently use
  • Closed-source, proprietary software and build processes do not work well with DevOps practices
  • .NET Core is open source, and was conceived and built for DevOps
  • The .NET Core CLI and Roslyn API make the entire delivery process open and adaptable
  • Automation is a large part of DevOps; .NET Core was built from the ground up to support build and deployment automation

With the release of .NET Core 2.0, Microsoft has the next major version of the general-purpose, modular, cross-platform, and open source platform that was initially released in 2016. .NET Core has been created to provide many of the APIs that are available in the current release of the .NET Framework. It was initially created to enable the next generation of ASP.NET solutions, but now drives and is the basis for many other scenarios, including IoT, cloud, and next-generation mobile solutions. In this second series covering .NET Core, we will explore more of the benefits of .NET Core and how it can benefit not only traditional .NET developers, but all technologists who need to bring robust, performant, and economical solutions to market.


I’ve been developing software long enough to have been doing it when .NET 1.0 was in Beta. I remember thinking that using .NET felt like cheating. “Isn’t this supposed to be hard?” I wondered. “Where is my malloc? I don’t have to cast any pointer arithmetic spells? What is this Framework Class Library?” I spent the first six months thinking it had to be some elaborate trick.

Fast forward to 2018, and we’re all still happily writing code on the .NET Framework, without agonizing over memory allocation. Threading was handled for us by System.Threading.Thread, then BackgroundWorker, and now Task. FCL classes are marked thread-safe for us, or not, ahead of time. Want to write a web application? Here’s a complete framework, batteries included. So many of the things we had to hand-craft ourselves are provided by .NET, on a virtual silver platter. The upshot is that we developers get to spend far more time writing code that provides business value (gasp!). The Assembly/C/C++ hipsters may cry foul, lamenting how little hard-core systems programming knowledge the average developer now needs, yet here we are. I, for one, am not complaining!

.NET has been through many iterations, including four major versions, since that first Beta. Its most recent iteration, .NET Core, is its most significant yet. .NET Core features include true cross-platform targeting, a modern CLI and build system, and an open source library, just to name a few. Those things are important, but the promise of .NET Core goes further. That promise goes to the way software is produced and delivered.

I’ve been writing software for over twenty years, so I’m also old enough to remember when source control was a curiosity reserved for “large” teams. “Automation” was not really in our lexicon, except to the extent that we automated business processes for our customers. Building/compiling software was something done, ironically, by a human being. A “build manager” would produce binaries on her own computer (where it would always work on her machine!)

Deploying software to the environments where it would run was (and too often, still is) a fragile, byzantine process of shared drives, FTP, and manual file copy-paste. Integrating the work of development teams was a miserable death march, playing whack-a-mole with one regression after the next. Is the software ready for production? Who knows?

Software was rapidly building its appetite for the world, yet the process of producing, deploying, and operating software-based systems was stuck in the days of Turing and Hopper. A revolution was in the air; it began sometime around 2008, and its name was DevOps.

The intervening years between then and now have seen the rise of a movement. DevOps is a big thing that encompasses, and perhaps supersedes, the Agile movement that came before it. I was introduced to DevOps in 2014 when I was handed a copy of The Phoenix Project at a conference. I made the fateful decision to crack the binding then and there, thinking I would read just a few pages. Silly me. My conference plans that day fell to the wayside as I devoured that book. It spoke to me, as it has to many. If you’ve been in the IT industry, even for a short time, you’ve been those characters. You can relate. DevOps has been a career focus for me since then.

DevOps is often presented as having three major “legs”: Culture, Process, and Technology. This article is about the technology of DevOps. Specifically, it’s about the technology that .NET Core brings to modern DevOps practices. .NET Core was conceived during the rise of DevOps. Microsoft clearly has well-defined goals to make .NET Core a DevOps-era platform. This article will cover three major topics of .NET Core and DevOps:

  • The .NET Core Framework and SDK
  • Build Automation
  • Application Monitoring

.NET Core Framework and SDK

DevOps doesn’t exist in a vacuum. The technologies used to produce and deliver software-based systems can support DevOps practices, or hinder them. DevOps is a worthy pursuit regardless of your technology stack. Having said that, the stack you choose will have a significant impact on your DevOps practice.

Closed-source, proprietary build systems are not DevOps-friendly. .NET Core is fully open source, and the file formats used to represent projects and solutions are thoroughly documented. Modern languages and frameworks such as Node/JavaScript, Ruby, and Python have been developed with a few common features:

  • Compact, open-source frameworks
  • Command-Line Interfaces (CLI)
  • Well-documented, open build systems
  • Support for all major operating systems

These features and more have become popular in the DevOps era because they are easy to adapt and automate. The .NET Core CLI, dotnet, is the singular entry point to all build processes for a .NET Core application. The dotnet CLI works on developer workstations and build agents alike, regardless of platform. To wit: all the local development work I’ll be demonstrating henceforth will be performed on a MacBook Pro. Try to imagine that, just three years ago!

The first step with .NET Core is to download it. If you’re following along, download the SDK from Microsoft’s .NET download page. It’s a lean, mean 171MB on my MBP. Once it’s installed, open up your favorite terminal window (I’m partial to PowerShell when I’m Windowing, but it’s iTerm2 on my Mac.)

If you’re already familiar with .NET development, you’re used to big framework installations. You’re accustomed to using Visual Studio to get work done. If you’re new to .NET Core, this is going to feel a little strange, in a good way. We’re going to get a lot done with these 171 megabytes before we ever touch an IDE.


dotnet

This is the new CLI command that allows you to interact with the .NET Core SDK on your system. Running it with no arguments teases the available CLI options. Let’s look deeper.

dotnet help

This is a list of all the commands supported by the CLI. It’s not a long list, and it doesn’t have to be. You’re looking at everything you need to interact with the .NET Core framework build process, from a “fresh canvas” project to a deployed application.

The first step is to create a new application. Let’s look at our options.

dotnet new

The output will list the available templates. This is the part in Visual Studio where you click File-New Project, only here we’re working from the command line. We have quite a few templates to choose from. I’m rather partial to Angular, so let’s start there.

dotnet new angular -o dotnet-angular

This will create a new Angular project in a new directory, dotnet-angular. You can create the directory manually if you prefer; just don’t forget to change to it before you execute dotnet new, or the project will be created in your current directory (I learned that the hard way.)

If you already do Angular development, you’ll likely have Node installed. If not, take a moment to download and install. If you do need to install Node, close and reopen your terminal after installation.

dotnet run

This command will compile and run the application (compiling can also be done without running the application by executing dotnet build.) It may take a minute or two; then you’ll have some output that includes a URL:

Content root path: /Users/dswersky/dotnet-angular
Now listening on: https://localhost:5001

Copy the URL into a web browser and take a gander. What you should see now is a simple application running ASP.NET Core in the background, and Angular on the front end. Let’s take a breath for a moment and think about how this experience differs from the .NET development experience of yesteryear.

If you were following along, you created and ran a .NET Core application in a handful of minutes (even if you had to install .NET Core and Node!) A few questions might come to mind:

Where’s my IDE?

We didn’t need one to get to this point, did we? Obviously, if you want to edit this code, you’ll need something to do that. You will likely want to use a tool that has some understanding of .NET and Angular. “No problem,” you might think, “I’ll start Visual Studio Professional and get to work.” You could do that… or you could download Visual Studio Code, which is nearly as capable, and free. You could use Visual Studio Community, which is also free. The point here is that it’s no longer necessary to invest hundreds of dollars to get started developing with .NET Core. You can start small and grow organically.

Where’s IIS?

This is a major difference between “legacy” (too soon?) .NET web application development and ASP.NET Core. You can run ASP.NET Core applications in IIS, but you don’t have to. The need to de-couple ASP.NET Core from IIS is obvious, considering that .NET Core is truly cross-platform. The commands I listed here, including dotnet run, work equally well and precisely the same way on Windows, Mac, and Linux (there’s even an ARM build that will run on a Raspberry Pi!) This Angular application is one of many that you can now “write once, run anywhere.”

Hosting .NET applications without IIS has been possible for some time. The Open Web Interface for .NET (OWIN) has supported “self-hosted” ASP.NET applications for years. That was made possible by code and infrastructure generally referred to as “Project Katana.” .NET Core uses an HTTP server called Kestrel. Kestrel is a fast, high-performance, open source HTTP server for .NET applications. Kestrel serves ASP.NET Core websites and RESTful services running anywhere, including Windows, Linux, and container orchestrators. Kestrel makes ASP.NET Core applications completely self-contained, with no external dependency on a Windows-based web server.
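To see how little ceremony self-hosting requires, here is a minimal sketch of the entry point an ASP.NET Core 2.x template generates; WebHost.CreateDefaultBuilder wires up Kestrel as the server by default (the Startup class name is the template’s convention):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args) // configures Kestrel as the HTTP server
               .UseStartup<Startup>()      // application wiring lives in Startup
               .Build()
               .Run();                     // blocks, serving requests until shutdown
}
```

The same few lines run unchanged on Windows, Mac, and Linux, which is what makes the dotnet run experience identical across platforms.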

What does this have to do with DevOps?

Automation is a core tenet and practice of DevOps. The portability, the CLI, and the open-source build system offered by .NET Core are essential to DevOps practices. Most importantly, they make it easy to automate the build and deployment processes. That automation can be accomplished by scripting the CLI, or programmatically by automating the build system directly. These features of .NET Core make it not just possible, but relatively easy to automate complex build processes. This brings us to build automation and continuous integration.

.NET Core Build Automation

Back in the days of Visual SourceSafe (“we get it, Dave, you’re ancient”), it occurred to me that the code my team was pushing into that repository was there, available and ready to be compiled. An idea tickled the back of my mind: “why do I build deployments from my system when it could be done from there?” I wasn’t the only one to have that idea, yet I certainly can’t claim to be one of the few that did something with it. That claim belongs to the brave souls that embarked on the development of Continuous Integration (CI) systems.

The purpose of CI is simple to say, not so simple to achieve:

Always have a build ready for deployment.

Software development is a team sport. The average Agile/Scrum team has three to five full-time developers actively contributing code. The work they do is split between them for the sake of efficiency. The code they produce must be combined, built, and tested together as a unit. That testing must be automated, using a system that does not have developer tools installed. Ideally, build and test should happen every time new code is merged to a designated branch (this would be master in trunk-based development.) CI systems are usually integrated directly with source control systems, triggering a new build each time the CI branch is changed.

Roslyn is an open-source compiler for .NET, with a wealth of APIs you can access directly. CI system developers use the compiler APIs to build plugins, which in turn automate .NET build processes. .NET Core build tools such as Roslyn provide fine-grained control over the build process. Developers can use them to adapt and extend existing CI system features to cover almost any conceivable build pipeline use case. The best part is that you don’t have to be a CI system developer to build a plugin. Maintainers and vendors of CI systems go to great lengths to make their systems easy to extend.

There are a number of CI systems out there. Here’s a brief list of examples, by no means complete:

  • Jenkins
  • TFS/Visual Studio Team Services
  • CircleCI
  • TeamCity
  • GitLab

The flexibility .NET Core offers allows it to work with any CI system. That can be as simple as a script working with the CLI, or plugins that use the compiler APIs to automate the build directly.
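As a sketch of the scripted-CLI approach, a minimal GitLab CI configuration might look like the following (the image tag and stage layout are illustrative assumptions, not taken from any particular project):

```yaml
# .gitlab-ci.yml -- a minimal sketch, assuming a .NET Core 2.0 solution
# at the repository root and the official Microsoft SDK image.
image: microsoft/dotnet:2.0-sdk

stages:
  - build
  - test

build:
  stage: build
  script:
    - dotnet restore          # fetch NuGet dependencies
    - dotnet build -c Release # compile

test:
  stage: test
  script:
    - dotnet test -c Release  # run unit tests on the build agent
```

Because every step is just the dotnet CLI, the same script lines drop into a Jenkinsfile, a VSTS/TFS build definition, or any other CI system with minimal changes.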

If you currently have a favorite CI system, you can try it out with my sample project. This is the same project that we created with the CLI earlier, with a little extra. The repository includes a Dockerfile. It took me about ten minutes to create a VSTS build pipeline that pulls the code from GitHub, builds an image, and pushes it to an Azure Container Registry. This would work just as well with a Jenkinsfile or GitLab pipeline, in AWS or Google Cloud. The possibilities are, as they say, nearly endless.
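A Dockerfile for a project like this typically follows the standard multi-stage pattern. The sketch below is illustrative rather than copied from the sample repository; the image tags and the dotnet-angular.dll name are assumptions based on the project name:

```dockerfile
# Build stage: the aspnetcore-build image includes the SDK and Node,
# which the Angular template needs at publish time.
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: a much smaller image containing only the runtime.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "dotnet-angular.dll"]
```

The multi-stage build keeps compilers and package caches out of the final image, so what you push to the registry is only what the application needs to run.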

Application Monitoring with .NET Core

The “care and feeding” of software systems is a full-time job; just ask your Ops team colleagues. Those systems are like a cranky, colicky baby, constantly needing some kind of attention. Ops staff are often like the confused new parent, at a loss for why the system is screaming for that attention. How do systems scream for attention? That depends on how you watch them, or whether you watch them at all!

The worst way to monitor systems is not to monitor them. Whether you monitor or not, you’ll find out one way or another when they break. When your customers call in a rage, or simply quit your services altogether, you’ll find out after it’s too late. The goal of application monitoring is to detect problems before your customers or end users do (there’s really no practical difference between the two). Many companies make the false-economy judgement that application monitoring is too expensive, or that “well-made systems don’t need monitoring.” Don’t buy it.

Even the most stable system is only one failure or change away from catastrophe. DevOps practices try to balance safety with velocity, allowing companies to innovate by moving fast and safely at the same time. That balance is maintained by keeping a close watch on the operational parameters of your system.

.NET Core design and architecture is well-suited to application monitoring. ASP.NET Core is an excellent example. It is possible to customize the internal request/response behavior of ASP.NET 3.x/4.x applications, running on IIS, using HTTP Modules. ASP.NET Core improves on that model with middleware, which is similar in concept to HTTP Modules but quite different in implementation. Middleware classes are integrated through code and are much simpler to configure. They form a pipeline: each middleware component can inspect and modify the request on the way in and the response on the way out.

Injecting middleware into an ASP.NET Core application that performs monitoring is almost trivially easy. I’ll demonstrate an example with Azure Application Insights. I created an Application Insights resource in my Azure portal, then edited exactly three files in my repository to enable Application Insights monitoring:

  • Added a line to the project file to reference the Application Insights assembly (this manual step was only necessary because I was using Visual Studio for Mac; details here.)
  • Added my Application Insights instrumentation key to appsettings.json.
  • Added the Application Insights middleware in Startup.cs, which is where middleware is configured.

Once these things were done, I was able to start debugging locally and gather monitoring data from Application Insights. You can try it out yourself, just replace the sample key in appsettings.json with your key.
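For orientation, here is a minimal sketch of the Startup wiring, assuming the Microsoft.ApplicationInsights.AspNetCore package is referenced; the AddApplicationInsightsTelemetry call reads the instrumentation key from configuration:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Reads the instrumentation key from appsettings.json and
        // registers the Application Insights telemetry services.
        services.AddApplicationInsightsTelemetry(Configuration);
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Request telemetry is collected automatically once the
        // services above are registered; the rest is the usual pipeline.
        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}
```

With this in place, every request flowing through the pipeline is timed and reported, which is exactly the kind of low-friction instrumentation the middleware model was designed for.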

Application Insights is not your only option for application monitoring. AppMetrics is an open-source monitoring library that integrates with visualization tools such as Grafana. There are also paid options from vendors that offer Enterprise features.

All these monitoring options provide transparency: the ability to view the behavior of your application in its runtime environments (especially production!). This is critical to DevOps practice, as it allows you to verify, with hard numbers, that the changes you make to your system are not degrading its performance. You can then add new features with the confidence that moving fast does not have to break things.


Conclusion

.NET Core was conceived and developed with DevOps practices in mind. The CLI, the open build system, and the open source libraries make it possible to automate and adapt the software delivery process to just about any imaginable set of requirements. Build automation and continuous integration are achieved through CLI scripting, or through deeper programmatic integration if you prefer. Application monitoring, with open-source or paid enterprise tools, can turn your system from a black box into a clear pane of glass. .NET Core, delivered using DevOps practices, is a compelling platform for modern software systems.

About the Author

Dave Swersky has been working in IT for over 20 years, in roles from support engineer, to software developer, to Enterprise Architect. He is an aspiring polyglot, and passionate about all things DevOps. He has presented on DevOps at conferences including DevOps Enterprise Summit, CodeMash, Stir Trek, and at local meetups in KC.  Dave has also written a book on DevOps: DevOps Katas: Hands-On DevOps. Dave can be found on Twitter @dswersky.


