
Virtual Panel: What's Next for .NET?

Key Takeaways

  • .NET is positioning itself for cross-platform development with .NET Core, while .NET Standard 2.0 brings in the missing pieces.
  • Streamlining the cross-platform tooling and educating the community to eliminate confusion are the next steps to drive .NET Core and .NET Standard adoption.
  • Roslyn has a major impact on .NET, enabling new features to be delivered much faster. Roslyn also enables developers outside Microsoft to build their own tools on its public APIs.
  • The .NET community has now warmed up to open source and is increasingly contributing to compilers and system libraries.

A lot happened in the last year in the .NET ecosystem. Things are moving fast on several fronts: Xamarin, UWP, .NET Core, .NET native, F#, open source, etc.

Putting aside the details, the bigger picture is difficult to grasp. There is movement in all aspects: cross-platform, cloud, mobile, web apps and universal apps. Developers wonder where all this is going to lead and what will be required to get there.

The panelists:

  • Richard Lander - Principal Program Manager Lead on the .NET Team at Microsoft
  • Phillip Carter - Program Manager on the .NET team at Microsoft
  • Phil Haack - Engineering Director at GitHub
  • Miguel de Icaza - Distinguished Engineer at Microsoft

InfoQ: Where are .NET and its languages going and what are the challenges ahead?

Richard Lander: You can see the future of .NET by looking at the wide breadth of device and operating system support that .NET has today, including recent additions from .NET Core. You can build any kind of application with .NET, including mobile, web, desktop and IoT. What’s interesting about .NET is that it is in a very small group of development platforms that can run natively on many platforms, has highly productive and evolving languages and tools and Enterprise Support. This, in short, is the vision we have for .NET.

The choice to open source .NET Core has had a huge impact on .NET. We’ve seen large numbers of open source developers getting involved with .NET Core and related projects and seen a big upswing in general .NET activity on GitHub. We’ve also been surprised by significant corporate interest and engagement. A number of big and important companies have joined the .NET Foundation, like Samsung and Google. Something truly interesting is going on when you see other companies saying “.NET is important for our business … we need to get more involved.” You’ll see us continue to be open and collaborative and increase the ways in which we are doing that.

One of the big surprises in 2016 was the introduction of Visual Studio for Mac. It includes tools for both Xamarin and ASP.NET Core. The Visual Studio for Mac product is a very clear signal that Microsoft is serious about cross-platform development. We also have free tools options for Windows, Mac and Linux, making it super easy to get started with .NET.

The challenge is getting people to recognize that .NET is no longer Windows-only and that it has transitioned to a credible cross-platform development option that should be considered for your next project. We’ve made some huge changes in the last couple years, such as acquiring Xamarin, open sourcing .NET Core and building great cross-platform tools support. We have work left to do to earn people’s interest in the product, and that’s a key focus moving forward.

Phillip Carter: For .NET, the biggest focus right now is the .NET Standard Library 2.0 and having as great an experience as possible using container technologies, like Docker.  .NET Standard Library 2.0 makes the vast majority of .NET APIs cross-platform, and gives developers a simple way to reason about the code they write.  If the only .NET APIs you take a dependency on are from the .NET Standard Library, your code is guaranteed to run anywhere that a .NET runtime does, with no extra work on your part.  The same is true for NuGet packages – if the dependency graph of your system ultimately depends on the .NET Standard Library, it will run everywhere.  That’s huge from a code-sharing point of view, and even more important for long-term flexibility.  Need to target Linux?  All your code which uses the .NET Standard Library runs there.  We’re also taking containers very seriously.  We want your experience with deploying code to a container to be as simple as possible, and we’re building the tooling to make that happen.
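As an illustration of that dependency rule, a class library meant to run on every runtime targets the standard directly in its project file. This is a minimal sketch assuming the VS 2017-era SDK project format and the `netstandard2.0` target described in this panel:

```xml
<!-- Minimal class-library project targeting .NET Standard 2.0 (sketch).
     Per the guarantee described above, code built against this target
     can run on .NET Framework, .NET Core, and Xamarin alike. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```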

For the .NET languages, we’re focused on making tooling for our languages as good as possible out of the box.  We’re shipping some great productivity features in the forthcoming release of Visual Studio, and we’re focused on building even more in the future.  In terms of language features, C# and Visual Basic are focusing on continuing to add more of the functional programming features already found in F#, such as expression-based Pattern Matching and Record and Discriminated Union types, modified in ways which make sense.  Non-nullability is also a huge area of interest for us.  F# is specifically focusing more on better IDE tooling, as it already has these previously-mentioned features, but lacks the same quality tooling experience that C# and Visual Basic have.  In short, more features which continue to highlight functional programming, and better tooling for each language.
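As a concrete example of that functional-style direction, C# 7 (shipping with that Visual Studio release) adds type patterns with `when` guards in `switch`. The `Describe` method here is a made-up illustration, not an API from the panel:

```csharp
using System;

class PatternDemo
{
    // Hypothetical example: C# 7 type patterns with `when` guards,
    // one of the functional-programming-inspired features discussed above.
    public static string Describe(object o)
    {
        switch (o)
        {
            case int n when n > 0: return "positive int";
            case int n:            return "non-positive int";
            case string s:         return $"string of length {s.Length}";
            case null:             return "null";
            default:               return o.GetType().Name;
        }
    }

    static void Main()
    {
        Console.WriteLine(Describe(42));    // positive int
        Console.WriteLine(Describe("hey")); // string of length 3
    }
}
```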

The biggest challenges ahead lie in the sheer amount of work involved in releasing all of the above as release-quality, supported software.

Phil Haack: C# is headed in a great direction. Now that the design is done in the open on GitHub, the community can follow along and contribute to its future. It's been adding a lot of features inspired by functional programming that'll make coding in it more delightful. F# continues to be a wonderful functional language that inspires a lot of features of C#, but seems to have trouble breaking into the mainstream.

.NET itself is headed towards being a compelling cross-platform choice for development, but it faces challenges in maintaining relevance and growth. While the number of jobs for C# developers is high, the amount that it's taught in schools, bootcamps and code academies seems small compared to Node and JavaScript. While the .NET ecosystem is strong and growing, it's still heavily dependent on Microsoft. There seem to be few large companies contributing to its OSS projects, for example. And the number of packages on npm dwarfs NuGet's. Expanding the independent community is important.

Another challenge is that, at the end of the day, the lingua franca of the web is JavaScript. So in terms of being a cross-platform language, Node and JavaScript have a huge appeal because it's one less language to understand when building web applications. This explains the appeal of platforms like Electron, where you can bring many of the web development skills you may already have over to native application development. Thus C# and F# have to make a compelling case for learning yet another language (JS in the front, C# in the back).

Miguel de Icaza: .NET continues a tradition that was set when it was first introduced. It is a framework that continuously evolves to match the needs of developers, that continues to have a strong interoperability story and that strives to blend productivity and performance at the same time.

Today, .NET is available on pretty much every platform in use: from servers to desktops, mobile devices, gaming consoles, virtual and augmented reality environments, watches, and even tiny embeddable systems like the Raspberry Pi.

The entire framework has been open sourced under the most liberal terms possible which opens many doors - from becoming a core component of future Unix systems, to secret new devices being manufactured by the industry.

The blend of productivity and performance is one that is very important to me, as I first started working with .NET back in 2000, when computers had merely a fraction of the power of today's machines, yet it delivered a high-performance runtime that helped developers create robust software.  This was done by ensuring that a safe programming environment existed, one where common programming mistakes were avoided by design.

This blend has proved incredibly useful in a world where we carry portable computers in our pockets, and for game developers. Developers still want to create robust software, in a fraction of the time and with a fraction of the support, running on devices that do not have as much power as a high-end computer.

As for challenges, these are probably the most interesting ones, and I want to share examples of how the framework evolves along the lines that I outlined at the start.

Like I mentioned, one of the cultural strengths of .NET is that we have adapted the framework over the years to match the needs of the market, and these needs change constantly, from work that needs to be done at the very local level (for example, runtime optimizations) to the distributed level (higher-level frameworks).

In previous years we had to shrink the framework to fit on underpowered devices, so we created smart linkers, smarter code generation, and APIs that mapped to new hardware.

And this trend continues.  One example is the past year's focus on enhancing .NET to empower a nascent class of users who develop high performance server and client code.  This requires the introduction of new types, primitives and compiler optimizations in the stack.  On the other end of the spectrum, it is now simpler for .NET developers to create distributed systems, both with Microsoft-authored technology (Orleans, Service Fabric) and with community-authored technology (MBrace).

On the interoperability side of the house, we have been working on various fronts.  We are working to make it simpler for .NET programmers to consume code written in other frameworks and languages, to make .NET code simple to consume from other languages (we already support first-class C, C++, Java and Objective-C), and to make it easier to communicate with services across the network with tools like Azure AutoRest.

InfoQ: How has the emergence of Roslyn helped the growth of the .NET platform and your language? (C# / F# / VB .NET as appropriate)

Richard Lander: This is a really easy one, and I'm going to bend the question a bit to enable me to include the runtime in the equation. If you think of the .NET runtime, it enables these languages and Roslyn to exist, given the model (mostly garbage collection and type-safe memory) that it exposes to developers. So, the absence of that would be C++ (ignoring other industry peers for the moment). The developers on the runtime team work in C++ so that Roslyn can exist and you can use C#. That is very charitable of them!

csc.exe (the pre-Roslyn C# compiler) was also written in C++, so the same model applies there.

It turns out that the developers who write the native components of the platform like C# better. News flash, eh? They actively find ways to do more of their job in C# and convert more of their codebase to it. It's a massive over-simplification, but you can think of Roslyn solely as a project to rewrite csc.exe in C#. At the same time, there has been an equally significant trend to rewrite runtime components in C#, too. Particularly for the runtime, it's a significant architectural effort to convert runtime components to C# since you have a bootstrapping problem, but it's worth it.

A C# code-base is hugely beneficial over C++ for a few reasons:

  • It vastly increases the size of the developer base that can contribute to the codebase.
  • We have excellent tools for C# that make development much more efficient.
  • It's straightforward to make a .NET binary work on other chips and operating systems. This is great for bringing up a codebase like .NET Core on something like Raspberry Pi.
  • It's easier to reason about security issues.

So, in my view, the primary trend is moving our existing C++ codebase to C#. It makes us so much more efficient and enables a broader set of  .NET developers to reason about the base platform more easily and also contribute.

Phillip Carter: Roslyn has been big for us in growing .NET, helping us at Microsoft build better tools and offering developers a platform for building a new class of tooling with deep, semantic understanding of their codebases.

From a language developer’s perspective, one of the immediate benefits of Roslyn is a modern architecture which allows for adding new language features far more easily than the previous compilers for C# and Visual Basic.  Roslyn also introduced Roslyn Workspaces, which is a cross-platform editor abstraction layer.  This is now used in Visual Studio and Visual Studio Code (via OmniSharp) to more easily utilize each language service.  Additionally, F# 4.1 will be the first version of F# which uses Roslyn Workspaces with its own language service, which opens the doors to a vast amount of IDE tooling improvements and new features.  This can position F# as the only functional programming language on the market with quality, first-class IDE tooling, which we believe will help grow .NET.  Roslyn Workspaces are the vehicle that allow us to ship better language features for all .NET languages.

Roslyn Analyzers help grow .NET by offering a set of APIs that allow you to build a new class of custom tooling for your C# and VB codebases.  The first improvement here is enabling people to build powerful static analysis tooling more easily, but you can take it a step further with something like a semantic diff tool, or other things which require an understanding of the semantic structure of your code.  This is a vector which, prior to Roslyn, was realistically only available for those who made money off static analysis tooling.  With Roslyn Analyzers, this area of development is now approachable and available to any .NET developer.
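As a small sketch of that API surface (this assumes the Microsoft.CodeAnalysis.CSharp NuGet package; `CountMethods` is a hypothetical helper, not a Roslyn API), parsing source text yields a syntax tree you can query like any object model, which is the starting point for the custom tooling described above:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynSketch
{
    // Parse source text into a syntax tree, then query it like any
    // other object model -- here, counting method declarations.
    public static int CountMethods(string source) =>
        CSharpSyntaxTree.ParseText(source)
            .GetRoot()
            .DescendantNodes()
            .OfType<MethodDeclarationSyntax>()
            .Count();

    static void Main()
    {
        var code = "class C { void A() { } void B() { } }";
        Console.WriteLine(CountMethods(code)); // 2
    }
}
```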

Phil Haack: It's had a huge impact. Prior to Roslyn, the idea of implementing a code analyzer was limited to a few who dared delve into the esoteric machinations necessary. It's democratized enhancing your compiler. It also has paved the way for people to be involved in language design. It's one thing to suggest a new language feature. It's another to also submit a pull request with the feature implemented so that others can try it out and see what they think.

Miguel de Icaza: It has helped unify the C# experience across all of our supported platforms: Visual Studio, Visual Studio Code and Xamarin Studio, but also new development technologies that make it easy to write and try out live code with Xamarin Workbooks or Continuous on the iPad.

Previously, tool developers had to resort to half-built, half-baked language implementations or hacks.  In the C# world this is no longer necessary, as we now have a way for developers to fully grasp a C# program, manipulate it and explore it in the same way the language itself would.

F# is in a league of its own; it beautifully combines the worlds of .NET and functional programming.  It is a language that has always been ahead of the curve and is great for data processing.  At a time when interest in machine learning is at an all-time high, it is just what the doctor ordered.

InfoQ: How should developers approach the official .NET platform, the .NET Core platform, and the Mono stacks? Is there a way to "keep up"? What should be used for new projects?

Richard Lander: A major focus for 2017 is reducing the number of things you need to keep up with. Initially, we made design choices with .NET Core that made it significantly different from other .NET platforms.  Since then, we have reverted those choices, making .NET Core much more similar to the rest of .NET. In the Visual Studio 2017 release, you will see .NET Core move to the MSBuild build engine, just like .NET Framework and Xamarin. That makes the experience of making .NET mobile and web projects work together, for example, a lot easier.

We are also standardizing the minimum set of APIs that all .NET platforms must have. We’re calling that “.NET Standard”. We’ll define .NET Standard 2.0 in 2017 and ship various implementations of it. It’s the biggest set of common APIs that we’ve ever defined and shipped by a large margin. It’s twice as big as the largest Portable Class Library profile. Once .NET Standard 2.0 is implemented by all the .NET platforms, many things get a lot easier. Many people are looking forward to that. It’s a game changer.

Phillip Carter: This is an important question to answer, especially since we’ve been building so many different things over the years. Here’s the way I like to frame it:

.NET is a cross-platform development stack.  It has a standard library, called the .NET Standard Library, which contains a tremendous number of APIs.  This standard library is implemented by various .NET runtimes - .NET Framework, .NET Core, and Xamarin-flavored Mono.

.NET Framework is the same .NET Framework existing developers have always used.  It implements the .NET Standard Library, which means that any code which depends only on the .NET Standard Library can run on the .NET Framework.  It contains additional Windows-specific APIs, such as APIs for Windows desktop development with Windows Forms and WPF.  .NET Framework is optimized for building Windows desktop applications.

.NET Core is a new, cross-platform runtime optimized for server workloads.  It implements the .NET Standard Library, which means that any code which uses the .NET Standard Library can run on .NET Core.  It is the runtime that the new web development stack, ASP.NET Core, uses.  It is modern, efficient, and designed to handle server and cloud workloads at scale.

Xamarin-flavored Mono is the runtime used by Xamarin apps.  It implements the .NET Standard Library, which means that any code which depends only on the .NET Standard Library can run on Xamarin apps.  It contains additional APIs for iOS, Android, Xamarin.Forms, and Xamarin.Mac.  It is optimized for building mobile applications on iOS and Android.

Additionally, the tooling and languages are common across all runtimes starting with the forthcoming version of Visual Studio.  When you build a .NET project of any kind, you will use a new project system that is compatible with the one existing .NET developers have used for years.  MSBuild is used under the covers for building projects, which means that existing build systems can be used for any new code, and any .NET Standard or .NET Core code can be used with existing .NET Framework code.  All the .NET languages compile and run the same way across each runtime.

What should you use for new projects?  It depends on your needs.  Windows desktop application?  .NET Framework, just like you’ve always used it.  Server or Web application?  ASP.NET Core, running on .NET Core.  Mobile application?  Xamarin.  Class Libraries and NuGet packages?  .NET Standard Library.  Using the standard library is critical for sharing your code across all your applications.

The best way to keep up is to keep your eyes on the official .NET Documentation.  The official .NET Blog also has numerous posts about this, but you’ll have to understand that many historical posts no longer describe the .NET landscape accurately.

Phil Haack: I tend to be fairly conservative. For production projects that my business depends on, I'd focus on the tried and true official .NET platform. But for my next side project, I'm definitely using .NET Core. It doesn't suffer the baggage of over a decade of backward compatibility requirements.

Miguel de Icaza: Thanks to .NET Standard, and especially the APIs we're delivering in .NET Standard 2.0, developers should not need to think too much about which runtime is running their app.  Those that keep up with the internals of .NET may be interested in understanding how we ended up with runtimes that are optimized to certain use cases (for instance, the years that have gone into optimizing Mono for mobile and games), but for the most part, developers just need to know that wherever they go, we've got them covered.

When it comes to choosing a runtime, the way to think about them is as follows:

  • .NET Framework is a Windows-centric framework that surfaces the best of Windows to developers. If you are building a Windows-centric application, this is what you will be using.
  • .NET Core is the cloud optimized engine and it is cross platform. It uses the same high-performance JIT compiler but runs your code on all the supported operating systems (Windows, Linux, macOS).  It does not ship with Windows specific APIs, as they would defeat the cross-platform objective.
  • Mono is the runtime used for mobile and Apple platforms (Android, iOS, watchOS, tvOS), gaming consoles and Unix desktop applications.

InfoQ: What features of other languages do you admire and would consider for C# / F# / VB .NET (as appropriate)?

Richard Lander: One of the things I like about JavaScript (and other languages like it) is that you can just start writing code in a file and you have something runnable with a single line. There is no ceremony and no real concepts to learn (at first). That's valuable. There are some C# scripting solutions that are like that, too, but they are not well integrated. Certain language features are headed in this direction, but we're not there yet. I'd like to be able to have a single line C# file for a "Hello World" Web API. That would be awesome.
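For comparison, the closest thing C# offers today is the scripting dialect: a `.csx` file run with the `csi` tool that ships with recent Visual Studio versions already drops the class-and-Main ceremony, though, as noted above, the integration story is still uneven:

```csharp
// hello.csx -- run with `csi hello.csx`; no class, namespace, or
// Main method ceremony required, much like a JavaScript file.
var greeting = "Hello World";
System.Console.WriteLine(greeting);
```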

Being a runtime guy, I'm going to bend this question to the runtime again. I like JavaScript and PHP, for example, because they can be read and executed quickly from source. I also like Go because it produces single-file native executables. .NET is one of very few platforms that can reasonably do both. I'd like to see us expose both of those options for .NET developers. It’s easy to see scenarios, particularly for cloud programming, where both options can be beneficial.

Phillip Carter: One of the biggest features we admire for C# and Visual Basic is non-nullability.  One of the biggest problems out there, often called the “billion-dollar mistake”, is null.  Every .NET developer out there has had to chase down bugs where they hadn’t properly checked for null in their codebase.  The ability to mark types as non-nullable eliminates that problem, allowing you to push concerns about null from a runtime problem to a compile-time problem.  Making this a reality is a serious challenge, however, because the corpus of C# and Visual Basic code out there today does not have non-nullability.  Thus, such a feature might have to be opt-in and cannot break existing code.  This is actually not much of a problem in F#, where the F# types you use in your codebase are already non-nullable by default, and nullability/non-nullability is already a language feature.  However, when interoperating with other .NET types, null is still a concern for F# developers.  We’re keenly interested in opt-in non-nullability.
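To make the problem concrete, here is a small hypothetical example of the run-time null check that opt-in non-nullability would turn into a compile-time error (`WordCount` is an illustration, not an API from the panel):

```csharp
using System;

class NullDemo
{
    // Today, the compiler accepts a null argument, so every public
    // API must defend against null at run time on every call.
    public static int WordCount(string text)
    {
        if (text == null)
            throw new ArgumentNullException(nameof(text));
        return text.Split(' ').Length;
    }

    static void Main()
    {
        Console.WriteLine(WordCount("keep it simple")); // 3
        // With opt-in non-nullability, WordCount(null) would be
        // rejected by the compiler instead of throwing here.
    }
}
```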

Another set of interesting language features are those which enable better ad-hoc polymorphism; namely, Protocols/Traits and Type Classes.  These allow you to extend the functionality of existing types, and Type Classes in particular are even more flexible because they allow you to define behavior without needing to “pin” it to a particular type.  This makes things like equality semantics for an arbitrary type hierarchy much simpler than they are today.  While Protocols/Traits and Type Classes aren’t on our roadmap, they’re certainly interesting and do solve some of the more nuanced problems you can encounter with .NET languages today.

Miguel de Icaza: I am not responsible for the evolution of those languages, but as a user, I have a list of features that I would like C# and F# to incorporate.

For F#, my request is simple: I am a man who is too attached to his old ways of writing loops.  I want my loops to include support for break and continue. I know this is heresy, but that is what I desire the most :-)

For C#, I would like the language to continue incorporating many of the F# ideas, and in particular, I would like for it to introduce non-nullable references.

InfoQ: The open source shift at Microsoft has been underway for over a year. In what ways did it influence or change the .NET community?

Richard Lander: The open source shift of .NET began in 2008 with the release of the ASP.NET MVC source code, continued with Web API and SignalR, and culminated in open sourcing all of .NET Core in 2015. It has been a journey, and now over 50% of all .NET Core changes come from the community, and the number of C# repos on GitHub grows every day. This is a sea change from the past and paints a very nice picture for the future.

As open source maintainers, we try to do a great job. That breaks out a few ways: being welcoming to newcomers, providing decent repo documentation and instructions, holding a high bar on PRs and enabling community leaders to rise up and help run the project. In some cases, we've met the community’s expectations and in other cases we have conversations going on where they would like something different. In general, I think it is fair to say that the community is happy (and in many cases very happy) with how the .NET Core and related open source projects have worked out over the last couple years.

A positive and healthy open source platform project for the .NET ecosystem is hugely beneficial. It encourages more open source projects from a broader set of people. Effectively, it creates a different tone that wasn't possible before just because you can now say that .NET is an open source ecosystem. The feedback has been very positive, from the community, from small and big business and from the public sector.

Phillip Carter: I think that the shift to open source has done a tremendous amount of good for the .NET community.  I think the greater .NET community is still warming up to open source, and might be a bit jarred by our sudden transition, but people are already seeing the value and contributing.  We’ve seen an uptick in community contributions across the board – even in our documentation, which is not normally associated with open source development, we have almost 100 non-Microsoft contributors.  One community member even reviews pull requests that Microsoft employees make on our own repository!  I am personally noticing a .NET community that feels empowered to contribute to its development stack, and I couldn’t be more excited about it.  This is only the beginning, too.  As we launch release-quality tooling in Visual Studio for .NET Standard and .NET Core (which is open source as well, by the way), I expect an increasing number of .NET developers to watch and contribute to .NET.

Phil Haack: I may be biased, but it's been underway for more than a year. It's only in the past year that it passed the tipping point. I think it's showing the community that Microsoft is dead serious about being a real open source player. It may still take time for that to sink in. After all, MS has been heavily antagonistic to open source for much longer than one year. But I think it's a net positive. The .NET community can now actively participate in the future of .NET in a manner that wasn't possible before. All of the new .NET Core development and language design is being done in public on GitHub, and they accept contributions. This is good to see. More and more, the community feels like a part of the effort and not a sideshow afterthought.

Miguel de Icaza: It has energized both the open source .NET community and those that merely consume .NET. The benefits of open source for a framework are on full display with the opening of the framework, and you can see contributions to the codebase across the board, from performance to memory usage, to improved precision, to scalability and so on. There is a virtuous cycle at work right now, as we nurture and grow the .NET community together.


Open source in .NET is now part of the landscape and can be expected to continue growing with .NET Core and .NET Standard 2.0. Microsoft is focusing its efforts on the cross-platform story, polishing the platform to appeal to non-Windows developers and platform implementers.

About the Panelists

Richard Lander is a Principal Program Manager Lead on the .NET Team at Microsoft. He graduated from the University of Waterloo (Canada) in 2000 with an Honours English degree, with intensive study areas in Computer Science and SGML/XML Markup Languages, and went straight to work at Microsoft. Richard has worked on many aspects of .NET (he joined the team in 2003), including integration with Windows, the .NET application model, and customer communication. Today, Richard focuses on the .NET Core open source project, cross-platform .NET with ASP.NET 5, and partnerships with other .NET companies (Xamarin, Unity). Richard is also a frequent poster on the .NET Blog and @dotnet twitter account. In his spare time, he swims, bikes and runs, and participates in a few local races each year. He enjoys 80s British rock and Doctor Who. He grew up in Canada and New Zealand.

Phillip Carter is a Program Manager on the .NET team at Microsoft. He currently works on F# tooling, .NET, and .NET documentation.
Prior to joining Microsoft, Phillip was a student at Oregon State University, where he worked as a student developer-mentor and was the president of the mobile app development club.

Phil Haack is an Engineering Director at GitHub in charge of the Client Apps - a group that consists of the Desktop, Atom, Electron, and Editor Tools teams. Haack joined GitHub in 2011 and is a prominent member of the .NET community. Prior to GitHub, Phil worked at Microsoft as a Program Manager. His teams shipped NuGet and ASP.NET.

Miguel de Icaza is a Distinguished Engineer at Microsoft focused on the mobile platform where his team’s goal is to create delightful developer tools. With Nat Friedman, he co-founded both Xamarin in 2011 and Ximian in 1999. Before that, Miguel co-founded the GNOME project in 1997 and has directed the Mono project since its creation in 2001, including multiple Mono releases at Novell. Miguel has received the Free Software Foundation 1999 Free Software Award, the MIT Technology Review Innovator of the Year Award in 1999, and was named one of Time Magazine’s 100 innovators for the new century in September 2000.
