Transcript
Tsai: Hi, my name is Mei-Chin [Tsai] and my team owns .NET Language and Runtime.
Parsons: I'm Jared Parsons. I'm a part of the .NET Language and Runtime team. But specifically, I own the C# compiler.
Tsai: Before I go on, I would like to know who was in yesterday's performance talk? Thank you for returning. This is our goal and this is our journey. We want to be a platform for building anything. We started as a desktop runtime. As time evolved, we picked up different workloads: web, cloud, mobile, gaming, IoT, and AI. What we think is really important is developers' knowledge, as an asset. When you pick up a language or ecosystem, we would like you to be able to reuse it when your next workload shows up. You should not have to drop the knowledge that you accumulated over the years; you continue to be marketable and companies continue to be successful.
Innovating on Desktop is Hard
I want to talk about desktop. That's where our roots started, and I want to talk about how hard it is to innovate on desktop. How many of you are still developing desktop apps? Have we ever broken you? Yes? No?
Parsons: You can be honest. It's okay.
Tsai: It's okay to be honest with me. When we decided to ship .NET with Windows, it was a key milestone, because Windows is the central distribution, and we got to touch every single machine out there without us making an effort, without you downloading it. What we did not know is that our success could be our baggage. As Windows moved to Patch Tuesday, Windows Update patched the framework underneath your application, without you being able to validate your application again. We often found ourselves breaking applications that we did not know about, because the applications out there are not on our radar.
Let me give you a couple of examples of breakage that were actually super surprising. Actually, there are more horrible stories, but I'm not about to share all of them. First one: we changed array sort. The implementation of array sort was quicksort. Quicksort is not stable; we all know that. And we changed it to introspective sort. It's supposed to be more performant, and it's also not stable. So we shipped. We actually broke you, one of you, maybe not in this room, somewhere on earth. A WinForms application had one of its lists of countries turn upside down, because within the instability there is stability, and people depend on it. And that is compat.
The second question: do you think performance is compatibility? Anyone who wants to agree? Do you think performance is compatibility? Can you raise your hand? Yes, I probably broke you before with that [inaudible 00:03:07] one. I remember in early December, Miley came to my room and he was panicking. He told me that in order to support portable PDBs, we had to change our code for collecting the stack trace when an exception is being thrown. He told me, "Performance is not very good." And I said, "Can you define not very good?" He said, "Maybe 100 times slower," on getting the stack trace when an exception is thrown. Did we have our nightly tests, did we pass them? Yes. Did we run our stress? Yes. Did we run our compatibility bar? Yes. Did we run our partners' tests? Yes.
Well, maybe it shouldn't be a critical issue, because you shouldn't throw that many exceptions in your application to start with. Sure enough, right before Christmas he was in my office again, telling me that a healthcare provider in one of the countries in Europe was on the floor. Nobody could see doctors because they could not pull up the insurance policies. We got a very nice, polite email from the people who developed the application. They should be apologizing to us: the application is written as a Java and C# hybrid, and exceptions are the control flow.
When an exception is being thrown, the JVM side asks for that stack trace and stores it away, because that's essentially their security. But you know what we did over the holiday season? We scrambled. We rolled back the change, because we did not even know how to recover from that performance regression. We actually did not get the real fix in until 4.7.2. So that just gives you a flavor of why innovating on desktop is hard.
Also, shipping with Windows is a blessing; it's also a curse. Now that I think of it, it's probably a bigger curse than blessing. Windows has certain windows in which to ship. They view everything underneath their platform as a high-risk component, and we have always been labeled a high-risk component. We do not get to decide what we build until they already know what they want to build. They gave us four months less than their shipping schedule because we needed to stabilize; they don't trust us. So that means when you receive a 6-month or 12-month product from us, we only had four weeks to do development. Now you see why it is really hard to innovate, and why it's really slow. It really doesn't matter how much runway we have; the time that we can actually develop in is the issue.
We Really Need to Innovate
But we really need to innovate. Just look at the diversity of modern workloads; we are starting to see servers with 128 cores, versus the Raspberry Pi. One runtime has to be scalable across both ends of the spectrum. We used to see long-running processes coming to us, where reliability is important: if we crash the process, we bring down your service. But we started seeing microservices. There are swarms of them and they want fast startup. They look at reliability differently, because fault tolerance is built into many layers, not a single point.
There's also the question of what we should optimize for. If we could optimize for everything and be perfect, perfect. But we know performance is hard; there's always a tradeoff. How do you trade off between startup, throughput, and latency, which we talked about yesterday? How do you trade off between size in memory, size on disk, size over the wire, and network speed? Is the network faster than disk? If you ask different people, you'll get different answers. We were actually told once, by someone who refused to tell me what his configuration was, that in his configuration networking was actually faster than disk. That was a mind-boggling concept; everyone's situation is different.
With that, we knew that we needed a new playground to which we could bring our customers along, to all the different workloads. In the meantime, we also started to receive a lot of different requests and inquiries from our partners. They were checking our Linux story. They asked, do we have a Linux offering? If we did not have a Linux offering, they would probably seek an alternative. So we knew that in order to even sustain our customers, we had to open up that opportunity for them. We knew we must go to Linux. But to go to Linux and be credible, we had to be open source.
We must take the lessons that we learned before and not repeat the same mistakes. So we need to give customers choices, deployment choices. Application-local: if you want to bring the runtime with you, take it with you. If you want machine-wide with multiple versions of it, and you want to pick one of them and hold on to it really hard, you will have that choice. We know that we need to innovate fast and ship fast; otherwise, we will not be relevant. With that, I would like to pass the talk to Jared, on how we got to .NET Core.
.NET Open Source Journey
Parsons: As Mei-Chin noted, one of our big priorities was creating a sustainable open source offering for our platform. That's pretty much the minimum bar if you're going to try to ship on Linux. Here's a bigger, broader timeline of the open source journey we've taken in .NET. The ECMA specification was released way back in 2001, before we even released the runtime. The runtime released about a year later. The importance of the ECMA spec being out in the open, though, was that it gave others the ability to create their own .NET runtime implementations. Even though Microsoft built the first .NET Framework for Windows only, the spec was deliberately portable across OSes and chipsets.
The Mono project began, spearheaded by Miguel de Icaza, with the specific goal of implementing the .NET stack on Unix-like platforms. In about 2008, ASP.NET MVC went open source. This was important because it was kind of the first application development framework from Microsoft to be released in the open, yet the underlying runtime, the compilers, and the framework were still closed. So while the community could help drive the MVC project, they couldn't really fundamentally change the way that things worked in the .NET ecosystem.
After that limited success, we kind of did nothing, till about 2014. And this is kind of where the .NET Core journey began. Around this time is when we began having these conversations with our customers. We could see the future and we knew that Linux was coming, and this was going to be important. So at Microsoft Build, we released the C# compiler as open source, onstage, for the world. Later in November, the CoreCLR, the framework, and everything else started to be released into the open as well. Since then, we've been building on top of this foundation. We've continued to release all of our .NET ecosystem parts on GitHub.
Stubbing Our Toe
Now, the move to open source was not completely and totally smooth. As I said, C# was kind of the first part of this journey that went open source, and we did this on CodePlex. A few other projects had gone open source before this, but this was kind of the first part of the core development platform that was available for our users to contribute to. The problem, though, was that it was only the compiler that we put in the open. The rest of the C# experience- so IntelliSense, debugging, refactoring- was all still closed source, as well as the tests. The day-to-day life of the C# developer was actually still inside TFS. That's where we did our development, our code reviews, our tests, and our processes. They were all internal.
The only visible activity that we gave to the community was our commit history when we pushed changes to CodePlex. And that's only when we remembered to push it. It was a completely manual process, and we didn't always remember to push the button. So the community did get updates, but not as often as they should have. And while having the code and history available to the community was valuable, it wasn't fostering a vibrant community of contributors. Yes, we did get a number of PRs, but these were mostly doc updates or simple bug fixes. Oftentimes, these PRs were incomplete. Because remember, a good portion of our code, probably about two thirds of it, was still closed source. The community thought they were doing this great work, but really, they were actually breaking a whole bunch of our code when they changed APIs. So we would have to take their changes, merge them internally, fix up all the tests, and then push it out into the public. These PRs, instead of helping us, were actually creating a lot of work for us. So probably not an ideal situation.
The other problem this creates for the community is that they see everything on a time delay. By the time they see the code that we've authored, we've had the design debates, we've done the code reviews, and everything is pretty much settled. These changes, more or less, just end up getting dumped on the community. There's no participation on their side, they have no opportunity to shift our debate, and they can't even see what our decision-making process is or what this team values.
Move to GitHub
After this experience with C#, as we were starting to release the CLR and the framework into the open, we decided we really needed to change our approach, because what we were doing wasn't creating the sustainability we were looking for. The first step was moving from CodePlex to GitHub. There's a very simple reason for it: that's where all of the community was. There were a number of other Microsoft projects that had gone onto CodePlex, and we'd all kind of seen this lackluster involvement from the community. It seemed pretty evident and clear: if we wanted community involvement, we needed to go and be where the community was. As I said, the CLR and CoreFX started their releases on GitHub. Roslyn moved over shortly after. And since then, we've all been on GitHub and had much nicer community involvement.
But we didn't stop at just moving our community-facing page to GitHub; we moved our entire operations over. That meant everything from switching to Git from TFS, to using GitHub for issue tracking, to doing all of our changes through pull requests. This required a lot of investment. For instance, a number of our engineers knew Git and were very familiar with it, but a lot of them did not. Also, we had to build a CI system from scratch, because at this point, all of our CI was still running internally. And because it's Microsoft, we weren't using one CI system; we all had our own little CI systems. So it wasn't migrating one big thing over, it was migrating a whole bunch of small things. We chose to use Jenkins because at the time, it was the one thing that met all the needs we had. But if you've ever run a Jenkins server, it's not a service. It's an infrastructure. So we had to build up a team to run our Jenkins server for our entire organization.
That was a lot of work. The community got to wake up to a couple of splashy blog posts about .NET moving to GitHub. But behind the scenes, we'd spent months of effort getting there. At the same time, we also moved all of our code into the open. It was really clear that having half your product in the open was just not a good idea. We were finally able to convince management that this was the way forward. They bought into it. We moved everything into GitHub, and it was much nicer.
The other thing we decided to do was we wanted to thank the community for coming with us on this journey. We wanted to show appreciation to all the members who followed us through all of these painful steps along the way. So for about the first year of being on GitHub, we started sending out these thank-you mugs to anyone who got a PR merged into one of our repositories. The mugs were customized. We went down to this place in Microsoft and we used a laser to engrave both the GitHub username and the commit SHA for the PR that was merged. We didn't tell anyone we were doing this. We just found really non-creepy ways to ask people for their addresses. By the way, apparently, everyone will just tell us where they live if we ask. We didn't tell them why we asked. We'd say, "Hey, what's your address?" and then we sent them these mugs.
People, as you can see here, started posting pictures of these mugs on Twitter and various other social media. That generated quite a bit of momentum for developers to get over to GitHub and get a PR merged so that they could get this swag before we ran out of them. This was great at getting the community involved in our open source efforts, and just generating positive sentiment around what we were doing. The question, though, was whether this would generate the sustained momentum that we were looking for, and that we felt was necessary to be successful.
.NET Open Source Success
Looking at this chart, it's easy to see that we were able to succeed on that mark. There are actually two diagrams here. In the foreground, what you see is the accepted PRs every month from the community since we've moved to GitHub. This isn't just a rush of developers trying to get a cool piece of swag. This is sustained involvement from the community. At this point, the number of community PRs we merge every month outnumbers the PRs from our full-time employees. So we are actually merging more code from the community on a month-by-month basis than from our own employees. That's really awesome.
When we drill down and look at individual repositories and their commit patterns, we're seeing a lot of contributors who are showing pretty deep growth in our products. For instance, on the C# repository, we have one user, alrz, who kind of started off with a few issues, and then he sent us a couple of bug fixes. Over time, he's grown to the point where he submitted small language features, then we had him come to C# language design meetings, virtually via Skype. Just this morning, we were finishing up a review where he has a pretty significant feature he's trying to get checked in for C# 8. This is amazing. And it's not just our repository; we see this across all of the repositories we have on GitHub.
Now, in the background, and it's a little bit harder to see, you have a heat map that shows where all of our contributors are coming from. Every one of these dots represents someone who has opened an issue or submitted a PR, and you can see we're getting contributions from all over the globe. That's really encouraging. So overall, we've seen a lot of companies taking a bet on .NET, and some of them specifically because it was open source and they felt like they could make the changes they needed to be successful. At this point, we've had more than 20,000 contributors from about 3,700 companies, and over half of those are coming from outside Microsoft now.
Now, we did all of this open source movement in order to create this .NET Core product. .NET Core 1 was kind of our first deliverable of the CoreCLR runtime, and the goals we had here were pretty straightforward. We wanted to deliver a cross-platform runtime that targeted Windows, Linux, and Mac OS X. Particularly, we wanted to enable cloud workloads for Linux. This was one of our more motivating scenarios, with ASP.NET customers, both internal and external. We also wanted to slim down the runtime and framework.
Our starting point, again, was the full desktop runtime and framework. That included a lot of deprecated technology and crufty old APIs, things like remoting and binary serialization. Some of these things just didn't make sense in a cross-platform environment. And so we removed those pieces, as well as a number of other APIs, in order to get our payload smaller and make it easier to get our port up and running. Additionally, having less of this baggage gave us a little more flexibility to innovate as we moved forward. We also wanted to target a flexible deployment story. As Mei-Chin mentioned, we really wanted to put deployment of the framework back in the control of the application developers. No more of this: you're on the desktop, and Patch Tuesday comes in.
Lessons Learned
The initial version of CoreCLR accomplished all of our main goals. It delivered the cross-platform promise. We had a number of teams and companies use it very successfully in production. But at the same time, it taught us a few lessons about what we needed to be doing going forward. The first was that we trimmed the API down way too far. The provided API set was great if you were in a greenfield project. If you wanted to start a Linux ASP.NET project, .NET Core 1 was pretty excellent and it gave you the tools you needed. But if you were someone who had a big existing code base and you wanted to make it run cross-platform, or maybe move it to .NET Core, what you found was a lot of frustration, because almost certainly when you changed your target over you got a whole bunch of "this API doesn't exist" errors.
This was pretty bad for the experience because it took all the intuition of developers who had spent a lot of time on the .NET Framework and made it irrelevant. This applied to both .NET Core and .NET Standard. .NET Standard is essentially the API target where, if I want to write code that runs on the desktop framework and the CoreCLR framework and Mono and Unity, I can target .NET Standard. It had all the same problems as .NET Core: a super limited API set that made it very hard for people to port their code over.
.NET Core 2.X
So, looking forward to the version two release of CoreCLR, we had a couple of other goals. The first one was performance. After all, one of the main reasons we started this project was that we wanted a place where we could rapidly innovate on our platform. Performance is one area we were very eager to dig into, particularly in areas which cross the entire stack, where we can look at the runtime, the languages, and the framework and say, "Can we make an across-the-stack change that really improves the performance of .NET overall?"
We also wanted to close the API gap we had with the desktop framework. We needed to turn this from a source of friction when people were moving from desktop to .NET Core, to something that just felt super natural and was very easy, and frankly didn't even feel like a move. We also had a bit of a focus on ramping up our developer tool experience. For example, things like slimming down our project files, increasing the performance of our CLI tools.
Now, on performance, we definitely made a huge number of strides in version two. What you see here is a slide showing our performance on the TechEmpower benchmark, specifically Round 14. As you can see, we're about twice as fast as the Java servlet and about four times as fast as Node.js. This is not just TechEmpower; customers were seeing similar wins in their real-world applications. Raygun, for example, was able to go from 1,000 requests per second on their Node.js runtime to about 20,000 with .NET Core.
In the 2.1 release, we continued to make a number of strides there. This benchmark is actually one that's internal to Microsoft, but it runs on the same hardware that the TechEmpower benchmarks run on. This is not the JSON one; that comes later. But you can see our plain text workload was able to improve by 12% between the minor versions, our JSON workload by 11%, and our fortunes workload by 123%.
One of our other focuses was closing the API gap with the desktop framework. We wanted to make this feel much more natural for developers, and at the same time make it easier to port .NET assets onto .NET Standard or .NET Core. As mentioned before, .NET Standard is what you target when you want code that's portable across a number of different runtimes. In version two, we brought back over 20,000 APIs from desktop. This really lowered the bar for entry. For instance, on the C# compiler, when we initially moved to .NET Standard 1, there was a bit of friction. I went back and redid the port to 2.0, and it just worked right out of the box. We only had to make one minor change to what we were doing. It was much smoother. The first port took about a month of work to complete.
Additionally, we released the Windows Compatibility Pack. This brought a further 20,000 APIs back to .NET Core. Now, these tended to be more Windows-specific, though there were a number of them that work cross-platform. But for anyone who was doing Windows-based development, there are now 40,000 more APIs that are just available. So these additions made .NET Core a much more comfortable platform for developers who come from desktop. It was now actually fairly easy to port existing libraries or code bases to target .NET Standard and be portable, or, for people who want to go all the way to .NET Core, it was much easier.
.NET Core 3.X
Looking forward to version three, there are a couple of themes that we're targeting. The first is enabling desktop workloads. This is making it possible to author WPF and WinForms components on top of the core stack. In the same way that .NET Core is side-by-side, these UI stacks will also be side-by-side, which means that we can actually do some bug fixing in WPF. I don't know, has anyone ever looked at the WPF controls implementation? It's complicated, and compatibility is basically impossible. The second is, we want to continue to innovate inside the runtime. We got a lot of wins by doing this in version two. We think if we keep focusing on this, we can continue to move the bar forward, both in performance and in expanding the capabilities of the runtime.
We've talked a lot about our emphasis on performance. I wanted to take a bit of a deeper look at one of the features that was key to our performance story in .NET Core 2, and that's Span<T>.
When looking at the ecosystem, we saw a pattern of problems here: too many string and array allocations across the ecosystem. There were a number of reasons for this. Sometimes the customer had just chosen a bad approach that forced a lot of allocations. In many cases, though, we found we were just giving the customers little other choice. The .NET APIs as a whole tend to prefer having strings and arrays as input. For instance, if you want to parse an integer out of a piece of text, you have to allocate a string first to get there. That means if you have a big web request header, and you want to get a couple of little numbers out of it, for every one of those numbers, you have to allocate a string. Again, these are small little allocations, but over the totality of your application, they start to add up.
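To make that concrete, here is a rough sketch of the pattern being described; the snippet and its names are illustrative, not taken from the talk. With the string-based APIs, getting one number out of a larger piece of text forces a throwaway string allocation:

    // Illustrative only: pulling a number out of a header the pre-Span<T> way.
    string header = "Content-Length: 42";
    int colon = header.IndexOf(':');
    string numberText = header.Substring(colon + 1).Trim(); // allocates a temporary string
    int contentLength = int.Parse(numberText);               // the value we actually wanted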
Furthermore, in text-based applications like websites, you often end up dealing with text in two forms: strings and character arrays. These types simply don't mix. So whenever you're trying to write an algorithm that processes text, you end up with a bad decision: either you write the algorithm twice, or you essentially take on an allocation and say, "I'm going to take the allocation to convert that char array into a string and write the API once."
Span<T>
Now, Span<T> gives us one type for representing contiguous memory, whether that memory comes from a string, an array, native memory, or the stack.
It also enables no-allocation slicing. Today, if you have a string and you want to extract a portion of that string, pretty much the only choice you have is Substring, which forces an allocation. The same is true with arrays. And now with Span<T>, you can take a slice of that memory without allocating anything.
Due to the escape analysis, we've actually done work in the compiler so that you can allocate arrays on the stack in a lot of cases. This is not something where, if you follow a pattern, the JIT can look at it and maybe it'll optimize it to the stack. You can actually declaratively say, "Please allocate this array on the stack." And there's no risk of safety issues, there's no risk of returning bad pointers; if you do it wrong, the compiler will tell you. There's also a sibling type to Span<T>, Memory<T>, which can live on the heap.
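Here is a minimal sketch of the two capabilities just described, assuming .NET Core 2.1 and C# 7.2 or later; it is illustrative rather than taken from the slides:

    // Slicing without allocation: the spans are views over the original string's memory.
    ReadOnlySpan<char> date = "2018-11-07".AsSpan();
    ReadOnlySpan<char> year = date.Slice(0, 4);   // no new string is created

    // Declarative stack allocation: the compiler guarantees this span cannot escape the method.
    Span<byte> buffer = stackalloc byte[256];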
So I think it's important to dig into the implementation of Span<T>. On .NET Core, Span<T> is implemented as an intrinsic type that the runtime understands, rather than just an ordinary library type.
Being intrinsic, though, is important because it allows the runtime to heavily optimize this type. It can essentially give it all the characteristics of arrays. It can elide bounds checking in the right places and do pretty aggressive inlining. That means that in cases today, like when you have a string and you want to move it to a read-only span of char, you can do so with no performance penalty whatsoever. And as I noted before, they have the general usability of arrays. So if you're familiar with using arrays, with indexers and Length, using a span is literally no different.
Here's a simple function demonstrating where Span<T> helps.
Like I said, calling this once is not that meaningful. But imagine you're processing a file with thousands and thousands of lines; these allocations will start to add up. So using spans, we can eliminate these allocations without really having changed anything about the algorithm. The Slice method here is just returning a read-only Span<char> over the original text, so nothing is copied and nothing is allocated.
Now we have all of this parsing and returning, and we've incurred no allocations. Additionally, because we moved our input from a string to a read-only Span<char>, the same code works whether the text came from a string or a character array; we no longer have to write the algorithm twice.
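The slide code itself is not reproduced in the transcript, but a sketch of the kind of method being described might look like the following; the shape and the names are assumptions:

    // Parse two comma-separated integers out of a line without allocating intermediate strings.
    static (int First, int Second) ParsePair(ReadOnlySpan<char> line)
    {
        int comma = line.IndexOf(',');
        ReadOnlySpan<char> first = line.Slice(0, comma);   // views over the input, not copies
        ReadOnlySpan<char> second = line.Slice(comma + 1);
        return (int.Parse(first), int.Parse(second));      // span-based overloads, .NET Core 2.1+
    }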
Span<T> was a feature that cut across the entire stack: the language, the runtime, and the framework. In the framework, we had to go through all of CoreFX, find all these hundreds and hundreds of overloads which took strings or arrays, and give them a Span<T>-based equivalent.
Earlier, we noted how hard it was to innovate on the desktop runtime. Deeply integrating a feature like this would just never fly there, because the Windows team would tell us it's too risky and the wins simply aren't worth it. So we're not doing this intrinsic Span<T> work on the desktop framework.
So I've talked a lot about how much code can be cleaned up and how much simpler things can get when we have Span<T>. Here's one example from the framework: the implementation of String.Equals.
This is not some fancy ordinal case comparison. This is memcmp. It is: take these two strings and tell me, is their memory exactly equal? Yet, in order to get the performance we needed out of String.Equals, this is how we had to write it. But now that we have Span<T>, that whole body can be replaced with a simple, safe comparison of two spans.
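As a hedged sketch, not the actual framework source, the same "is the memory exactly equal" check can now be expressed over spans instead of unsafe pointer arithmetic:

    // Illustrative ordinal equality check written with spans.
    static bool EqualsOrdinal(string a, string b)
    {
        if (ReferenceEquals(a, b)) return true;
        if (a is null || b is null || a.Length != b.Length) return false;
        return a.AsSpan().SequenceEqual(b.AsSpan()); // safe, and vectorized by the runtime
    }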
Performance .NET Core 2.0 vs 2.1
So String.Equals is just one of the many places where we were able to take advantage of this in the .NET Core stack. It was actually used within a large number of places in our most primitive types. Formatting and parsing is another place where .NET applications seem to have an unnecessary number of allocations, due to a tendency to allocate strings in sub-operations. For example, when you're printing an int into a string for formatting, we actually first take the int, convert it to a string, and then take that string and write it into the string builder. So we have this unnecessary intermediate representation.
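A small sketch of the span-based alternative, assuming .NET Core 2.1 (illustrative, not the framework's own code): the integer is formatted directly into a caller-supplied buffer, so no intermediate string is ever allocated:

    // Format an int straight into a buffer instead of calling ToString() first.
    static ReadOnlySpan<char> FormatValue(int value, Span<char> buffer)
    {
        if (!value.TryFormat(buffer, out int charsWritten))   // int.TryFormat, .NET Core 2.1+
            throw new ArgumentException("Buffer too small.", nameof(buffer));
        return buffer.Slice(0, charsWritten);                 // the digits, with no heap allocation
    }

A caller can pair this with a stack-allocated buffer, for example Span<char> buffer = stackalloc char[16], so the whole formatting step stays off the heap.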
This table shows a number of the performance benefits we were able to get by taking Span<T> through the framework.
Here's a graph from the bing.com team. What you're seeing here, that precipitous drop: bing.com used to be on the desktop framework, and they migrated to .NET Core 2.0. Then, when they deployed 2.1, that drop represents a 34% reduction in their latency numbers. Latency is pretty important to them, and they have a lot of data on this. Through that data, we were able to dig in and understand what caused this big drop. All those methods I just showed before, like String.Equals, those were the things being exercised, and that was what was helping them out. So it's been very validating to get numbers like this and see that we've been able to have a big impact on real-world applications. With that, I'm going to step back and let Mei-Chin talk about where we're going.
Where We’re Going
Tsai: With all the work that we put into .NET Core 2.1, we actually started to see migration waves. Bing, when we refer to Bing here, it's the front end of Bing. And the back end of Bing is also on the path to migrating to 2.1 as well. If you saw a performance number like the slide before this, who wouldn't want it? PowerShell Core and Azure DevOps, and I won't read everything on the slide, but these are all in the same mindset, currently engaging with us to try to move to .NET Core.
This is a chart of similar critical components that we've been working with and tracking; they have already migrated to .NET Core 2.1 as well. I think I met a gentleman yesterday, from a French company, [inaudible 00:41:17], I don't know if he's here or not. They're also in the process of migrating to 2.1.
Where are we going? Jared walked you through where we are. Are we done? No. If we're not done, what's next? What should you expect? Jared asked me, "Are you really showing this slide one more time?" I said, "Yes." I want to remind you guys that you're in good hands. Different workloads will show up. These are the workloads we know today; there will be more workloads tomorrow. And there will be new architectures. We don't know if there will be a new platform or not, because a new platform is always hard to mature, right? Even if a new architecture does not show up, the chips are evolving: GPUs, and a bunch of other optimizations at the chip level. We will be there. We do the optimization and we will move you with us.
Tune CoreCLR for the Future
We want to make sure that we are tuning CoreCLR, and its features, with that picture in mind; that is where we focus. This first bullet sounds kind of serious. Yes, actually, it is. Think about it. Being an enterprise language means that you place a lot of trust in us. What do you trust us for? You trust us that we are going to give you productivity. What is productivity? When your program doesn't work, do you have to use printf() to debug it? No, you have tooling there to help you. When you write your program incorrectly, compiler errors are your first line of defense; you fix your program. When you go to deploy your program, the debugging experience is super important. There are tools there for you to troubleshoot performance, if you do have an issue, and to monitor your application.
Performance is also one of the fundamentals we focus on. Span<T> is one example of that.
Second, we see the future as polyglot. We are committed to better cross-language interop. There are two languages that are currently higher priority to us than others: Java and Python. They are extremely popular in different workloads. We believe that you will choose the language that is adequate to get your specific task done. But in the larger application you're writing, at the endpoints or the other places where you want to reuse an ecosystem's class library from a different language, we will enable you. So we will invest in cross-language interop.
We want our runtime to be more customizable. We believe what we have built for you is good for 95% of people. If you land in that 5%, some people will come and say, "I want a GC that does something different." You can. Observe our repo: we have moved the GC to a local GC, and we actually have the JIT as a pluggable component. There will be more components that are pluggable; the application model, for example. Even the application binary model is not something that you should expect to be fixed. We will enable you to plug in your application model.
Configurable. When you are not doing that large-grained customization of the runtime, there are a bunch of heuristics in the runtime, and we would like to give you choices. I think in yesterday's talk there was a gentleman who came and asked me, "Are large objects always going to be 85K?" I said, "Well, it will stay at 85K. But if that's not what fits your workload, you have a choice." More language and runtime innovation: we're looking into UTF8String, we are looking into type classes. None of this work could we really do on desktop.
Our Mindset: Always Curious and Experimenting
What is our mindset? We would like to believe we're always curious and experimenting. While I was writing this slide, I was wondering: if a manager came to me and asked whether all my team is doing is experimenting, the answer is "No." We focus on shipping that core value to you, but we keep capacity for this, because we are the lowest layer of the whole .NET stack. If we are not ahead of our time, the whole stack will be stale and become obsolete. We have a lot of data. We analyze the data and we choose what matters to experiment on and what is likely to be a success. Span<T> was one of those.
I know that the GC heap and generational GC have been serving everybody very well for quite some time, but look at the modern workloads, right? The web transaction comes, the transaction goes, the server still stays. That data is transient. What if you could just tell us that it was transient? CoreRT is our experiment trying to find a lean runtime. The runtime itself is the least portable part of .NET; it depends on the platform, it depends on the architecture. We are experimenting on how lean we can be with our runtime, how fast we can be. UTF8String we mentioned. We're also looking at WebAssembly. If it does take off, what does it take from us to run there? If we are not ahead of it, if we are not observing when things happen, you will be left out. We want you to be able to reuse your skill set.
JIT optimizations. If you were there yesterday in the performance talk, you know that we have been playing with [inaudible 00:47:06]. That unlocks a lot of possibility for optimization. The biggest problem the JIT team has is actually that they come in and say, "There is so much we can do. Which one should we focus on?" Data driven: figure out what matters, and optimize those. There are many more, but I won't go there. How am I doing on time? Three minutes.
Experiment - Arena
Then I will just quickly cover Arena and give you an update about this experiment that we are doing. So in case you are watching CoreCLR, and of course you are, this being experimental work, you wouldn't be surprised. The observation is that a lot of workloads are transaction-based, and when the transaction is done, the data actually goes away. So what can we do with that observation? This is the Arena API we designed. But we did not do it in CoreCLR, because the partner that is walking this journey with us is actually on desktop right now. But it is coming to CoreCLR; that's our next step.
We can design an API. The API in yellow is how you declare that all allocations within that yellow range use the arena. Then the last highlighted one is disposing the arena when you're done. The green one is for the case where, within that transaction, there are things that you'd like to escape the transaction and send back to the GC heap.
With a simple thing like this, and actually this is over 12 months of experimenting, we first wrote a benchmark against it. A benchmark makes it easy to get performance results, because we wrote the benchmark as the best-case candidate for Arena; we're actually seeing it 1.6 times faster. Then we went to Bing and said, "Hey, we have this thing for transaction-based workloads. How much work does it take for you to migrate to use this? How can you measure it? This is the feature; how can we help you?" So they were doing a validation phase. We are getting very close to Bing migrating things to use Arena, and this is actually the back end of Bing. After they migrate to .NET Core, we are going to port Arena to .NET Core so that they can continue that journey. They're trying to use this to bring down their P95.
Experiment - CoreRT
The second experiment is essentially CoreRT. This was our attempt to match C++ performance. We also want to shrink our runtime as small as possible; as I said, that is the least portable part. We want to run fast. You want to have less platform dependency. This particular experiment is also trying to figure out what the ease of deployment can be. If it's a true single file that you are getting, you don't even need to carry the runtime with you; the runtime is embedded into the EXE. And with the small footprint on disk, what we see is that this is actually a great solution for constrained execution environments. What constrained execution environments? WebAssembly is one. On many of these platforms, some features are not there. For example, you cannot just go and open a file when you're running on WebAssembly.
There are also, for example, streaming platforms where you cannot go today. Or there are platforms like enclaves, secure environments, where the process has limited abilities. So those things are on our radar. We are not jumping in; many of them have not gained traction yet, but we are ahead of it. We are aware of them. That is the key takeaway that you should have.
When we look at .NET Core, it is becoming a more mature runtime and framework. It is cross-platform, it's open source, and we're trying to give as many choices to customers as possible. I see the yellow light telling me to stop. It's fast-moving, and I think this is the future. The last slide before I really, really stop: I just want to share this with you. Over the last year, we added one million new monthly active .NET developers, in the last year alone. Right now we have about 5 million developers, and .NET Core itself has over half a million developers. If you're happy with desktop, you don't need to migrate; desktop will be there to serve you. But if you are looking at all the characteristics that we showed you, the innovations, the different workloads, the performance gains, then consider .NET Core. In your migration, if you have any problems, feel free to contact us. Thank you.