
The Future of Operating Systems on RISC-V


Summary

Alex Bradbury gives an overview of the status and development of RISC-V as it relates to modern operating systems, highlighting major research strands, controversies, and opportunities to get involved.

Bio

Alex Bradbury is co-founder of lowRISC CIC, aiming to bring the benefits of open source development to the hardware industry by producing a high quality, secure, and open source SoC and associated infrastructure. He is a well-known member of the LLVM community, and is code owner and primary author of the upstream RISC-V back-end.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Today I'll start by giving you a bit of an introduction to RISC-V, what it is and why you should be interested. I'll talk a bit about where RISC-V is in terms of status, pick out some topics which I think are relevant to a software audience, giving a taste of the discussions ongoing in that community right now, the controversies and things which are actively in development, and then look ahead to the future, what's maybe in the cards, and what opportunities there might be for doing something different with open standards and open instruction sets.

Introduction to RISC-V

How many people here have heard of RISC-V before? Basically everybody, with some exceptions, so I'm going to give a brief introduction to what RISC-V is. First of all, RISC-V is an open-standard instruction set architecture. What is an ISA? It is effectively the interface between software and hardware: the encoding of instructions, the semantics of those instructions, and, to be honest, a whole bunch of other things that go with it as well, such as the memory model.

In comparison to something like ARM or MIPS or x86, RISC-V is an open standard, meaning that anybody is free to implement it. Beyond that, it has a somewhat open development model. This is managed by a not-for-profit, which was set up to manage the evolution of RISC-V, the RISC-V Foundation, which has members including Google, Qualcomm, Microsemi, Nvidia, NXP, a whole variety of other companies.

One thing I should make clear is that I'll be talking quite a lot today about open implementations, or open source implementations of hardware and reference designs, but that isn't a necessary property of RISC-V. Many of us are interested because we're interested in open source as it applies to hardware, but there's nothing about RISC-V which requires an open source implementation. In fact, it's a very conscious decision that the RISC-V Foundation is set up to support both proprietary and open implementations.

Another aspect of RISC-V which differs from some of the other more established proprietary instruction sets is that it is set up to allow, and in fact encourage, custom extension. If I go and purchase a license to build an ARM processor, even if I get an architectural license, which companies like Broadcom or Qualcomm or Samsung have, and which allows me to do my own implementation of the ARM instruction set, I don't also have the ability to add my own custom instructions. That's ultimately something where ARM will take feedback from its customers and a future revision of the ISA might add new instructions, but it's not something which individual licensees have the ability to do, even though we might want to: perhaps there's a new security feature you want to implement, or you want to add customized instructions to accelerate a particular use case.

There are other ways of adding acceleration of course, from more tightly coupled accelerators through to off-core ones, but custom extending the instruction set itself is off limits. As I mentioned, RISC-V has an open-ish development process. You have to be a member of the RISC-V Foundation to take part in some of the development discussions. This is relatively low cost: I think the minimum entry is around $5,000 per annum for companies, and then it's free for active open source developers or otherwise $100 for individuals.

I'm not going to paint a picture to say that everything works fantastically off the bat. Open source development and open collaboration is a difficult problem. It's not all rainbows and unicorns from day one, but it's already proven successful in a number of cases. The work on the standard for the memory model is a particular example of something which has been, I think, a real success story of people across industry and academia working together. RISC-V itself came from UC Berkeley, and was based on a series of academic projects. It's called RISC-V because it's actually the fifth iteration of a RISC-style architecture that they'd worked on. Ultimately they decided that it was worth putting in the extra effort to document it, review all the design decisions, and then share it more widely.

It's an almost bizarre situation we've ended up in: almost every other interface we interact with is an open standard in some way, from HTTP and SQL through to even external hardware interfaces such as USB, but the fundamental interface between hardware and software has remained something which is non-standard and not free to implement.

With the RISC-V instruction set architecture itself, I'll go through some of the technical design decisions, but actually, and this is a good thing to say about any ISA, it's fairly boring. It doesn't make particularly quirky or surprising design decisions. 32-bit and 64-bit RISC ISAs have been around for decades now and there's a huge wealth of knowledge on how to build them. That knowledge has been applied through good engineering work in order to produce the RISC-V standards. There's nothing about the user-level ISA, the part used by the software your userspace will be executing, which is particularly novel, beyond the fact that it's open, standard, and increasingly well supported by industrial partners and software.

A key aim of RISC-V is scalability. It aims to be an ISA which will scale all the way up to high-performance or rack-scale computing, and all the way down to deeply embedded devices with no MMU, use cases even smaller than individual microcontrollers, such as a small core deeply embedded in some IP block on a much larger SoC design. This is actually a real problem, and something which attracted companies like AMD to RISC-V early on. At one of the early RISC-V developer meetings, someone from AMD put their hand up and said the reason they were interested was that when they ship an AMD processor, a recent one had 15 different ISAs on it.

You have your sound module, the power management, whatever AMD's equivalent of the management engine is; these all tend to implement different ISAs and they're somewhat hidden from the user. Of course, this is something of a nightmare in terms of maintaining different compiler toolchains. The RISC-V approach, in general, is that if standard solutions don't work for you, that's fine, you can add your own extensions. Great care has been taken to ensure that there's a large amount of encoding space for defining those custom extensions.
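To make that concrete, here's a minimal sketch of how a custom instruction might be exposed to C code. The RISC-V spec reserves the major opcodes custom-0 through custom-3 for vendor extensions, and the GNU assembler provides a .insn directive for encoding instructions in that space; the func3/func7 values and the operation itself are hypothetical placeholders here.

```c
#include <stdio.h>

/* Emit a hypothetical R-type instruction in the custom-0 opcode space
   (major opcode 0x0b). Format: .insn r opcode, func3, func7, rd, rs1, rs2.
   What the instruction actually computes is up to your hardware. */
static inline unsigned long my_accel_op(unsigned long a, unsigned long b)
{
    unsigned long result;
    asm volatile(".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                 : "=r"(result)
                 : "r"(a), "r"(b));
    return result;
}

int main(void)
{
    /* On stock hardware this traps as an illegal instruction; on a core
       implementing the extension it returns whatever the unit computes. */
    printf("%lu\n", my_accel_op(1, 2));
    return 0;
}
```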

This flexibility can, of course, be a disadvantage. As Justin mentioned, I spend a lot of my time on compiler development; I'm the primary author and upstream maintainer of the RISC-V backend for LLVM. One of the challenges is, of course, that RISC-V is such a varied architecture. It's structured as a series of base ISAs, with a series of standard extensions which layer on different functionality, like M, A, F, D, and C: Multiply, Atomics, Single-Precision Floating Point, Double-Precision Floating Point, and the Compressed instruction set. So if you hear people talking about RISC-V, they'll quite quickly start talking in these very opaque little sequences, these ISA strings, which is kind of weird, I'll admit, but you do get used to it.

I mentioned the base ISAs; the main ones to worry about are RV32I and RV64I. These are the integer-only subsets of the 32-bit RISC-V ISA, the version where registers are each 32 bits in width, and the 64-bit ISA, where the registers are 64 bits in width. These can be implemented in systems which have an MMU, TLB, and full virtual memory system capable of running Linux, or, as I said before, something which is much, much smaller and lighter-weight.
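As a small illustration of how these base ISAs and extensions surface in software, here's a sketch using the predefined macros that GCC and Clang set from the -march string (rv64imafdc and so on); the macro names come from the RISC-V C API and should be present in recent toolchains.

```c
#include <stdio.h>

int main(void)
{
#if defined(__riscv)
    /* __riscv_xlen is 32 when targeting RV32I, 64 for RV64I */
    printf("base ISA register width: %d bits\n", __riscv_xlen);
#if defined(__riscv_mul)
    printf("M (multiply/divide) extension enabled\n");
#endif
#if defined(__riscv_atomic)
    printf("A (atomics) extension enabled\n");
#endif
#if defined(__riscv_flen)
    /* 32 with F only, 64 when D is also present */
    printf("floating point registers: %d bits\n", __riscv_flen);
#endif
#if defined(__riscv_compressed)
    printf("C (compressed) extension enabled\n");
#endif
#else
    printf("not compiling for RISC-V\n");
#endif
    return 0;
}
```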

There's also the compressed instruction set, the C extension I mentioned before. This makes RISC-V a variable-width instruction set, in that the standard ISA can mix 16-bit and 32-bit instructions, and custom extensions can actually go to 48-bit or 64-bit instructions, giving you a pretty huge encoding space for custom additions. An interesting aspect, particularly for what we're talking about today in terms of interesting directions for operating system design, is the split of the privileged versus unprivileged ISA. There's been some argument about this in the RISC-V community. Arguably other ISAs don't have such a clean split, and arguably, due to pragmatics and real-world problems, the RISC-V split isn't completely clean either, but as much as possible there's a split between the user-level ISA, the definition of the semantics of instructions ignoring support for running operating systems, and concerns such as handling interrupts and exceptions, support for virtual memory, and the machine-level control and status registers which you might use to customize the handling of a trap and that sort of thing.
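For the curious, the variable-width encoding is easy to decode: the low bits of the first 16-bit parcel determine an instruction's total length. Here's a small sketch following the length-encoding table in the unprivileged spec.

```c
#include <stdint.h>

/* Return the length in bytes of a RISC-V instruction, given its first
   16-bit parcel, per the instruction-length encoding in the spec. */
static int insn_length(uint16_t first_parcel)
{
    if ((first_parcel & 0x3) != 0x3)
        return 2;  /* bits [1:0] != 11: 16-bit compressed instruction */
    if ((first_parcel & 0x1c) != 0x1c)
        return 4;  /* bits [4:2] != 111: standard 32-bit instruction */
    if ((first_parcel & 0x3f) == 0x1f)
        return 6;  /* bits [5:0] == 011111: 48-bit instruction */
    if ((first_parcel & 0x7f) == 0x3f)
        return 8;  /* bits [6:0] == 0111111: 64-bit instruction */
    return -1;     /* longer or reserved encodings */
}
```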

Then there is the privileged ISA, which defines all of those things, and defines a standard interface sufficient to, for instance, boot a FreeBSD or Linux kernel. Of course, beyond the ISA there are a whole bunch of standards which you would start to care about. Some of these come under the purview of the RISC-V Foundation, others do not. An example would be the debug specification, specifying what you need to do in order to hook up GDB, via JTAG, to your RISC-V implementation.

One which is on the border is the interrupt controller specification. For those familiar with ARM in its earlier days, that suffered from a proliferation of different interrupt controllers, so there's an attempt to specify a standard interrupt controller for RISC-V, which has now become two standard interrupt controllers. These are all opt-in, and in fact, so is the privileged ISA itself. The privileged ISA, as currently specified for RISC-V, is, just like the base user-level ISA, trying to be pretty boring: just providing sensible support for running modern operating systems.

If you want to do something which is much more novel or more interesting and goes beyond the current interfaces which people tend to be using, then you're totally free to take the standard, unprivileged ISA and get, hopefully, full compatibility with compiler toolchains and user-level software compiled against a standard operating system API, but then go and do completely your own thing on the privileged side.

Background: FPGAs, ASICs & Semiconductor Economics

I'll give a little bit of background. I'm not quite sure what familiarity people have with hardware and FPGAs, so I thought I'd take a couple of minutes to define terms which I'll probably be referring to later on in the talk. An FPGA is a Field Programmable Gate Array. It's effectively programmable hardware. You buy one of these boards, this is a Nexys A7 sold by Digilent which costs a bit under $300. You'll then take your hardware implementation, let's say written in Verilog, which is a hardware description language, and put it through the somewhat clunky FPGA synthesis tools, which will spit out a bitstream. You then put that bitstream onto your FPGA, and that bitstream could, for instance, be implementing a RISC-V core capable of running Linux.

Through lowRISC, we have an FPGA-ready distribution of a RISC-V core plus associated SoC peripherals, which runs at 50 megahertz on one of these Nexys boards, which is clearly more than an order of magnitude slower than a custom ASIC, a custom chip. On the other hand, it's something which you can very rapidly iterate on. It might take half an hour or an hour to run through the FPGA tools, but then you get it back, versus obviously the huge cost and very, very long lead times of producing a chip. Of course, at 50 megahertz it's not unusable for Linux, and actually a pretty reasonable speed for anybody doing microcontroller-based systems, [inaudible 00:14:11] and that sort of thing. It's actually not that different to a microcontroller chip; the cost is very different, but in terms of performance it's roughly similar.

I mentioned one of the key advantages of working with an FPGA is rapid iteration time. It's also useful in order to start to get some sort of performance numbers, as you have an effectively real hardware implementation which you can map to the FPGA and get numbers out of. It is easy to end up with somewhat misleading numbers though; I'll talk a little bit later about a virtual memory optimization which was explored using an open source RISC-V implementation as a base.

If you just naively take performance numbers from an FPGA you are liable to get slightly misleading results, because the memory interface of a DDR controller is a hardened piece of logic that's running at effectively full speed, whereas your core is running relatively slowly, at around 50 megahertz. The ratio between your core performance and your memory performance is somewhat off: your L2 cache misses are going to be much, much cheaper in relative terms than on a real ASIC. This is something which, even in the published literature, can trip people up quite a lot, as it's not actually trivial to work around unless you implement a simulated memory controller which models what a real memory controller would be doing.
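To put rough, illustrative numbers on that (the 100 ns DRAM latency, 50 MHz FPGA clock, and 2 GHz ASIC clock below are all assumptions for the sake of the arithmetic):

```c
#include <stdio.h>

int main(void)
{
    const double dram_ns = 100.0;  /* assumed DRAM access latency */
    const double fpga_hz = 50e6;   /* assumed FPGA core clock */
    const double asic_hz = 2e9;    /* assumed ASIC core clock */

    /* A miss costs ~5 core cycles on the FPGA but ~200 on the ASIC,
       so raw FPGA numbers understate how much cache misses hurt. */
    printf("miss cost on FPGA: %.0f cycles\n", dram_ns * 1e-9 * fpga_hz);
    printf("miss cost on ASIC: %.0f cycles\n", dram_ns * 1e-9 * asic_hz);
    return 0;
}
```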

Before you go through all that, it remains incredibly useful to modify a software simulator, QEMU, gem5 or something similar. In terms of producing ASICs, producing real chips, for the people who aren't familiar with semiconductor economics, you effectively have two choices. You can produce a very small number, tens to maybe 100, and you typically do this via a multi-project wafer. As you may recall, semiconductor manufacturing is a lithographic process, so on a multi-project wafer an intermediary will collect together test designs from a whole bunch of different research groups or companies, stick them all on the same mask, produce some wafers of that, dice them up, and ship them back to you.

Economically that only produces, say, 10 to 100 parts, but that, of course, is very useful because you might spend $100,000 to get these test chips back before you then spend multiple millions of dollars on a mask for a full volume run. The issue is that when you then go to the full volume run, you're buying an entire mask to yourself, with your design stamped out many, many times, and that is really only economic when you're producing millions of chips. There's quite difficult ground in between, which is definitely one of the real barriers for open source hardware. It's quite difficult to go from something which a tiny community of people can access to something which a slightly larger community can access. It's go big or go home.

In terms of semiconductor licensing, as I mentioned, ARM and MIPS have a similar model. For instruction set architectures and core designs which are licensable, it typically works like this: you would talk to a company, pay an initial upfront fee, and then pay royalties based on the number of chips that you produce and sell. That might be the same with RISC-V depending on the company you're speaking to, but I think most companies, even those with proprietary implementations, are going for very low upfront costs, either just an upfront cost and no royalties, or a very low upfront cost and then some royalties.

If you're taking a fully open implementation, then there's no upfront costs, no royalties, you can stamp out a design or a modified version of design as many times as you want without worrying about increasing your licensing fee.

RISC-V Status

RISC-V has now been going for around four years since its public announcement. Berkeley released it to the world, and the original developers of RISC-V then founded a VC-backed startup, SiFive, which has been doing a lot of work in the RISC-V ecosystem, but there are also a number of other vendors producing cores and tooling around it.

On the compiler side, GCC support is upstream. As I mentioned, I'm primarily pushing the work on the RISC-V backend for LLVM and support in Clang. Support for that is upstream for a large portion of the RISC-V variants which we need to support, and is under active development. Glibc is upstream, Musl has a downstream port which should hopefully be upstream soon, and there are initial Rust ports for bare-metal targets based on LLVM work I did, while Linux support is blocked on the next stage of LLVM development. There's also some Go support which stalled for a while, but I believe has now picked up again.

I mentioned that simulation is a popular way to get started. Although it's great that you can modify a real Verilog implementation of a RISC-V core of interest, it's clearly much more productive to get started by modifying a higher-level, easier-to-modify software representation. There are choices such as QEMU and gem5, and what's become the de facto reference software model for RISC-V is Spike, which came from UC Berkeley. There's also TinyEMU, a nice project worth mentioning, as it was written by Fabrice Bellard of QEMU and FFmpeg fame.

In terms of available hardware, SiFive produce development boards: on the Linux-capable side, the HiFive Unleashed. There's also a microcontroller-class development board, and there's a Kendryte board from a Chinese manufacturer. More recently, NXP have started the OpenISA effort, which has a development board available with a PULP microcontroller on it, and then there are FPGA-ready distributions, like the one we publish at lowRISC.

In terms of open source implementations, the two major ones, of course, are Rocket, supported by UC Berkeley and SiFive, and PULP from ETH Zurich, which now includes a variety of cores, on the microcontroller side but also, on the Linux-capable side, a design called Ariane.

On the operating system side, there's support for at least FreeRTOS, Zephyr, seL4, and, interestingly, some initial work on Tock. Tock, for people who haven't encountered it before, is a really interesting RTOS implemented in Rust, and there's some initial work on targeting RISC-V based on the Rust RISC-V LLVM toolchain. It has a number of interesting aspects: it makes good use of Rust's memory safety and also has a model for loading applications into your RTOS, whereas more usually everything would be compiled into a single binary and fixed at compile time.

Linux and FreeBSD support is upstream and well-developed. There's also, on the more experimental side, HarveyOS, which is a modern reimagining of Plan 9, and a HelenOS port is also available; there was a talk about the HelenOS work at FOSDEM recently. We have a series of boot loaders available. I'll say a little bit more about bbl and OpenSBI later, as that touches on an interesting topic in the RISC-V community. There's also been sustained and active work particularly on Debian and Fedora.

Large portions of the Linux-based system for both distributions are compiled and working. I'm actually a blocker in a sense on the LLVM side, as Rust has become quite a key dependency for compiling a modern Linux system, with libraries such as librsvg being rewritten in Rust. In both cases, they're using an older version of librsvg from before it was ported to Rust, while they await upstream LLVM support for Linux targets.

Why are RISC-V pages 4KB?

I've thrown up a large quote here, and you're going to be very tempted to read it, but I'm just trying to pick up on something I really like about the RISC-V specifications, which is, for most of the major design choices, if you wonder why a decision was made, there are these really nice little asides, which explain exactly what was considered and what went into that decision, and why they went that way.

Page size is one which remains a slightly controversial topic. Some people feel that 4 kilobyte pages are a little bit old hat, but if you want to revisit the initial discussions there's a nice little aside in the RISC-V privileged spec. There are similar asides on most design decisions; the user-level spec, for instance, has an explanation of why RISC-V doesn't support conditional move, even though AArch64, POWER, x86, and most other current ISAs went with that choice.

SBI: Background

I wanted to pick up on a few topics that I thought would be particularly relevant to this audience. The first one I hinted at before, which is this issue of the SBI, or Supervisor Binary Interface. This basically goes back to the idea of separating the user and privileged-level ISAs. In RISC-V, there are not just two privilege levels; on a Linux-capable system, you tend to have at least three. You have machine mode, which is the most privileged level; code running at that privilege typically has full access to the system. This is what everything which can't be handled at a lower privilege level will ultimately trap to.

The supervisor level is what your operating system kernel would run in, and the user level is of course, what your userspace binaries would run in. The RISC-V community tried to define something called the SBI, or Supervisor Binary Interface, which aims to provide a well-defined interface between the supervisor and machine mode. This is used to handle either operations which aren't accessible from supervisor mode or potentially operations where the best implementation is likely to vary from implementation to implementation, or execution environment to execution environment.

The Linux kernel and FreeBSD ports are written to use SBI calls for some key functionality, such as setting timers, inter-processor interrupts, and remote fences; there's a basic debug console implementation, as well as functionality such as shutting down the system.
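As a rough sketch of what an SBI call looks like from the kernel's side, here's the legacy calling convention: a function ID in a7, arguments in a0 onward, then an ecall instruction that traps from S-mode into the M-mode firmware. EID 0x01 below is the legacy console_putchar call.

```c
/* Minimal sketch of a legacy SBI call from supervisor mode. The newer
   SBI spec also puts a function ID in a6, but the idea is the same:
   load registers, execute `ecall`, and the M-mode firmware handles it. */
static void sbi_console_putchar(int ch)
{
    register unsigned long a0 asm("a0") = (unsigned long)ch;
    register unsigned long a7 asm("a7") = 0x01; /* console_putchar EID */

    asm volatile("ecall"
                 : "+r"(a0)
                 : "r"(a7)
                 : "memory");
}
```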

I mentioned M-mode. As well as having full system access, it's also something you would use to emulate any missing functionality. Suppose I have a system which doesn't support misaligned loads and stores even though the user-level spec says it should, or it doesn't have a hardware implementation of floating point, or perhaps it implements most of the floating-point instructions but not all of them. Then you can trap those instructions in M-mode, have your M-mode firmware implement them, and then return back to supervisor mode, so it's a very, very minimal form of virtualization in a sense. You could view the kernels running on top of this as being minimally para-virtualized in some way, though exactly to what extent you consider that to be the case depends on exactly what goes into the SBI interface.
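A simplified sketch of that trap-and-emulate pattern might look like the following; the CSR names and the illegal-instruction cause value come from the privileged spec, while the handler structure and the emulation itself are schematic.

```c
#include <stdint.h>

#define CAUSE_ILLEGAL_INSN 2  /* mcause value for illegal instruction */

static inline uintptr_t read_mcause(void)
{
    uintptr_t x;
    asm volatile("csrr %0, mcause" : "=r"(x));
    return x;
}

static inline uintptr_t read_mepc(void)
{
    uintptr_t x;
    asm volatile("csrr %0, mepc" : "=r"(x));
    return x;
}

static inline void write_mepc(uintptr_t x)
{
    asm volatile("csrw mepc, %0" : : "r"(x));
}

/* Called from the assembly stub installed in mtvec. After emulating
   the faulting instruction, advance mepc so the `mret` executed by
   the stub resumes supervisor-mode execution after it. */
void mmode_trap_handler(void)
{
    if (read_mcause() == CAUSE_ILLEGAL_INSN) {
        uintptr_t epc = read_mepc();
        /* ...fetch the instruction at epc and emulate its effect... */
        write_mepc(epc + 4);  /* assume a 4-byte instruction */
    }
}
```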

There have been some proposals to extend it further, even to handling things such as context switches. The motivation is that, as we talked about, there are various extensions to the RISC-V ISA, and some of those extensions may introduce new user-visible state. We've certainly encountered that before when playing with extensions on the security side with tagged memory: for our implementation at least, you'd want to swap out your tag control register on a per-process basis. You might have similar state for vector support or other accelerators.

One of the challenges here is, where does it end? You can implement APIs for context save and restore, but at least from our experience with, say, implementing tagged memory support, it didn't end there, because we also wanted to implement support for saving tags when swapping memory out to backing storage, and in a number of other cases as well. It's difficult. I think the current proposal for SBI tries to remain very, very minimal, but there are proposals that try to extend it in the same sort of way that firmware interfaces on ARM have been defined for power and clock management.

This is something which has been quite a large source of controversy in the RISC-V community. Ron Minnich, who started the Coreboot project, has written a series of well-written posts about this on the mailing list, discussing his concern about this direction. In particular, with M-mode you have a very, very privileged piece of software providing core system functionality, abstracting away the hardware in a way that the operating system kernel likely doesn't have the ability to replace or introspect, or really have any sort of visibility into at all. This makes it very easy to end up with potentially opaque binary blobs which remain resident after boot, which, from a pragmatic point of view, kernel developers often don't like: they find these blobs tend to be buggy and a source of hassle. It also raises security concerns, and a more general concern about the ability to modify and replace code on a system that you own.

Virtualisation

Virtualization is another really nice example of successful collaboration and development in the RISC-V community. RISC-V started out with the idea that, as well as M, S, and U modes, you'd have an H, or hypervisor, mode above the supervisor level. That would be the basis of virtualization support. Paolo Bonzini, one of the KVM maintainers and very active in the KVM and QEMU community, along with a number of other people, fed back that this approach has a number of downsides. Although it was actually the approach ARM started with in ARMv7, they later moved to something closer to x86, which is in turn closer to the RISC-V hypervisor extension ultimately proposed by Paolo working with Andrew Waterman and others.

Effectively, rather than adding this new H mode, it adds a virtualized supervisor and a virtualized user mode. I'd encourage you to go and look up the mailing list thread or the slides for more details. I think that's an example of where people such as yourselves, with expertise in a particular area, got involved in the community, came up with a proposal, wrote it up and sent it in, and it actually worked really well, which demonstrates that there's the opportunity to have real impact.

RISC-V and Open Hardware: the Future

Looking ahead to RISC-V and open hardware in general, what do we need to get rapid hardware and software innovation? From my point of view, there's a wish list. To start with, you need an idea, or else what are you going to do? Then an open and shared standard of some sort, so that there's some chance of shared infrastructure in terms of compilers and operating systems; doing this from scratch every time is clearly unworkable. Perhaps most importantly, high quality, well-tested, or, to the extent possible, verified open source implementations: you want access to something you can take off the shelf, whether that be a software simulator or a hardware implementation.

You want some assurance that when you modify it a bit to add in your new change, if it doesn't work, it's probably your fault rather than because it's all a buggy mess held together with duct tape. You want an active development community, so that when you actually start sharing patches or ideas there's some response and it's not just going into a void, and, of course, some mechanism for capturing those contributions. The RISC-V Foundation manages one aspect of this, responding to proposals and discussing them. I think there's probably more to be learned from communities such as Swift and Rust, who have a much more developed methodology for taking in detailed proposals or RFCs from community members, and then applying a life cycle covering who makes a decision on those, what feedback is given, timelines, and that sort of thing.

The other aspect is, of course, getting these changes shipping, either in a future specification revision or in hardware. With lowRISC, the not-for-profit which I co-founded, we're working towards a model with a regular hardware tape-out, so that when people contribute changes, they can have some assurance that in, say, the next release in 18 to 24 months, it'll be available in some form, either produced by us or by an industrial partner producing a derivative design, and get shipped back to them, because I think making it real is important.

Malleable hardware

I've talked a lot about the opportunities with RISC-V. It's clearly not a totally clean slate opportunity unless you throw away all of the operating systems and all of the infrastructure, the software that you want to reuse, but that's never going to be the case. For some market segments, there actually is a substantial amount of freedom. You don't need to rely on the Linux kernel, you don't need to rely on libc, you don't need to rely on the same design decisions which have been made before. There'll be some cases where, because you want to make use of an existing ecosystem, you're limited in the scope of changes that you're able to make, whereas there are others where there's the ability to be much more bold. There's no longer any limitation in terms of licensing; it's just a matter of economics and whether the relative cost of producing a software stack for those changes, and maintaining that software stack over time, outweighs the benefits.

With RISC-V, or any instruction set architecture, we face the same challenges we've always faced in terms of security, energy efficiency, and performance. But we do have the ability to look at changes which cross all of the layers: the ISA, the microarchitectural design, the operating system, compilers, languages, and beyond. Although there have clearly been aspects of this with previous instruction set architectures, it's always been behind closed doors. A vendor maintaining an instruction set architecture will take input from community members and licensees, but it's not an open process and it's not a process which is available to anybody.

From idea to prototype

Suppose you have an idea for an architectural modification; I have an example I'll talk through on the next slide. The steps are simply working out what changes you need to make. It would make most sense to produce an initial prototype with a simulator, make the necessary software changes, and get that debugged and working. Then modify a hardware implementation, testing on an FPGA or with Verilator. An important step in terms of getting it out to a wider community is publishing those changes and writing them up. This is, of course, something which has been done through academia and computer architecture research forever.

One of the nice aspects of RISC-V is that we're able to produce potentially much better evaluations, and much more meaningful comparisons with existing work because there's a range of high quality, open-source RISC-V implementations to modify, rather than just trying to produce simulations based on estimating what it might look like if you were to modify an x86 core or an ARM core.

As I mentioned, the pathway to inclusion in shipping hardware can be difficult, but there are multiple groups with various solutions to this. It might be as simple as getting adopted by someone who's doing RISC-V work anyway. I mentioned SiFive, which spun out of UC Berkeley; they've done a huge amount of work in this area. Their model is to try to reduce the cost of doing new silicon, to reduce the cost of new design starts, and there's a whole array of other startups and organizations in this area.

An example of this, which I thought was quite a nice one, was adding direct segments support to RISC-V. I've called it an optimization for page-based virtual memory; the optimization is actually not using page-based virtual memory, so this is where you avoid the TLB by mapping some portion of a process' virtual address space directly to contiguous physical memory. There's a paper from ISCA in 2013 which discusses this. It goes into more detail about why it might be interesting even when you have large or huge pages; in short, you get much more flexibility in terms of alignment and size.
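Conceptually, the mechanism from that ISCA 2013 paper is a set of per-process registers, commonly described as BASE, LIMIT, and OFFSET, that map one contiguous virtual range straight to physical memory, with everything else falling back to normal paging. A schematic sketch, with illustrative names:

```c
#include <stdbool.h>
#include <stdint.h>

struct direct_segment {
    uint64_t base;   /* start of directly mapped virtual range */
    uint64_t limit;  /* end of directly mapped virtual range */
    uint64_t offset; /* paddr = vaddr + offset inside the range */
};

/* Try the direct segment first; on a miss the hardware would perform
   the usual TLB lookup and page table walk instead. */
static bool direct_translate(const struct direct_segment *ds,
                             uint64_t vaddr, uint64_t *paddr)
{
    if (vaddr >= ds->base && vaddr < ds->limit) {
        *paddr = vaddr + ds->offset; /* no TLB entry consumed */
        return true;
    }
    return false; /* fall back to page-based translation */
}
```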

This was actually really nice, because there was a paper last year by the original authors, where they took the idea from 2013 and applied it to a RISC-V hardware design. They don't hold back in terms of criticizing their previous work: they point out the limitations of the simulation methodology they used, where it was in fact producing wrong or misleading results, as it was miscounting and introducing artifacts in the TLB misses which were confusing things.

It ended up being 50 lines of modification to a RISC-V design implemented in Chisel, the hardware description language, and then 400 lines for the Linux kernel, so a relatively manageable piece of work. Rocket is one of the RISC-V designs I mentioned previously, started by UC Berkeley and now supported by SiFive, and it's implemented in the Chisel hardware description language. You may have heard of Verilog and VHDL; Chisel is a newer, more novel hardware description language, implemented as an embedded domain-specific language in Scala.

One of the hopes is that it's more accessible to software developers. People have different views on whether it's succeeded in that aim, or whether other novel HDLs have. To be honest, with lowRISC we've taken a bit of a step back and we're focusing more on traditional hardware description languages like SystemVerilog, as we found that Chisel adds more complication when dealing with FPGA and hardware synthesis tools, and it risks alienating both software developers and hardware developers. But some people have a very, very productive time with it, so it's swings and roundabouts.

Novel security solutions

I think there's a huge amount of potential in terms of novel security solutions. With lowRISC we've done work with tagged memory, which is effectively associating metadata with memory on a very fine-grained basis. In our case, each word of memory has extra tag bits, say 2 to 4 bits, which can be used to encode extra access control information, or for other cases where you might want to mark pieces of memory. There's a really nice piece of work from an intern that looks at the work needed to add support for that to the Linux kernel, which I guess touches on some of the issues I mentioned previously with the SBI: you can see that if you're adding an extension which touches large parts of an operating system kernel, even if each part is relatively small, it's very difficult to define an API or interface which allows you to abstract all that away, unless you ultimately end up re-implementing most of the operating system in your abstraction layer.
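As an illustration of the idea, here's a toy software model of per-word tags used for access control; in real hardware the tags would be stored alongside memory and checked by the pipeline, and all the names and tag meanings here are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical 2-bit tag encodings for a word of memory. */
enum tag {
    TAG_NONE     = 0, /* normal memory */
    TAG_NO_READ  = 1, /* loads from this word should trap */
    TAG_NO_WRITE = 2, /* stores to this word should trap */
    TAG_TRAP     = 3  /* any access should trap */
};

struct tagged_word {
    uint64_t data;
    uint8_t  tag; /* 2 bits in hardware, a byte in this model */
};

/* Returns false where real hardware would raise a tag-check trap. */
static bool tagged_load(const struct tagged_word *w, uint64_t *out)
{
    if (w->tag == TAG_NO_READ || w->tag == TAG_TRAP)
        return false;
    *out = w->data;
    return true;
}
```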

There's an interesting story on Spectre mitigations too. Of course, superscalar implementations of RISC-V have the same fundamental issue as implementations of any other architecture, but it's really nice that we have, in BOOM, an open source superscalar RISC-V design: a full hardware implementation on which to test and explore research ideas for mitigating issues such as that.

End goal

For me the end goal is ending up with a productive and public feedback loop between everybody involved, from application engineers, compiler authors, and micro-architects to ISA designers and beyond. This is something which has clearly happened in smaller circles, in companies such as ARM, MIPS, and Intel, but I think with RISC-V the opportunity is for it to happen much more publicly, in a much more mass-participation way.

There's no reason that RISC-V shouldn't be used as the starting point for every undergraduate learning about computer architecture, for researchers doing their Ph.D. research, and for startups who want to take a design and modify it. The ultimate hope is, of course, that it can lead to more rapid innovation and a much clearer path from design to real shipping hardware.

Conclusion

For me the key challenges are, first, lowering the barrier to entry; I've discussed a number of the barriers previously in this talk. Second, increasing the incentive to participate, which links back to ensuring that there's an active development community and a clear story for when you actually share changes: who reacts to them? How do they get adopted? How can you see them in real shipping hardware? Then there's the potential success disaster: suppose we achieve all this and get a landscape of interesting, diverse, and novel processor implementations; how can we ensure that we maximize code reuse and sharing of infrastructure between all these different architectures?

There might be more work to do in terms of abstractions at the operating system and kernel level in order to support that. If any of this sounds interesting to you, I should note that we're hiring; we have seven open positions.

Questions and Answers

Participant 1: I'm curious about, I think you barely touched on it, the difference between the implementations, like the SoCs, the whole combined systems. Recently, I've been involved with porting a certain operating system to ARM, and what we learned, and I didn't know before, is that in porting to ARM, the compiler actually does most of the work; the actual work is to support the SoC. What's the situation there with RISC-V? Are the SoCs as varied as in the ARM space, or are they more like in the Intel world, where basically everything more or less acts the same?

Bradbury: It's too early to say, in a sense, in that there aren't a large number of shipping production RISC-V SoCs. There is an attempt to standardize around the cases which have been most painful in the past. The interrupt controller was one example I mentioned, where having a standard interrupt controller does drastically simplify the task of porting your RTOS or operating system. For arbitrary peripherals such as UART, SPI, and I²C, it remains to be seen.

With what we're doing, we're trying to work on very well-specified reusable components which people are able to take and adapt into their designs, but there's no requirement that people do that. There's bound to be the sort of diversity we see with other architectures; look at the number of UART implementations in the Linux kernel. Hopefully it won't grow too much, but it's just a fact of shipping hardware.

Participant 2: Linus Torvalds recently wrote about ARM and how it wouldn't be successful in the commodity server space until there was a viable desktop development environment for people to buy and have in their homes essentially. Where do you think RISC-V is on that journey to having that?

Bradbury: I think it has a long way to go in the server space. Even with the might that ARM has had behind it and the effort which has gone into the server space, it has been a tricky path, with various high-profile projects being seemingly shelved, like Centriq. The two paths of least resistance for initial RISC-V adoption are the deeply embedded space, cases where people previously used custom internal ISAs, and the team who wrote the compiler for it left a decade ago and it's a complete pain to maintain, or maybe they're using ARM and have reasons to look at something else; and cases where people want to do customization of the ISA. The Linux-capable and server side is perhaps a longer-term goal. In the server space, the path of least resistance is probably custom accelerators. Startups like Esperanto are working on that, with chips containing huge numbers of RISC-V cores and very high-performance floating point engines.

Participant 3: Could you talk about the industry adoption of RISC-V so far? Or, can you buy them right now, the processors? What does the roadmap look like?

Bradbury: SiFive is one of the most well-known companies producing RISC-V chips. They are primarily an IP vendor: they sell their IP and help other companies produce tape-outs, but they have development boards available. There's been some adoption in companies, though more of it has been R&D; you can find presentations from companies like NXP, Google, and others. It's early days. There have been some commitments, and RISC-V cores have been introduced as small parts of larger systems, but nothing which is both very high volume and user-facing.

 


 

Recorded at:

May 24, 2019
