
Stephanie Weirich on Dependent Typing, Extending Haskell, Type System Research
Recorded at Code Mesh 2015 | Interview with Stephanie Weirich by Werner Schuster on Feb 05, 2016 | 28:53

Bio Stephanie Weirich has been a functional programmer since 1993 and a Penn faculty member since 2002. She's crazy about types. GitHub: sweirich Twitter: @fancytypes


1. We are here at Code Mesh 2015 in London, I am sitting here with Stephanie Weirich. Stephanie, who are you?

Hi, thanks for inviting me for this interview. As you said, I am Stephanie Weirich. I am a professor at the University of Pennsylvania, where I do research in the design of type systems for functional programming languages. A lot of my recent research has been about the Haskell programming language.

Werner: So what’s still to do in type systems? I thought it was all solved with Hindley-Milner, all done in the seventies.

All done in the seventies. There are actually a lot of exciting things going on in type system research. One of the most exciting buzzwords you might hear about is dependent types, and the basic idea is trying to capture more invariants with your type system. The Hindley-Milner type system is great: it’s fast and it catches a lot of bugs at compile time, but it doesn’t catch all bugs. Haskell programmers like to say “It type checks, so it must work”, but I must admit I have had one or two bugs in my programs even though they type checked. So what we would like to do is look for sources of inspiration in the much more expressive type systems out there and see if we can bring them into languages like Haskell, so that people can encode some of their more application-specific invariants in the types and have the compiler check those as well as more universal properties.

   

2. Are these completely different type systems or is it all based on Hindley-Milner or something else? How should we look at this?

You can kind of view Hindley-Milner as a restriction of some of these type systems. A lot of the inspiration I look at comes from dependent type theory, and this is a very expressive type system. Hindley-Milner is limited in that quantification must be at the top level, whereas in dependent type theory you can have quantification everywhere. Dependent type theory also has dependency, which allows programs to appear in types, so you can even quantify over program variables in your types. So these are new kinds of quantification that just haven’t been available to functional programmers in the past.

Werner: And so you say Hindley-Milner was a restriction. Was that on purpose, to make the type checker fast? Or was that just the state of the art back then?

It’s hard to say; it came out of different lines of research. In practice it had the property of full type inference: with the Hindley-Milner type system, when you are within that set, you can type check any program without any type annotations whatsoever. The compiler needs no help from the programmer. But we have found over the years that programmers actually are willing to write type annotations and help the compiler out. That has enabled a lot of extensions to Hindley-Milner in the past, like polymorphic recursion or higher-rank polymorphism, and we would like to push that even further.

   

3. What are these annotations? Is it just saying that this is a certain type or is it something else?

So far it’s just been “this is a certain type”: I’m not going to let the compiler infer the type of my function, I’m going to actually write down what the type is. Or maybe for a local definition, a local function, I may have to write down the type, and that’s all that’s really necessary. The hard part is finding the delicate point: what is enough type information for the compiler to do effective type inference, without being too much of a burden for programmers, so they don’t have to write type annotations everywhere.

   

4. How can you help out the compiler? Isn’t type inference nice, so you don’t have to type the same type fifty times over? How can the compiler make use of this extra information?

Let’s take the example of higher-rank polymorphism: this is where you might have a function that takes an argument that has a polymorphic type. The Haskell compiler GHC supports this as long as the function that takes a polymorphic argument has a type annotation. Without that annotation it can’t infer that the argument needs to be polymorphic, because it’s just looking at all of the uses of that function to figure out its type, and it can’t look at all of those uses and figure out how to generalize that to a more general form. By providing the annotation, you give it exactly the hint: “This is a general function, this is something that works at any type”, and then it can check that all of those uses are consistent with that type. That’s an easier problem to solve.
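A minimal sketch of what this looks like in GHC, using the RankNTypes extension (the function name is illustrative, not from the interview):

```haskell
{-# LANGUAGE RankNTypes #-}

-- 'applyToBoth' takes a polymorphic argument: 'f' must work at *every*
-- type, so the quantifier sits to the left of the arrow (rank 2).
-- Without this annotation, GHC cannot infer that 'f' is polymorphic.
applyToBoth :: (forall a. a -> a) -> (Int, Bool)
applyToBoth f = (f 1, f True)

main :: IO ()
main = print (applyToBoth id)  -- (1,True)
```

Passing a monomorphic function such as `not` here is rejected at compile time, because it does not work at every type.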

Werner: You mentioned dependent typing; that’s a big topic. In 2015 it’s what monads were five years ago: today everybody talks about this.

Yes, dependent types is a big topic. There are a lot of different definitions of exactly what dependent types really are, and there are some really beautiful languages out there that are based on dependent type theory, like Agda or Idris, or the Coq proof assistant, which can be used as a programming language. These languages come out of a history of developing proof checkers based on dependent type theory used as their logic, but this logic is a computational logic: it’s got a functional programming language at its core, and so they also work well as programming languages. But they were designed as logics, and while the dependent types in these languages are very expressive, they are a little bit different from something like Haskell; in many cases you can’t write nonterminating functions, because nontermination leads to an inconsistent logic. So if you want to work with things that you haven’t quite proven terminating, you have to work around those restrictions. I’d like to bring some of those ideas to Haskell, where you work with normal functional programming from the beginning and you work in a more incremental way: you add more types and more type information to gradually build up more knowledge about your program.

   

5. What information is that then? What do you add?

One thing Haskell includes is generalized algebraic data types (GADTs). This is an extension of algebraic data types that I worked on ten years ago with Simon Peyton Jones and my students Jeff Washburn and Dimitrios Vytiniotis. This extension allows you to take a data type and add indices to it, and these indices can give some dependence in the type: different constructors can have different types in these indices. So the actual type that you are working with kind of depends on which data constructor you have. What this does is give a connection between runtime and compile time. When you pattern match on one of these GADTs, you are giving information to the type checker: you’ve determined that this value was a member of a particular data type that had this index, and when you find out what the constructor is, you also find out what that index is. That’s been a way to reflect runtime information into the type checker, so the type checker can do more flow-sensitive analysis. This is a new feature; it’s not something that the Hindley-Milner type system has been able to talk about in the past.
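The idea can be sketched with the classic typed-expression example (an illustrative sketch using GHC's GADTs extension, not code from the interview):

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT whose index records the type of value the expression denotes:
-- each constructor fixes the index 'a' differently.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines the index: in the 'IntLit' branch the type
-- checker learns that 'a' is 'Int', so returning an Int is well typed.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))
-- prints 3
```

Matching on `IntLit` is exactly the "runtime information reflected to the type checker" she describes: learning the constructor tells the checker what the index is.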

   

6. These indices, are they abstract terms or are they actually numeric indices?

I call them indices only because they vary, but actually they are anything from the type system; they are any type. And since we introduced this feature, we’ve also increased what is in the type language, what is allowed to be considered a type. At first types were just data types and numbers and Booleans and functions, but people started creating empty data types that had no data constructors, just so they could use them as tags for these indices. I’m going to create a data type called Zero; it has no data constructors, but I’m going to use it to act like the number zero. Now suddenly I have the number zero in my types, and I can use it to keep track of the length of a list or the height of a tree.
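A minimal sketch of that empty-data-type trick, combined with a GADT to track list length (illustrative names; uses GHC's GADTs extension):

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Empty data types used purely as type-level tags, as described above:
data Zero
data Succ n

-- A list whose type index tracks its length.
data Vec n a where
  Nil  :: Vec Zero a
  Cons :: a -> Vec n a -> Vec (Succ n) a

-- 'safeHead' only accepts non-empty vectors: calling it on 'Nil'
-- is a compile-time type error rather than a runtime crash.
safeHead :: Vec (Succ n) a -> a
safeHead (Cons x _) = x

main :: IO ()
main = print (safeHead (Cons 'a' Nil))  -- 'a'
```

Here `Zero` and `Succ` have no values at all; they exist only so the index of `Vec` can "act like" a number.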

   

7. But to come back to the example of zero, do you have to basically build the natural numbers from that?

Yes. Since that time we’ve introduced something called data type promotion, which allows you to take data type declarations from normal Haskell, from the normal expression language, and promote them up to the type language. That allows you to reuse these declarations in both contexts. So if you already have a Zero, if you already have a definition of natural numbers, you can use those constructors now in your types. And then the natural number data type becomes a data kind; the kind is a description of a type, just like a type is a description of a term.

Right now in Haskell, expressions have types, and you do type checking to do consistency checking on the expressions, to rule out bugs. But it turns out that there is another language in Haskell: the types themselves form another language, and that language also has its own description, which is the kinds, and the kinds act in the same way to the types as the types act to the terms. Haskell programmers might already be familiar with kinds: things that are a Monad, like lists, have a different kind than types like Int or Bool. But it doesn’t have to be just those kinds; data type promotion gives us a very rich vocabulary of kinds, like Nat and Bool, in the type language itself.
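With GHC's DataKinds extension, the promotion she describes looks roughly like this (an illustrative sketch):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- An ordinary data type in the expression language...
data Nat = Zero | Succ Nat

-- ...which DataKinds promotes: 'Nat' becomes a kind, and 'Zero and
-- 'Succ become types of that kind, reusable as indices.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

vhead :: Vec ('Succ n) a -> a
vhead (Cons x _) = x

main :: IO ()
main = print (vhead (Cons (42 :: Int) Nil))  -- 42
```

Compared with the empty-data-type encoding, the index `n` now has kind `Nat` rather than `*`, so nonsense indices like `Vec Bool a` are ruled out by kind checking.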

Werner: So kinds are to types what types are to expressions. Let’s go a bit further.

You could ask: do we need kinds of kinds? Well, in that direction you can keep on going further, and this is what languages like Coq and Agda do: they don’t have separate words for each level, they just have a hierarchy of levels. They need to do this for consistency, but Haskell is not a logical fragment, so that consistency is not so important. One really exciting research project I’m working on with my student Richard Eisenberg right now is: let’s not make so many distinctions, let’s take the types and kinds and combine them together. He has a new branch of the Haskell compiler that actually does this collapsing, where types and kinds are just one language altogether, much in the same way a dependently typed language might be.

   

8. With a dependently typed language can you, say, remove this distinction between types and terms? Is that one way of putting it?

In a full-spectrum dependently typed language like Agda, that goes even further: they collapse the distinction between terms and types and kinds and so on. It’s all very uniform; you can write functions in your types that compute types, just like you can write functions that compute values. That’s what is so powerful about dependent types: the type system becomes a programmable tool for you. But doing that full collapse also comes with some cost. It’s difficult for the compiler to know what needs to be compiled, what needs to be around at runtime, and what is only there just to make the program type check. Haskell has this distinction between types and terms, and it’s really important to Haskell, because all types are erased during compilation; that makes Haskell programs efficient, and it also gives Haskell properties like parametricity, so we can reason about Haskell programs just based on their types. I don’t think we really want to lose that distinction completely in Haskell; I think we want to keep that around. But I do think we can draw on those ideas all the same and bring them into the type system.

   

9. It is interesting to hear that with dependently typed languages it’s hard to know what’s compile time and what’s run time. Is that correct?

Of course there are solutions for this, where you can add extra things to your type system that distinguish between things that are erased and things that shouldn’t be erased. The Hindley-Milner type system says: because it’s a type it must be erased, and because it’s a term it must be around at runtime. But those are really two different questions: whether it’s a type, and whether it’s erased or not. So there are ways of extending the type system where you can add that kind of description to the type language, and then you can still have all of the dependency but still keep track of what’s necessary at runtime. That’s one way, at least: a very expressive language that gives the programmer full control over what’s around and what’s not. This is a separate project, but I worked on a project called the Trellys project where we were looking at designing a dependently typed programming language, and this is the approach that we took in that context: we made a distinction in the type system between arguments that had to be erased and arguments that had to be compiled. Another approach, taken by Epigram and Idris, is to let the compiler figure it out. This could be a compiler optimization, where the compiler can observe that certain things are not actually used, they are only used parametrically, and so it’s safe to remove them. This is a simpler approach for programmers, because programmers don’t have to think about it, but it also means that programmers have less control.

Werner: So they might have to fight the compiler less.

Some parts of this, if you are going to do it very well, might have to be a whole-program analysis, and that’s always difficult because it can make compilation very slow. Why is it a whole-program analysis? If you think of a data structure: do you need to keep all the components of that data structure around at runtime? You can only tell that by looking at all the uses of that data structure. If it’s a whole-program analysis, you can look at every use and see: ok, maybe this argument is only a specification argument, I can get rid of it. If you don’t want to do it as a whole-program analysis, if you want to have a more compositional specification, then you need the programmer to tell you whether any client of this data structure is allowed to use that argument. Once the programmer has annotated the types, saying this argument is erasable and this argument is not, then the type checker can check to make sure all uses are consistent with that.

   

10. In your research you have standard Haskell with ways to do dependent typing. How can our audience look at that? Or how can we use that?

I’ll be giving a talk later today [at Code Mesh 2015], and I am showing some examples. The talk draws on features that are already in the latest version of Haskell: generalized algebraic data types; type families, which are a way of writing a function in the type system; and data type promotion, which I talked about, which allows you to use data types in your types. Those features are already there. If viewers are interested in trying out the new version I was just talking about, where the types and kinds are combined together (and that does extend the expressiveness of type-level programming), my student Richard Eisenberg has a branch of GHC on GitHub where it’s available, and they can play around with the type system extensions. He’s currently working on merging that branch back into GHC, so it shouldn’t be much longer before all Haskell programmers can play with it.
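To give a feel for what "a function in the type system" means, here is a sketch combining the three features she lists: a GADT of length-indexed vectors, a promoted Nat kind, and a closed type family computing the length of an append (illustrative names, not code from her talk):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A closed type family: a function that runs at the type level.
type family Plus (m :: Nat) (n :: Nat) :: Nat where
  Plus 'Z     n = n
  Plus ('S m) n = 'S (Plus m n)

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The type of 'append' states, and GHC checks, that the output
-- length is the sum of the input lengths.
append :: Vec m a -> Vec n a -> Vec (Plus m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

toList :: Vec n a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (append (Cons 1 Nil) (Cons 2 (Cons 3 Nil))))
-- prints [1,2,3]
```

If `append` dropped or duplicated an element, the result length would no longer match `Plus m n` and the definition would be rejected at compile time.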

   

11. Ok. So will this be in the official Haskell standard, or is it just GHC?

Just GHC; we plan for it to be in GHC 8, the next major release of GHC. Many of these extensions I just talked about are not part of the official Haskell standard; they are all compiler-specific extensions.

   

12. So you probably won’t want to write your next big enterprise application in it just yet?

Many people do use many language extensions in their enterprise applications; they go to Haskell because they want to use cutting-edge language research, because they see an advantage there. Working with these extensions, I think it’s sometimes hard to tell which ones are really experimental and which ones have been around a while and just haven’t been standardized; the language standardization processes are very slow, and maybe it’s hard to understand how they work. Something like GADTs we’ve been using for ten years now, so I think we are pretty comfortable with them and how they interact with the rest of the language; they are a very mature language extension, and I don’t think people should worry about working with something like that. This types-and-kinds merging is not even merged in yet, and I think it will take some time to figure out what its implications are. But GHC is a wonderful tool for this kind of language research. We work very hard on these language extensions to make sure that they make sense and that they are consistent with each other, so we do a lot of formal modeling, with proofs, to make sure that we are not breaking the type system when we bring them in. Another condition is that they have to be simple enough that we can implement them in this large existing code base that is GHC; that is another gate that prevents language extensions from going in, so there has to be a certain amount of simplicity before things are incorporated. And then the third gate is: is it useful? That we can’t really tell until we release it and see what people do with it. We have our own ideas, we put out examples saying here are some cool things, but I am always surprised by what people are able to do with these extensions; they use them in ways that I can’t even imagine.

   

13. That’s always good. How do you build these extensions? Do you actually have to fork GHC, or is there another way? Do you have plugins? How does this work?

Right now it’s GHC forks; you do have to change the compiler for these type system extensions. I generally work at the level of the Core language: the full Haskell language compiles to an intermediate language called Core, and that language is pretty small, it only has a few constructs in it. In Core it’s easier to understand how the type system extensions interact with the rest of the language, because it’s so small; the things that are there for the convenience of the programmer, like type inference, are gone, so it’s a very explicit language. Many of the special forms have all been compiled down to the same thing, so I don’t have to think about all the different cases. A lot of the work is first adding the extension to Core, so that the Core type checker knows about it, and then figuring out how to extend type inference and the parser and everything on top, so that it becomes exposed to programmers.

Werner: So it’s all throughout the system basically.

Yes. I do want to say that there is a new system for plug-ins for GHC. Iavor Diatchki has developed a way of plugging in constraint solvers to interact with the type checker. You might have your own domain of constraints for a particular application, and you might have a special-purpose solver, maybe some extra knowledge that the type checker could use. The type checker is trying to determine whether these types are equal to each other, and you might know that in certain situations these new constants interact with each other in a certain way. For example, he has a constraint solver for type-level numbers: right now you can have type-level numbers, but you don’t get all the identities that you want. You might know that X + 0 = X, but the type checker doesn’t know that 0 + X is equal to X, so you need a constraint solver to help you with that. I think this is a really exciting feature for GHC, and I’d like to keep experimenting with that as well: trying to figure out whether we can develop new interesting theories, build libraries on top of those theories, and have very strong type inference based on these external solvers.
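For context, GHC ships type-level numeric literals in GHC.TypeLits, and the built-in solver can reduce fully concrete arithmetic; it is equalities involving variables, such as 0 + x ~ x, that motivate the plugin solvers she describes. A minimal sketch of the concrete case:

```haskell
{-# LANGUAGE DataKinds, TypeOperators #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (natVal, type (+))

-- With only concrete literals, GHC's built-in solver can check
-- that the type-level sum '2 + 3' is equal to '5'.
five :: Proxy (2 + 3)
five = Proxy :: Proxy 5

main :: IO ()
main = print (natVal five)  -- 5
```

A signature like `Proxy (n + 0) -> Proxy n`, with `n` a variable, is where the built-in solver gives up and an external constraint-solver plugin can step in.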

   

14. That sounds really interesting. So if you weren’t working in Haskell, let’s say Haskell disappears for some reason, which language would you like to work on? Or what would you like to do research with?

What would I like to do research with? That’s an easy question, because there are many other really interesting dependently typed research languages out there, so I would probably be working with Idris or Agda or Coq or some variant of those, or Trellys, which was a research language I was working on a while back. It’s always fun to design languages from scratch, and to take new ideas that are out there that have resisted adoption because it’s hard to introduce them to a large user base. But at the same time it’s really fun to work with a language like Haskell that has a huge community, where your ideas can have such impact.

   

16. We are currently looking at dependent types, but in five years, what’s the next big barrier?

What’s going to be next? I think we are going to see a lot more truly verified systems out there, and I think we are going to see types in more places than we’ve seen them in the past. Dependent types have this natural affinity for functional programming languages, but as we get better at type systems, I think we are going to see them showing up in more low-level programming languages. Dependent types are a way that we are going to get there, because you can write programs in your types to capture these idioms for working with memory or working with concurrent code, and have the type system make those distinctions for you, so that you know which parts of your code are pure and which parts are not pure. I think we are going to see more of that, and I think we are going to see more things like compilers and operating systems written in dependently typed languages, so we can start reasoning about those. We already have some really great examples, like the CompCert compiler, which is fully verified and implemented in the Coq proof assistant; there are some operating systems coming out, like CertiKOS from the Yale group; and we have some languages that are embedded in Coq, like Bedrock, which is a low-level programming language. These are really exciting areas, and I think it’s really neat to see how type systems give us this lever for doing our verification along with our programming and program design. It’s not this after-the-fact thing of “ok, we wrote it, now let’s prove it correct”; it’s fully integrated into the language, so that if you need to reflect on the proof of correctness as part of the implementation, you can; that’s available for you, and in some ways that can often simplify your development. It gives you dynamic checking, but the type system ensures that you did the dynamic checks that had to be done, for example.

Werner: Interesting. So there’s lots to look out for.

There’s tons to look out for.

   

17. Where can we find your publications? Can we just Google for you?

You can Google me and you’ll find my website [Editor's note: we googled it for you: http://www.cis.upenn.edu/~sweirich/ ]. All my publications are on my website, and you can play around with GHC. I have a lot of examples of dependently typed programming on my GitHub page; if you have other examples I’d love to see them too, send me a pull request. There’s a lot out there.

Werner: Thank you Stephanie.

Thank you.
