Bio: Lennart Augustsson was previously a lecturer at the CS Department at Chalmers University of Technology and currently works for Standard Chartered Bank. His research field is functional programming and implementations of functional languages. He is the author of the Cayenne programming language and the HBC Haskell compiler.
I was in academia for about 15 years and worked on implementing functional languages. I wrote the first Haskell compiler, and around 1995 I started as a consultant doing various language-related things. At the moment I'm working for a bank, Standard Chartered, using Haskell. I've tried to use Haskell all the time since I left academia.
I guess what I'm known for depends on who you ask. Some people would say I'm known for designing and implementing regular programming languages, but I've done a few DSL things too. I think we should distinguish between DSLs that are sort of real, stand-alone languages, and embedded or internal DSLs, which some people seem to be quoting me on these days.

For regular DSLs, I think Haskell is a great language, because Haskell is really made for symbolic processing and that's what compilers are all about – lots of symbolic things that you need to do. Haskell is a very good language for implementing any kind of programming language, DSL or otherwise.

When it comes to making embedded DSLs, again I think Haskell is a good language, because it has some of the properties that a good host language for embedding should have. It should have fairly lightweight syntax, because you don't want lots of keywords and dots and parentheses getting in your way. When you're making a DSL you want it to be readable for the domain people, and Haskell is very lightweight – you can design your own syntax, new operators, and so on. Another thing you want from a host language, if you want to do embedding, is an easy way of creating closures, because if you want to make a new control construct – a while or an if or something like that – there are parts of it that are not always evaluated, so you have to suspend the evaluation of some parts until you know that you need them. There are different ways you can do this in different programming languages, and in Haskell it's trivial, because Haskell has lazy evaluation: nothing is evaluated until you need it. You don't need to do anything special when you make new control constructs in your DSLs. I think Haskell is a very good language for that, too. You can also mold the type system of Haskell to fit your DSLs.
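As a concrete illustration of the lightweight-syntax point, here is a minimal sketch of an embedded DSL in Haskell. All names here (Expr, eval, the .+. and .*. operators) are hypothetical, invented for this example; the point is only that custom infix operators and fixity declarations give the embedded language a readable surface syntax.

```haskell
-- A tiny embedded expression language: plain data constructors plus
-- custom infix operators for a lightweight surface syntax.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- Fixity declarations make .*. bind tighter than .+., just like * and +.
infixl 6 .+.
infixl 7 .*.

(.+.) :: Expr -> Expr -> Expr
(.+.) = Add

(.*.) :: Expr -> Expr -> Expr
(.*.) = Mul

-- The semantics of the little language is an ordinary function.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- Usage: reads almost like ordinary arithmetic.
example :: Int
example = eval (Lit 1 .+. Lit 2 .*. Lit 3)  -- 7, since .*. binds tighter
```

The choice of fixities mirrors the host language's own arithmetic, so domain users can read expressions without learning new precedence rules.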
3. As you said, with the increasing popularity of DSLs, we hear about DSLs being implemented on top of Ruby and Smalltalk. There is a claim that DSLs are implemented in these languages because they are dynamic and have dynamic features built in. We know that Haskell is statically typed – does this make it less suitable for DSLs, in your opinion?
I can't really tell, because I've never done a DSL in a dynamically typed language. I think it's very easy to do in Haskell, in a statically typed way. I like to think about types and I like typed languages, so if you want to make a typed DSL, it's natural to do it in a typed host language. I don't find any particular problems with types.
The most commonly used Haskell implementation, the GHC compiler, has enough features that you can do the meta-things that you need. But you don't always need meta-anything to do a DSL. Maybe you need it if you do it in languages like Ruby, if you need to monkey-patch your objects, but I've done DSLs in nothing but standard Haskell – it's just a number of functions that you define to make your little sort of language, a DSL or an API or whatever you want to call it. You combine them and there is nothing strange about it.
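To illustrate the "nothing but standard Haskell functions" style, here is a hedged sketch of a shallowly embedded DSL for 2-D regions. Every name (Shape, circle, translate, union) is hypothetical and chosen for this example; the whole "language" is just ordinary function definitions that users combine.

```haskell
-- A shallow embedding: a region of the plane is just a predicate.
type Shape = (Double, Double) -> Bool

-- Primitive: a disc of radius r centered at the origin.
circle :: Double -> Shape
circle r = \(x, y) -> x*x + y*y <= r*r

-- Combinator: shift a shape by (dx, dy).
translate :: (Double, Double) -> Shape -> Shape
translate (dx, dy) s = \(x, y) -> s (x - dx, y - dy)

-- Combinator: a point is in the union if it is in either shape.
union :: Shape -> Shape -> Shape
union a b = \p -> a p || b p

-- Usage: two unit discs, one at the origin and one at (3, 0).
scene :: Shape
scene = circle 1 `union` translate (3, 0) (circle 1)
```

There is no interpreter and no syntax tree here; "running" a program in this little language is just applying a function to a point.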
5. Today's mainstream examples of DSLs are mostly focused on syntax. There is a lot of emphasis on making a DSL more readable, but much less emphasis on the semantics of the language. Some people argue that weak semantics make a DSL less extensible or flexible. What do you think about that?
If you don't get the semantics right, what is the point in making a DSL? If you just have the syntax, there is nothing you can do. The language has to implement the semantics that the domain people expect to have. I just take for granted that there should be some kind of correct semantics – whatever that is – and only the domain experts can really tell what they expect from the language.
6. In some DSLs, each time you want to add functionality, you have to add syntax. When a DSL is designed well, it just works, because it has certain mechanisms that let it be extensible.
If you have an embedded DSL, you have the host language at your disposal, and all languages I know of – all reasonable ones at least – have some way of defining functions or procedures or whatever, so you can use that to make extensions to your embedded DSL. If you have a stand-alone DSL that isn't embedded in anything, then of course, if you want to extend it, you have to have some kind of extension mechanism. Whether that's good or not depends a lot on what the domain is and what your domain experts can be trusted to do. You might not want to give them, say, loops, because then they could write infinite loops, and maybe you don't trust them with that.
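A small sketch of that extension mechanism, under assumed names (Check, atLeast, atMost, andAlso, between – none of these come from the interview): a user extends the embedded DSL simply by writing an ordinary host-language function over the primitives, with no new syntax at all.

```haskell
-- A tiny embedded DSL of validation checks over Ints.
type Check a = a -> Bool

-- DSL primitives.
atLeast :: Int -> Check Int
atLeast n = (>= n)

atMost :: Int -> Check Int
atMost n = (<= n)

-- DSL combinator: both checks must hold.
andAlso :: Check a -> Check a -> Check a
andAlso p q = \x -> p x && q x

-- A user "extends" the DSL with a plain Haskell function definition:
-- 'between' is a new construct built from the existing primitives.
between :: Int -> Int -> Check Int
between lo hi = atLeast lo `andAlso` atMost hi
```

Because the extension is just a function, it gets the same type checking and the same composability as the built-in primitives.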
7. When we think of embedded DSLs, we think about reusing concepts and composition mechanisms from the host language. To what extent do you think we can abstract away from the host language?
We can abstract away from some things, but since you are inside the host language, the things you can do there are very difficult to get away from. If your host language allows recursion, say, then users can make a new recursive definition, and they might mess it up when they make it, and there is no reasonable way to take such abilities away from a host language. It is what it is: it's very difficult to take things away, and easy to add things. I guess that's an argument for not making an embedded DSL, but making a special-purpose language instead.
8. Certain abstractions observe recurring problems and capture them in a general way, introducing operators or language constructs – which, some people argue, is better for the learning curve. With DSLs we go the other way and get specific: for each domain problem we introduce a language. Do you think abstractions like monads and DSLs are on opposite sides? Or can we use, for example, monads for building DSLs, and DSLs as abstractions that apply to many domain problems?
In the embedded DSLs that I've made in Haskell, I've used monads as a mechanism to build the DSL without telling the users what they are really using. It's just some language construct; they don't really know how it works and they don't need to know. I think it's a useful abstraction to have in the host language. Whether you want to give the full power of it to the domain experts depends a lot on what kind of domain it is. I mean, are these domain experts mathematicians, like lots of the people were for one of the languages I did? Then you can give them all kinds of stuff that you wouldn't want to give to someone who writes Swedish tax rules, which was another DSL I've been involved in. I think it all depends on the domain.
9. For example, when I use a monad, I know the syntax – you tell me "Use this syntax" and right away I know how to do it. It is the same abstraction in some way, whereas with a DSL I have to learn it.
If you want to do these 3 things, you write "do" and then three lines with the three things you need to do. There is nothing much to learn. You might get scared by the error message you get if you do something wrong, but it's learning a pattern for writing something, and I don't think that's very difficult – learning how it actually works might be quite tricky.
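A hedged sketch of that pattern: users of this hypothetical Script DSL write a do-block with one line per step, without needing to know that a monad (here, a hand-rolled writer-style one) sits underneath. All names (Script, runScript, say, greeting) are invented for illustration.

```haskell
-- A Script collects a log of strings alongside a result.
newtype Script a = Script { runScript :: ([String], a) }

instance Functor Script where
  fmap f (Script (w, a)) = Script (w, f a)

instance Applicative Script where
  pure a = Script ([], a)
  Script (w1, f) <*> Script (w2, a) = Script (w1 ++ w2, f a)

-- The monad instance threads the log through; users never see this.
instance Monad Script where
  Script (w, a) >>= f = let Script (w', b) = f a in Script (w ++ w', b)

-- The only primitive the DSL user is given.
say :: String -> Script ()
say msg = Script ([msg], ())

-- What the user writes: "do" and then one line per step.
greeting :: Script ()
greeting = do
  say "configure"
  say "build"
  say "deploy"
```

Running `runScript greeting` yields the log `["configure","build","deploy"]`; the user only ever learned the "write do, then your steps" pattern.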
Static types give you more error messages at compile time, no doubt – that's why they are there. Somehow these error messages seem to be scarier to people than runtime error messages. At least to programmers they are scarier – they don't quite understand them, whereas with a runtime error they get into the debugger and they can debug it. For someone who doesn't really do programming, but is just writing these domain rules, I think both of them will be scary. You get the scary error message at compile time, or you get some scary thing when you end up in the debugger at runtime, and I think you are equally confused by both. I'm not sure if it's better or worse for learning this kind of thing. As for safety, static type checking gives you certain safety guarantees that testing can never give you, because type checkers actually prove something about your code – that certain things cannot happen. Tests can't check that, short of exhaustive testing, which is not feasible for anything realistic. Testing can't show that certain things never happen; you can only see that certain things do happen. So there are some extra safety guarantees in having static typing.
When you want to define some new control construct, like an if or a while or something like that, there are parts of it that should only be evaluated sometimes. You need an easy way to suspend that evaluation until it needs to happen, and of course in Haskell that's trivial, but it's easy in some other languages, too – like Ruby or Smalltalk, where it's very easy to create blocks that suspend evaluation. So I don't think Haskell has a great advantage here. I don't think laziness is an essential property for making DSLs more readable, as long as there is some simple mechanism for suspending evaluation – it could be something more explicit than what Haskell has.
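A minimal sketch of such a user-defined control construct in Haskell (myIf and while are hypothetical names for this example): because arguments are evaluated lazily, the branch that isn't taken is never evaluated, so no explicit suspension mechanism is needed.

```haskell
-- An if written as an ordinary function. Laziness means the branch
-- that is not selected is never evaluated.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

-- A while-style loop over a pure state, again just a plain function.
while :: (s -> Bool) -> (s -> s) -> s -> s
while cond step s
  | cond s    = while cond step (step s)
  | otherwise = s

-- The 'undefined' in the untaken branch is never forced.
safe :: Int
safe = myIf True 1 undefined

-- Count from 0 up to 10.
countUp :: Int
countUp = while (< 10) (+ 1) 0
```

In a strict language the `undefined` argument would blow up before `myIf` even ran; the lazy semantics is what lets these constructs be ordinary functions.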
I think it applies equally much. It's important to come up with a nice set of primitives where you really understand what these primitives do and how they work together. Once you have this set of primitives, you can build new things on top of them, and they will have clear semantics if your underlying primitives are nice and simple and there are simple ways of combining them. It's just as important when defining DSLs, but there are other concerns, too. You also have to pay attention to what the domain experts want, and if they don't understand abstraction and don't want abstraction, then you shouldn't give them abstraction. If they want to repeat the same thing over and over again, let them repeat it. The customers should decide when you are doing domain-specific languages.
Haskell is about computing values – that's sort of the paradigm for computation. In Haskell, some of these values can be big, and some of these values also involve interacting with the outside world – it would be rather boring if you couldn't interact with the outside world in your program. You have to embrace the idea that computing values is the important thing when you use any functional language, if you want to use it in the proper functional way. It's not about assignment and updating things anymore, it's about computing values. Whether that helps solve all the world's problems, I'm not sure. I think functional languages really help you be more productive. I think it's a good thing, but we'll have to wait a few years and see if it actually works out for everyone.
14. Is Haskell a language that is ready to become mainstream? Or is it just a research language to take inspiration from – can it be used in real projects? I guess you use it in real projects, but everyone asks this question.
I think it can absolutely be used in real projects. It's a very stable implementation with a stable runtime system, and it's very easy to call out to any kind of library written in C, or anything that at least has a C API, so you are not confined to just using Haskell – you can call any kind of language you want. On the whole, I think it's a quite mature language. It's been about 20 years now since the Haskell effort first started. If someone wants to try it for real, they shouldn't be afraid, because it's not just an academic language or anything like that; it works just fine in industrial settings. I wrote my first Haskell program that was in commercial use in 1995, so it's been possible to use it for real things for quite a while, but it's better than ever.
I think it was used for 6-7 years. I didn't maintain it. It was maintained by other people for 5-6 years and then they did a complete rewrite of this system and then, what I've done in Haskell got folded into their compiler. This was for another domain specific language, by the way - to do airline crew planning. It's now rewritten in C++, I think.
There are a couple of them that I think are really rising: Scala and F#. Now that F# has real backing from Microsoft, we are going to see a lot of F# stuff. Something really nice about F# is that it fits very smoothly into all the .NET things, all the libraries and so on – it's well integrated. If you are in the Microsoft trap, if you want to call it that, where you can only use Microsoft stuff, then I think F# is a very nice language. On the other hand, if you are in the Java camp and you need to deploy to a JVM, I think Scala is a great language, because it compiles down to JVM bytecode and can interact with all the Java libraries and so on. Those 2 languages are something to watch; they are going into the mainstream. I should mention Erlang as well – it might take off, too – but I think the fact that F# has Microsoft backing is going to get it used, because it's just going to be there in Visual Studio, so people will try it out, whereas to try Erlang you have to do something yourself, and it might be frowned upon by someone because it is not a Microsoft product.
You don't need big teams when you program in Haskell. I haven't been involved in any Haskell projects that needed more than, say, 5 people working on the code base, and at that size there is no problem. But I don't know why Haskell should be any more difficult to scale than any other language; I think it has the same scalability properties as many other languages. You mean it's a bit more complex a language compared to some? In some ways it's a simpler language than almost any other language out there, but just because you understand the few building blocks that are there doesn't mean that you understand what you can build from those blocks. That's what makes Haskell a bit more complicated: there are fewer pre-built things, so you need to learn how to use the building blocks in the right way to build whatever you need.
Definitely not. I think Haskell is quite approachable. It has this reputation of being incredibly complex, as if you need a PhD in computer science or something to understand it. That's partly because it uses some terminology – monad, for instance – that is borrowed from mathematics. That doesn't mean the concept itself requires mathematics to understand; we just took the name because there was already a name for it. If we had called it something else, maybe people would be less scared, but since the people involved in designing Haskell know computer science and maths, it's natural to reuse the proper names for the things that are in there. It's not at all a difficult language to get started with. One interesting thing is that there is a book coming out very soon, called Real World Haskell, written by people who were not in the core Haskell community in any way – they are users of Haskell and they wanted to write a book about it. It shows that you can do all the usual kinds of things that you do in other languages, and it's not any more difficult. Yes, there is a threshold to get over to get into the Haskell way of thinking, but it's definitely something you can approach even if you know nothing about mathematics or computer science. Although maybe you shouldn't be programming if you don't know anything about computer science.
There are a couple of different ones for building web applications – one is called WASH, which is fairly simple and easy to get into. There is a very nice one that combines databases and a web server and everything, called HAppS. I've been told – I've never used it myself – that it's very powerful, but unfortunately there is really no documentation for it, which makes it a little less useful; still, it is something to look at.