
Panel: the Future of Languages


Summary

In this panel, programming language experts try to find the places where we might be talking past each other, and to find common ground.

Bio

Andrea Magnorsky is a Functional Languages Programmer. Noel Welsh is a founding partner at Underscore. Ashley Williams works on the Rust Programming Language and WebAssembly for Mozilla. Stephen Klabnik is on the core team of Rust and leads the documentation team. Ron Pressler is the technical lead for Project Loom.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

[Note: please be advised that this transcript contains strong language]

Transcript

Magnorsky: This is a panel about the future of programming languages, and I'm facilitating it. We might take some questions, we'll see how it goes, if you have burning questions, I'll be keeping an eye on your general locality, and then eventually maybe it will work. What we're going to do first is an introduction of all the panelists.

Williams: My name is Ashley Williams, I go by @ag_dubs on Twitter, as you may know me. I'm on the core team of the Rust programming language, and I am also on the core team of the Rust WebAssembly Working Group. Before that, I was on the Board of Directors of Node.js, and worked at npm, the package manager.

Klabnik: I'm Steve [Klabnik], I also am on the Rust core team, and before that, I worked on Ruby on Rails and I've just generally been a programming language enthusiast. When I went to college I set up my class load to be, how fast can I take the compilers class? This is a topic I've been interested in for a long time, and I've been lucky enough to work in it for the last couple of years.

Welsh: I'm Noel [Welsh], I'm currently a Scala consultant at a place called Underscore. I'm a big fan of programming languages, I've been studying them on an informal basis for a long time. I like compilers, I like interpreters, I like type systems; all that stuff is good stuff.

Pressler: I'm Ron Pressler and I work at Oracle on OpenJDK, where I'm leading a project called Project Loom, which intends to add delimited continuations and fibers to the JDK.

Magnorsky: The missing person, who will hopefully turn up, is Irina Guberman. She's an Erlang developer, she's been using Erlang for the last 10 years, and I was hoping that she could add her perspective.

Programming In The Future

The opening question for the panel is: imagine today is the 3rd or the 5th of March of 2025, you're still a developer, and the world is still spinning around, more or less. You go to do your job as a developer. What do you actually do? Imagine the future.

Williams: I think that might be a little early, I think I'd project my future more around the 10-year mark, but based on the technologies I just told you all I work on, I'm really excited, because I'd like to see the web platform expand pretty tremendously. We can think of things on the web platform no longer as networked applications necessarily. I work on WebAssembly, so I'd like to see us focusing on targeting WebAssembly primarily. We'll be writing in languages that compile to it, and your delivery mechanism is going to be your browser. We might not need to worry about operating systems as much anymore.

Welsh: Can you define the web platform?

Williams: You should have come to my talks, I spent a lot of time doing that.

Welsh: Could you give us a quick definition of the web platform? Are we talking about the browser as platform, or is it something different, distinct from or wider than that?

Williams: I define the platform as three things: an ISA, a runtime, and then tools. From that, the idea is that the web platform had not had an ISA, or instruction set architecture, for a while. Now we have WebAssembly, and so the browser was a window into the application, but with an instruction set architecture, we don't necessarily need that. You could use the browser as an operating system, or you could have a runtime, a WebAssembly runtime, do that for you.
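
To make that concrete, here is a minimal sketch, not from the talk, of treating WebAssembly as the ISA: a single Rust function compiled for the standard wasm32-unknown-unknown target, which a browser or a standalone Wasm runtime can then load and call (the function name is just an example).

```rust
// A tiny library crate compiled to WebAssembly rather than a native ISA:
//   cargo build --target wasm32-unknown-unknown --release
// The host (a browser or a standalone Wasm runtime) loads the module and
// calls the exported function; no operating system API is assumed.

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```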

Klabnik: It's also true, having just left Mozilla: when I joined Mozilla it was a little weird, people kept talking about the web platform all the time, and I'm like, "What does this extra platform word mean?" It turns out that all the browser vendors basically think about the browser APIs that way; there's a web platform team whose job is to work on those APIs and improve them.

Magnorsky: Anybody who wants to and is ready: in 6-10 years' time, what do you feel it will be like to be a programmer and do your job?

Klabnik: I think one thing that's interesting is that it takes a long time to make a language, and to make a language that people use. I'm glad we switched to the 10-year time frame, because I think it takes 5 years minimum to get that going. I don't know what languages are being made right this second that we would care about in 5 years. That's something I didn't appreciate for a long time: you think you just make a language and then people start using it, but it's actually a really long slog and takes a while.

I also think it's interesting that we've had these waves of programming language renaissance. 1995 was an awesome year for programming languages: you have Java, you have JavaScript, you have Ruby, all coming out at the same time. I think PHP was really close there too.

Magnorsky: I don't know, my favorite is 1972.

Klabnik: Yes, there were lots of good ones.

Williams: I think early '75 too as well.

Klabnik: Maybe. I think it is older than that actually.

Williams: I should pull up my graph.

Klabnik: Then we had this really long time where there weren't a lot of totally new languages. Then about 5 or 10 years ago we had this new explosion of languages and people are talking about, oh, there's all this stuff that's really cool. We're sort of on the tail end of that now, where some of those languages have succeeded and some of them have not, and whatever. I think that probably in 10 years, we're going to be in the middle of another one of those like, look at all these cool new options we have to do stuff with.

Welsh: I'll just answer the question directly, then maybe I'll put some questions to previous people, maybe that's stealing too much airtime. What I hope we'll be doing in 5 to 10 years' time: I think types are great, I think we did lots of good things with types in terms of avoiding problems in our code, so I'd like to see that knowledge more widespread and I'd like to see more interesting problems caught with type systems. We've seen Rust introduce what they call affine types, and there's the more general concept of linear types. Maybe those can become more widely used to solve a whole bunch of problems, and maybe there are more interesting things we can do, maybe along the lines of dependent types. That would be really good to see, basically solving problems along that axis.
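
As a small illustration of the affine-types point (a sketch, not from the talk): in Rust, a non-copyable value may be used at most once, and the compiler rejects a second use.

```rust
// Affine types in practice: each value may be consumed at most once.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let greeting = String::from("hello");
    let n = consume(greeting);      // `greeting` is moved here
    // let m = consume(greeting);   // error[E0382]: use of moved value, if uncommented
    println!("{}", n);
}
```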

Then I'd like to see if we can expand the reach of our programming languages. At the moment, particularly in compiled languages, you tend to have a very hard boundary between compile time and run time. I'd like to maybe not loosen the boundary, but maybe introduce more stages, because I think there are a whole bunch of interesting applications that don't need this really strict division between compile time and runtime. Can we bring back interactivity, can we bring back some of the ideas from Smalltalk? But can we bring them back in a controlled way, so we retain the benefits of having so much of our code checked by these type systems?

Pressler: I think we may see some changes in the runtimes, or the targets we compile to or run on. I don't think there will be much of a change in terms of programming languages, and to be honest, I personally don't think that programming languages matter all that much; in fact, I would say they matter very little. I am very much interested in formal methods, I practice them all the time, and I think that linear types are actually an outlier in the sense that they are very interesting. Dependent types, I'm pretty sure, will be a failure; as someone who does formal verification, I don't think they're a good approach, even 10 years from now. Right now we're using languages that are 20 to 60 years old. I don't think we're going to see much of a change in terms of language design, and if we do, I doubt it would make much of a difference.

Popularity of Programming Languages

Magnorsky: Since there is no fifth thing, and I do have something to say, I will add that some time ago I was talking to Edwin Brady, the creator of Idris, and I asked him why he did this Idris thing. Why would you bother? This thing will never get to Java version 300. He said he actually never thought that it would get very popular, but the thing is, he said that his research, and the same for all academic research, just trickles through slowly. This was a few years ago, and today popular languages are getting nice, slightly advanced features that before were considered completely crazy.

Klabnik: But does it make a bottom line difference?

Magnorsky: I think from a person writing code every day, yes, it does make some difference, because you are allowed, even in these primitive languages from whatever years ago, to reason in a slightly more streamlined way. I would say as a personal experience, yes, but I'm just one person. I'm not the one creating languages for millions of people to use, so obviously our perspectives are bound to be different, and that is completely great and totally fine.

Williams: Can I jump in there? Perhaps it doesn't count because Rust is not 20 years old, but as someone who has now learned Rust, the way that I feel like I program, and other people who program in Rust, the confidence that is felt, based on the static analysis that the compiler is able to do regarding memory safety, I do think is quite empowering. It makes it such that people are more willing to write programs that they might not otherwise feel able to write, because they can't do that work in their head, but the compiler can do it for them.
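
For a feel of the kind of analysis being described, here is a tiny sketch (not from the talk) of the compiler refusing a reference that would outlive the data it points to, so that reasoning does not have to happen in the programmer's head.

```rust
// This example is deliberately rejected by the borrow checker.
fn main() {
    let dangling: &String;
    {
        let local = String::from("short-lived");
        dangling = &local;       // error[E0597]: `local` does not live long enough
    }                            // `local` is dropped here, while still borrowed
    println!("{}", dangling);    // ...because the borrow is used here
}
```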

Pressler: Do you have examples other than Rust? Rust is a bit different because it was a low hanging fruit, C wasn't touched for a long time. Do you have examples of languages that do that other than Rust?

Williams: Rust is the primary one; we could talk about whether it makes a difference. This goes back to what you said about seeing different targets. The other example I would have is Node.js throwing JavaScript on the server, which did afford us a whole new slew of types of applications. You could say it empowered a bunch of folks who would otherwise probably have remained on the client to do server-side programming. Maybe it doesn't make a difference in the type of programs that they're writing, but it does make a difference as to who will be doing that programming, and I think it did really open up a large door for significantly more people to be involved in server-side programming.

Klabnik: Also, on the bottom line angle: you joke about being on a language panel and saying language doesn't matter, but that's actually a very mainstream view. I don't think you're necessarily totally wrong, but I think it also depends on the domain as well. Microsoft came out with this thing a couple of weeks ago where they analyzed the history of security bugs in their applications. Those have obviously cost Microsoft a lot of money over the years, and 70% of them were memory safety issues.

They're interested in reducing that through language techniques: promoting C# to be able to do more stuff it currently can't do, getting involved in Rust, and some other things. That's an example of language capabilities affecting people's bottom lines directly, although obviously it's hard to quantify whether a given bug actually cost you money or not; it certainly damaged Microsoft's reputation in the developer community for a really long time.

Williams: Also regarding the bottom line: if you think about who can do the programming, if you can have a programmer who is significantly more versatile, say, able to do something on the front end and on the back end, that also affects the bottom line of a company, because employees cost money. They're often one of the largest costs, so if you can have someone who's significantly more versatile, that will affect your balance sheet.

Klabnik: I do think it is really hard to quantify as well, which I think is a large part of this point. I joke sometimes that I didn't do LSD in college, I did Haskell, because it totally changed my perspective on the universe. Is that a quantifiable thing that I can demonstrate made me more valuable to my employer and made them more money? No. Was it personally very meaningful? Yes. Does that actually matter in the scope of things? That's a little harder to say. That's a big aspect of this attitude: if we're supposed to be scientists, we're supposed to be able to collect data about the truth or falsity of things. Programming language design is not a science at all, and we don't even know how to go about starting to make it one. I think that's a difficult thing about programming languages and their usefulness.

Magnorsky: There's a paper for that, it's called "The Structure of a Computing Revolution" or something like that. You might like to read it, because it is about how computing is not quite a science, but also about how you need the analytical experience of actually writing code to be able to write about code. You can't do computing science without actually writing code, because that doesn't make sense. It's much better than what I just said, just read the paper.

Klabnik: Well, first we have Dijkstra being angry, because that's what Dijkstra was all the time, but he once said that computer science has as much to do with computers as astronomy has to do with telescopes, and so it's interesting to think about, because this is totally a thing.

Small Programming Languages and Different Hardware

Magnorsky: Tangent upcoming. Two of you talked about things that I think are related, but you didn't present them as related. One of you discussed very small programming languages, or things that are maybe not perceived as programming languages but are, and someone else mentioned how, for various reasons, we can now use different hardware. Those two things are sometimes not treated as related, but I'd like to think that they are, but maybe that's just me, so I want to see what you make of it.

Now we have many kinds of hardware, that's one side, and the other side is small programming languages and their lifetimes and how they relate to the hardware they run on. I can make the leap if it sounds totally crazy.

Welsh: Multiple kinds of hardware: you want to abstract over them in some way. There's a problem there: different hardware presents different platforms, and they have different characteristics, and you want to be able to access those characteristics, otherwise you wouldn't bother creating different bits of hardware. I guess that's a counterpoint to the idea of putting the web platform, or any platform, anywhere: you need to be able to access these different features of the hardware. The GPU is not the same as the CPU, and there's a definite reason for that; you don't want to pretend they're the same, because then you lose the distinct advantages of both.

I think it's difficult to abstract over all your hardware with one language. Maybe we end up with lots of little languages, but the problem you get then is you have to learn different languages, and you have these boundaries between them where you have to try to reason across the boundaries, and it's hard. I'm not sure what the solution is there, but maybe I could see that we could have languages that encompass multiple bits of hardware, so you could have sort of GPU extensions of programming languages, and you could treat these heterogeneous systems as one system that you program. I could definitely see that as a great thing. I'm not sure if it's practical at this point in time, but it's a very interesting idea. Is that where you were going?

Magnorsky: There is some part of that, that was one point. I can see your point, it's homogeneity versus: we build these small languages that fit this machine and then you abstract on top of it, or the opposite. Both those arguments are possible, so I want to hear what you think, because I'm not the one making languages.

Welsh: One thing to add: it would be interesting to get the Java take on this. One example is that you embed some DSLs within an existing language. If you want to use a GPU, one way you can use a GPU is via TensorFlow, which basically lets you write Python; it's a little bit painful, it's actually a lot painful, depending on how you use it. You can pretend you're using just one system; that's one way you can do it, with DSLs. Can you create languages that allow you to extend them in certain ways, with macros and other solutions to that? Java currently doesn't have great GPU support; I don't know if that's something you [Pressler] can talk about, the plans there.

Pressler: Yes, there are plans. First of all, just to clarify, I'm not involved with designing the Java language, only the JVM and the core libraries. There was an interesting project led by AMD, I think, called Project Sumatra, which tried to automatically compile Java streams to run on the GPU; I think it was discontinued. Currently we have more modest goals with Project Panama, which just tries to make the Java FFI much easier so you won't have to use JNI. You can directly, and cheaply, call native code from Java, and that is basically where we're going. Then, on top of that, you'd be able to write your own DSLs or libraries, whether for Java or other languages running on the JDK.
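
Project Panama is a Java effort and its API is not shown here, but for a rough feel of the "call native code directly, without a JNI-style shim" idea, this is roughly what the equivalent already looks like in Rust: a hand-declared binding to the C library's abs, called across the FFI boundary.

```rust
// Declaring and calling a C function directly; no generated wrapper layer.
extern "C" {
    fn abs(input: i32) -> i32;   // provided by the platform C library
}

fn main() {
    // Calling across the FFI boundary is `unsafe` because the compiler
    // cannot check the foreign function's contract.
    let x = unsafe { abs(-7) };
    println!("abs(-7) = {}", x);
}
```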

The Halide Project and Its Effects

Participant 1: There have been a few academic projects, one called Halide, which concentrated on graphics programming, and one called Lift, based at the University of Edinburgh, I think, but with a much broader scope, that have tried, or are trying, to separate the expression of algorithms from the scheduling of them to target different bits of hardware. I wonder what effect the panel thinks that sort of thing might have in the long term on programming languages.

Klabnik: Alan Kay was working at this Viewpoints Research Institute, and it was similar in the sense that they were asking, "Could you build an entire OS and networking stack and text editor, a full computing environment, in under 20,000 lines of code?" That involved things like, "It knows how to read the TCP RFC, and then write its own TCP stack from reading it," which is vaguely in this similar vein, this separation of the declaration of the thing you're doing from the way in which it's actually implemented.

I definitely think that there's a lot of interesting, positive stuff in that design. I've been swinging back more towards the low level; previously I would have been, "Yes, who cares what computer it's running on?" Now that I've been in low-level stuff again lately, I'm, "How could you not care about all the details of the computer that it's running on?"

One awesome thing, and I think this is also true about this earlier question about “Do you have little languages or one big language?”, is that we don't have to make every computer the same, there can be some computers and systems in which you do stuff like that and it's awesome, and there can be other systems where you don't, and so we don't need to have one grand unified theory of the way computers work. I definitely think techniques like that would make for a really cool system, but I'm not totally sure how much stuff you can actually replace with that sort of thing, which I think is the job they're working on. Maybe that's not a really good answer.

The Importance of PL’s Diversity

Participant 2: Do we need more programming languages that are more different? Are the programming languages too similar, the ones we've got? I mean, should we experiment more with different things, or should we consolidate what we have?

Williams: I feel an enthusiastic yes to that, but I can't tell if my initial intuition is naive. I certainly know as a programmer, learning multiple programming languages has made me a significantly better and more robust programmer. It's undeniable, I learned Erlang five years ago and it blew my mind, I guess that was my college LSD experience.

What you're getting at is, are they too similar? There's this idea of programming paradigms, which I am quite a fan of: the kinds of solutions that a programming language will afford you, so you can solve these types of problems in this particular way. I personally think, if I got to pick a research project, I would love to see a declarative programming language that actually gets a new type of real use, and I bet I just made 5,000 people angry in saying that.

Magnorsky: Well, you don't like Prolog?

Williams: I do, and I think it's really cool, but a lot of people don't. A lot of people don't know what I mean when I say declarative programming language at all. I do a lot of education work, and just talking about those different ways a programming language can aid you in writing a solution is very interesting. I don't know how viable it is for the market, which I hate saying; I already feel dirty having said that.

Klabnik: I was going to say it, so I'm glad you said it.

Magnorsky: I think it will be really difficult. On a programming languages panel, maybe now is the time to say it: programming languages are being held back by the fact that we are terrible at teaching programming. We are so bad, and we are not going to see the kind of diversity I think we could really benefit from until we actually get better at teaching people how to program.

Magnorsky: I am super interested in your [Pressler] perspective here.

Pressler: I'd say absolutely yes. The reason why I said programming languages don't matter is not that they hypothetically can't matter; obviously Python is very different from assembly, and working in Python is obviously much more productive, partly because they're very different.

To me, and it may sound strange, Java, Python, Erlang, and Haskell are almost indistinguishable. They're all quite imperative, they have somewhat different rules, but they're all based on the idea that you have a certain program term that performs a computation start to finish. Rust makes a difference compared to C because there was some low-hanging fruit there and the domain hadn't been addressed for many years. It's going to get harder and harder to gain benefits from language design and, because of that, we have to start thinking outside the box.

I do have one example, which I think is a completely brilliant language, which unfortunately did not see much adoption, and that is Eve. Eve works completely differently. I like it also because it's a language designed for formal verification; it's based on temporal logic. Eve is not like Java and Haskell, which to me are almost the same language; Eve is different. I don't know for a fact whether Eve could make a bottom-line difference. Is it really three times better to work in Eve than in Java and Python and Haskell? But if any language has a chance to do that, it would have to be as different as Eve is.

Williams: I was about to maybe make a jokey comment that I think explains it, and by jokey I mean, I feel this very deeply, but I really love trolling people at conferences by saying that Microsoft Excel is the most popular and successful programming language that's ever existed.

Magnorsky: See Felienne Hermans' talks and papers on this.

Williams: My understanding is that Eve believed that very fully and took a lot of inspiration from it.

Magnorsky: Does someone want to explain in two minutes what Eve is? Maybe someone doesn't know what Eve is, or where it comes from, or what it does.

Pressler: In Eve you don't write instructions one after another, you don't compose functions; you compose, sort of, rules. It's a combination of temporal logic and logic programming. You basically define a set of rules for the program; it's like Excel.

Williams: Excel Macros.

Pressler: Yes. If the state of the program is this, then perform this transformation, and it's transactional, so Excel would be the closest analogy. One of the advantages that Eve gives you is the ability to define correctness properties that are global. For example, instead of putting an assertion in your program to say, whenever I hit this point in the program then such and such must be true, or adding a type signature for a subroutine saying that this is how the subroutine behaves, you can make an assertion that says: the state of the program is never such that there is an overbooking of the airplane seats. It's completely global, you say this never happens, and then you can verify it in multiple ways. You can obviously use model checkers, you can use testing, but it's a completely different way of composing and writing programs as a set of rules.
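
Eve's own syntax isn't reproduced here, but as a very loose, hypothetical approximation of that shape, the sketch below models state transformed by a rule, with a "seats are never overbooked" property checked against every state rather than asserted at a single program point; all names are invented for illustration.

```rust
// A loose, hypothetical approximation of "rules over state plus a global invariant".
#[derive(Clone, Debug)]
struct Flight {
    capacity: u32,
    booked: u32,
}

// A rule: "if there is a free seat, book it" (a transactional state transformation).
fn book_seat(state: &Flight) -> Flight {
    if state.booked < state.capacity {
        Flight { booked: state.booked + 1, ..state.clone() }
    } else {
        state.clone()
    }
}

// A global correctness property over *any* state, not an assertion at one call site.
fn never_overbooked(state: &Flight) -> bool {
    state.booked <= state.capacity
}

fn main() {
    let mut state = Flight { capacity: 2, booked: 0 };
    for _ in 0..5 {
        state = book_seat(&state);
        // In Eve such a property would be checked globally (e.g. by a model checker
        // or testing); here we just assert it after every transformation.
        assert!(never_overbooked(&state));
    }
    println!("final state: {:?}", state);
}
```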

Magnorsky: I just heard that and I did have a look at it a little while ago, but I just heard you and this is really scary, you just scared me a lot.

Williams: There's a lot of value in you actually bringing that up though, because, to bring you back to the education comment, a lot of times in programming languages we talk about ergonomics. Fundamentally, honestly, ergonomics usually comes down to familiarity, and that is making it very hard for us to innovate in this space. Languages that could potentially be more productive for people end up getting maligned as non-ergonomic simply because they lack that familiarity bridge for folks.

Klabnik: I wrote a blog post that received modest success, called "The Programming Language Strangeness Budget." Basically my argument is that you, as a programming language designer, if you want programmers to actually use your stuff, get two or three weird things max, and you get to pick what those weird things are; if you have more than that, people just aren't going to use your stuff. That's not a moral judgment inherently, but it's a statement that you can either go make your vision off in the corner, and then we'll talk about isn't it sad that Eve doesn't exist anymore but it was really cool, or you can be, this language is used by millions of people, but it's a very incremental improvement over the thing that currently exists.

Beyond the teaching aspect, part of it is because we as programmers go, that's too weird, I don't want to use it, and part of it is just because it takes so much time and effort to bring a programming language up to the level of quality that we expect out of our tooling that you basically need a company to do it these days; you need people. Rust only happened because Mozilla was willing to spend millions of dollars over 10 years to pay a group of developers to work on it. The "I made this programming language on my own in my spare time, and now it's used by everyone" story happened with Ruby and Python, but I think that era is just over.

The economics is really hard and complicated, and that also feeds back into this. Eve basically doesn't exist anymore because they took investment money and were not able to get a return on it. Now they all have to get day jobs to feed themselves. There are certain weird market realities that we have to deal with; that's also a really big problem with programming languages.

Welsh: I'm going to disagree slightly with that, because we need some disagreement on this panel. I think if you have a compelling enough platform, people will jump through whatever hoops you put in front of them to use it. Nobody wants to write Objective-C, but you had to, to use iOS; now you can use Swift. I'm going to say nobody really wanted to write JavaScript either, which might offend some of you, but you had to, to access the web browser.

Williams: You can't say somebody and then point at me.

Welsh: I think that if you look at a lot of languages, what drives adoption is the fact that you can access this new platform, whatever that is, and platform should be interpreted fairly broadly. Sometimes it's a framework you can access; Kubernetes is so attractive that people will write YAML for it.

Klabnik: I also hate YAML, so we are actually in agreement. I do think this aspect matters a lot, but I also think that those platforms are how people make money, and so these things are connected. I didn't say your language has to be good, just that there has to be some sort of way to sustain the people that are doing work on it in some fashion.

Welsh: The problem with a lot of languages that don't get adoption is that they give you only incremental benefits on an existing platform. Is Haskell compelling enough to use compared to what people already use to do the general programming, the web programming they do? Probably not for most people, but we're seeing adoption in blockchain, where culturally this formal methods aspect is seen as being very important, after a lot of people lost a lot of money.

The Importance of Money in Creating PL

Magnorsky: This is a semi-ethical question, actually sorry, it is a fully ethical question. Why can't we build programming languages without thinking about money? Not the obvious parts, it's obvious that everybody who works should be paid and such, but why is it that we need companies backing a language? That's also really scary; we've seen what happens when companies back a language.

Klabnik: I was on a panel here last year and I sat in this very same seat, and I said the same sentence; I'm just going to say it because I can do it two years in a row. There's this graveyard over that way, where there's this guy named Karl, and he's buried there. The reality is capitalism means we need to care about money.

Williams: She didn't want you to say the obvious bit.

Klabnik: I know, that's why I have to say it, because I said it last year too. The root of it is that you can't escape the fact that money has to exist to make stuff happen in our world. It's not inherently just about people needing to eat; it's that everything is defined in terms of money, so if you want to do a thing you've got to either starve or make money out of it.

Pressler: Yes, I want to say something else.

Klabnik: Mr. Oracle wants to talk about money?

Pressler: Yes, wait for this. Usually when you're creating a new language, you have to create an entire ecosystem and libraries. Even though it could be very artistic to design a new language, in the end you have to write an HTML library and a ZIP file library and a file access library.

Magnorsky: Serialization.

Pressler: One of the things that can actually help reduce the cost of new languages, for people who like them (I don't), is shared runtimes that allow new languages to enjoy a wide ecosystem. An example of that is a language like Clojure. I don't know if you'd call it popular, but it is relatively popular; it's not dead, it's doing well and is very interesting. The reason Rich Hickey was able to do it on his own and still succeed is that he had access to the entire JVM ecosystem; he didn't have to write all the HTML libraries from scratch.

Williams: I also have one thing. I want to return to this idea that, in order to have a successful new language, you have to give access to a new platform. I agree that that's true, but I think there's a second way you can do it: over time, platforms become silos, and if you can be the one to say, well, you can use my thing across all these platforms, that's also a really awesome way to open up a new platform, by bridging the platforms that were individual silos. This is part of the reason why I'm particularly excited about something like WebAssembly, to break down some of the silos that I think have built up.

Just to add, I think that's true, but there's also the inverse: instead of having one siloed platform, opening up a bunch of siloed platforms. That's something that I think Oracle did to be successful with databases.

Pressler: I'm not in that part of the company, so I'm not the one to comment on that anyway.

Why Isn’t Racket More Popular

Magnorsky: It's worth noting that there is a language that is really good for creating languages, and it's a language you work on. I always think that someone should create a language that enables you to easily write languages and provides you with all the tooling, so that if you have an idea, you can write a really small language in a few hours if you're so inclined, and that's Racket. But it is not very popular, and I don't know why it's not popular. Is it just the parentheses?

Klabnik: It's the Lisp. The sad thing about Lisp, is that it's Lisp. Lisp is wonderful, I think Racket is fucking awesome.

Williams: Language.

Klabnik: This is a languages panel, sorry. No, it's totally that same effect. There are too many people that are like, "parentheses, no," or, "I'm a Vim user, not an Emacs user, so my parenthesis support is poor, so no," and that's heartbreaking, but I think that's what happens.

Welsh: Just use Emacs.

Klabnik: Yes.

Pressler: I was a Scheme developer for a while, so I like Racket, but part of the reason maybe that people don't create that many new languages is because they don't really make a difference. It's not that they couldn't, it's just that currently they don't, until somebody figures out how to design languages that actually do, and maybe shows us the way.

The Influence of the Architecture in PL’s Design

Participant 3: A lot of languages have been built for the Von Neumann architecture. What if we change the architecture itself, would we change the way we design these languages?

Magnorsky: 1977 calls.

Welsh: Some languages have been built with different hardware in mind. The Lisp machine had its own hardware, and there was another project that was looking at massive parallelism, and I think there was something called occam. There have been languages there, but then you do get the problem that this new hardware has got to sell; it becomes a new platform. You see in the case of GPUs, you're probably using CUDA or OpenCL, which have already taken off. Well, I guess Vulkan is the new thing; I'm not really in that field.

There is a way in there for new languages, for sure. The thing I'm going to bang on about is the platform: if the hardware defining the platform is compelling enough to actually get some market share, then yes, it does open up a new playground for languages, a new ecological niche, you could say, for languages to grow in.

Klabnik: I don't have an answer so much as I have an admission that on my darkest days, when I have my hardest bugs, I'm, "Yes, maybe this framework is bad, maybe this language is bad, maybe we made a mistake in choosing the Von Neumann architecture. I don't know what to do, I'm going to the pub." I don't know how it would be different, but sometimes I think maybe the mistake is not just the mistakes we're currently making, but that we've built on foundational mistakes.

Williams: Richard Feynman wrote a fair amount about this: we really goofed by insisting that computers be 100% precise. Which is why I am not worried about the singularity, by the way. They don't work at all like brains work, so we'll be chill.

Pressler: I really don't know, but we need to know what those other architectures would look like. Something that is massively parallel, people talk about [inaudible 00:40:20], maybe that would be sufficiently different, but it's too hard to imagine until we see something, and then we could tell.

Klabnik: There's a certain fundamental aspect where the environment shapes the language, which shapes the ways that you think, and so if we had a different one, I totally think we would come up with different languages. It's such a hard problem because we're so constrained by language: how do you talk about a different one? I don't even know how to get around that problem.

Welsh: One place where I can see a new language that works on existing systems, that would be quite a different thing, is the world of microservices. These are great, big, distributed systems, and normally we program them one little service at a time, and if you define a protocol, often it's just you talking to the other team and sort of agreeing what it is. You could certainly wrap this whole thing up in a much less primitive form by defining all the services together and defining the protocols; then you could just generate the code and generate all the protocol bindings and away you go.

I think there are some people working on this. There is a language called Dark, which I don't really know very much about, other than what I heard in a podcast recently, but it sounds like they're tackling that problem, and that could be very interesting.
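
Short of a whole-system language like the one being described, here is a hedged sketch of the weaker version that already works today: the protocol is defined once as shared types, and every "service" compiles against that single definition, so a breaking change fails at compile time instead of in production (the types and services here are invented for illustration).

```rust
// One shared definition of the protocol, used by every service.
mod protocol {
    #[derive(Debug)]
    pub enum Request {
        GetUser { id: u64 },
    }

    #[derive(Debug)]
    pub enum Response {
        User { id: u64, name: String },
        NotFound,
    }
}

// A "service" and its callers both compile against the same types, so a protocol
// change that breaks one side is caught by the compiler, not by two teams talking.
fn user_service(req: protocol::Request) -> protocol::Response {
    match req {
        protocol::Request::GetUser { id: 1 } => protocol::Response::User {
            id: 1,
            name: "Ada".to_string(),
        },
        protocol::Request::GetUser { .. } => protocol::Response::NotFound,
    }
}

fn main() {
    let reply = user_service(protocol::Request::GetUser { id: 1 });
    println!("{:?}", reply);
}
```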

Participant 4: Microsoft Research has some interesting papers on that, in the [inaudible 00:41:40] serverless area, worth reading.

Klabnik: Isn't the language where you define all the microservices and how they work together called YAML? Dark is really cool and really weird, and there's not that much out there about it, and we're not sure if they're just making it all up or not. There are some demos, and they swear they have this proprietary thing, and "watch this video," but you can't use it at all yet.

Williams: The Fyre Festival of programming languages. Amazing.

Klabnik: We'll see, it's interesting. Jai is also in this current state of, you can watch the creator's videos about it, and it looks cool, but it's hard to figure out how good a thing it is until you can actually play with it.

Smalltalk

Magnorsky: I was thinking about something I read the other day, which is that programming as an activity has its own semantics. Take copying and pasting a bit of code: normally what happens, and this is the reality of it, is you copy some code, you paste it somewhere, you change some stuff, and then you just create a couple of bugs. There's a semantic meaning to an activity like that, and the same goes for many other things, like renaming something. Since some of you are actually making languages, shouldn't that be part of the programming language per se? All of these activities that we as programmers have as natural things that we need to do. It's not just text; there's meta-information that we're missing out on, and that's very sad.

Welsh: Hell yes. This was perhaps the Smalltalk idea, so I'm going to go on a micro-rant here. Smalltalk has this whole system (small rant incoming) built around direct manipulation of code. You can stop your program at any time, go and make a change, run it, and it's as close to direct manipulation as you can perhaps get.

The problem with Smalltalk is that it was just incomprehensible what your program was eventually doing, because you could just change anything at any time, and if you're working in a team, they could change anything in there; one plus one equals three and the universe ends. On the other side we got this idea of compilation: you compile your code, that is it, that is your code, and then you run it. I think we need to add in more levels of interactivity, staging is a thing that we might use there, and then there's also the tooling aspect of that.

Getting onto your point about teaching, one of the best things you can do to see how terrible the current state of affairs is, is to take someone who's never programmed before and try to teach them to program, and just watch all of the confusion. "Oh, you didn't save the file" before you ran it, or things like that, or just something ridiculous that way. "You forgot to close your string," and all this minutiae nonsense that we have built up knowledge for, simplistic knowledge that someone else has to pick up, all the ways you can go wrong. It's crazy.

Error messages, which some languages are taking seriously, like Rust, but you've still got things like, [inaudible 00:44:46] says your [inaudible 00:44:47] is, what does that mean? Have I turned into some Elizabethan ghost running around with my head under my arm or something? The whole experience is terrible, but we've built up an immunity to it through familiarity.

Klabnik: That is definitely one of the cool things about Eve: because it was sort of defined in terms of this runtime system, if you wanted to debug your Eve program, you wrote more Eve to inspect the state. You could be, "Oh, I want to see how my memory allocation is doing." Ok, so I write a little window that draws a graph of the memory allocations, because I can query the system at runtime. Now as the program runs, you have a separate window with a little graph, and you go check it out. That dynamism is really cool, for sure.

Pressler: One of the reasons why I push for delimited continuations in the JVM is so that we could easily implement languages like Eve. Going back to your question, I think it's a fascinating topic. I don't know about how to change languages, but I'm very interested in formal methods, and this ties back to why I think dependent types are going to fail, and that is because programming languages are essentially formal languages. They have formal deduction rules, and some of the idea behind dependent types is that when programmers write code, they reason about the code deductively. You say, this module has this contract, and the other module has this contract, and I can specify them and then I can prove some things about them, even though it does not quite follow, but still.

However, I think that there is some informal and inductive reasoning that people do, and I was trying to think of an example of that. Say you're a programmer who comes to a huge code base (this is something I face every day) and you want to touch a small piece of the code, and you don't know whether it's going to affect other pieces of code. You can try to read the code and deduce some stuff, or you can make the following inference. You can say, ok, this code base has existed for 20 years; if it were so brittle that changing some code locally would break something else and not be detected in testing, then this whole thing would have collapsed long ago. Because it exists evolutionarily, I can assume that local changes either don't hurt other parts of the system or they are quickly detected. This is perfectly valid reasoning, but it cannot be formalized in the language.

Just a month ago I saw a great presentation in Stockholm about a tool that does that kind of reasoning mechanically, not on the code but on the process of creating the software, by looking at the commit history. Instead of deducing whether changing one subroutine is going to affect another, you can look at the commit history of all the previous programmers who worked on this, and see whether changes to those two routines were correlated in the commits, whether they were often committed together or not, and you get inductive reasoning, rather than deductive. There are tools that can actually do that, and I thought that was very clever.
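
The talk doesn't name the tool, but the idea can be sketched in a few lines, assuming a local git checkout: run git log and count how often two files appear in the same commit (the file paths and the output format marker are placeholders).

```rust
// Count how often two files change together in the commit history.
// Runs `git log --name-only` and looks for commits touching both paths.
use std::process::Command;

fn main() {
    let file_a = "src/billing.rs";   // placeholder paths
    let file_b = "src/invoices.rs";

    let output = Command::new("git")
        .args(["log", "--name-only", "--pretty=format:@@commit@@"])
        .output()
        .expect("failed to run git");
    let log = String::from_utf8_lossy(&output.stdout);

    let mut together = 0;
    let mut either = 0;
    for commit in log.split("@@commit@@") {
        let touches_a = commit.contains(file_a);
        let touches_b = commit.contains(file_b);
        if touches_a || touches_b {
            either += 1;
            if touches_a && touches_b {
                together += 1;
            }
        }
    }
    println!("changed together in {} of {} relevant commits", together, either);
}
```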

Williams: I do think tooling will solve a lot of this. You mentioned the copy-pasting situation, and how that should be integrated into a language. While I agree that it's hard to get it into the language itself, fundamentally, formalized copying and pasting is called a package manager. This is why people hate npm and stuff like that. That being said, I think the module system of the programming language can largely influence the success of a package manager and the subsequent library ecosystem.

That's one of the things that I think is incredibly important. I'm pretty sure Node succeeded because of the package ecosystem, and I feel very strongly that Rust is succeeding because of its first-class package manager and crates.io. Building a good module system is, to a certain extent, formalizing that cut-and-paste strategy.
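
As a tiny sketch of what "formalizing cut and paste" looks like at the module level (illustrative names only): the shared helper lives in one module and is imported wherever it's needed, so a fix lands everywhere at once instead of in each pasted copy.

```rust
// The helper is defined once; callers import it instead of pasting a copy.
mod text_utils {
    /// Trim whitespace and lowercase, in one canonical place.
    pub fn normalize(input: &str) -> String {
        input.trim().to_lowercase()
    }
}

mod signup {
    use crate::text_utils::normalize;

    pub fn clean_username(raw: &str) -> String {
        normalize(raw)
    }
}

mod search {
    use crate::text_utils::normalize;

    pub fn clean_query(raw: &str) -> String {
        normalize(raw)
    }
}

fn main() {
    println!("{}", signup::clean_username("  Ada "));
    println!("{}", search::clean_query("  LOVELACE "));
}
```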

Conclusion

Magnorsky: Heads up, three minutes left. Do you want to say any closing remarks? I'd love to continue this conversation, but probably not everybody shares my enthusiasm.

Klabnik: One last thing on the design note: it's also true that language design and compilers, you'd think, are sort of different things, but because the compiler is how the language exists, they weirdly blend together, and so this tooling, with refactorings and dealing with this stuff, is so important. Originally we built a separate tool for IDE support in Rust, but now we're sort of making the compiler also support IDE stuff, because it turns out the compiler is what actually understands your code. Writing a separate tool that understands your code is like re-implementing a compiler anyway.

These things do blend into each other, and I think more languages are taking that approach, where the actual compiler or runtime needs to help with tooling because tooling is how you as the programmer interact with the language.

Welsh: I think we're in a real renaissance of languages right now; it's a great time to muck around with languages, with really exciting developments coming, lots of fancy stuff. WebAssembly is very interesting to me, Rust is doing great stuff. The bar has been raised: error messages and so on have to actually be decent now. I'm not personally convinced that the barrier is that much higher, though; I think that's more of a platform thing. For me, it's more of a platform feature than a feature of the language per se, because a language like PureScript, for example, has managed to implement a lot of these things. It's got language server protocol type of stuff, and it doesn't have a huge commercial team behind it. I think my argument is that it's the adoption aspect around it, making a compelling platform, that you need lots of money for. Yes, I still think there's lots of interesting stuff we can do in languages, loads of avenues to explore.

Pressler: One small thing: I'd just like to call for and encourage more empirical research, so we could know what works and by how much, and have fewer religious debates, although I probably sound very emotional to some people.

Williams: I was going to say my final thing would be: we often talk about programming languages as, what is the new hotness? You should go learn a new programming language tonight, instead of going to the pub, or at the pub even, and don't necessarily feel like you have to try to learn the newest one. There's a ton of really amazing old ones out there that will blow your mind.

 


 

Recorded at: Jun 24, 2019