
Andrew Sorensen on Real Time Programming, Live Coding Music, Memory Management


1. We are here at GOTO Con Aarhus 2014 and I am sitting here with Andrew Sorensen. So Andrew, who are you?

Yes, Andrew Sorensen. I am from Brisbane, Australia; I work in research at the Queensland University of Technology, and I have been there for about five years now.

Werner: So yesterday you gave a very melodious keynote, and the sounds came from your computer and you programmed them with many parentheses. So explain yourself.

Yes, I have been interested for a very long time in using the computer for creating music, using it for artistic outcomes. My background is actually in music; I have an undergraduate degree, strangely enough, in jazz trumpet performance of all things. So I have always had this interest in using the computer for making music; in fact, it's probably the main reason I got into programming languages in the first place. In that sense it has been a long but natural progression. One of the things I missed about using the computer for making music was performance, because the music I was making with the computer was always offline; it was almost like working with a stored medium, you would do it offline and render it out as a piece. So this idea of taking music and making it a real-time activity, programming it live, turns it back into a performance again. It's this idea of programming as a performative task for the creation of music.

   

2. So you wrote or created your own language and environment for that. So what’s the name of the system?

The whole system is called Extempore. It's made up of a couple of different parts, and it actually came out of another project I had been working on previously called Impromptu. Impromptu was a Scheme environment that I had been working on since about 2005; it was again a live programming environment for creating music on the fly, and Extempore was a natural progression out of that project.

   

3. You mentioned real time, but your system is also real time in the sense of guaranteed deadlines, so why do you need real time behavior for music?

Yes, there are a few different ways of thinking about real time. On the one hand you can think about hard real-time systems. Martin Thompson is here at the conference, and in his talk he gave a really good example of a hard real-time system, which I thought was great: air defense for shipping, where you have a gun whose job is basically to track incoming missiles heading towards the ship and shoot them down before they hit the ship. That's a good example of a hard real-time system, because ultimately you don't want the gun sitting there going “Shall I garbage collect? Shall I let the missile hit the ship? Shall I garbage collect?” Of course it's going to garbage collect. So it has to be hard real time, it has to meet its deadlines, but to do that you almost certainly need dedicated hardware support; it is very, very difficult to build a hard real-time system without hardware support.

So if you are working on commodity operating systems and commodity hardware, hard real time is almost impossible. On those sorts of systems people usually talk about soft real time, but soft real time is a little bit wishy-washy: if you miss a few deadlines here you catch up there, and as long as everything that happens in a second adds up by the end of the second, in aggregate, that's OK. But music doesn't really work that way, because human hearing is actually very sensitive to timing; it's probably our most time-sensitive sense. We hear rhythmic variation down to about thirty milliseconds or so, and we hear timbral change in sound right down into the millisecond range, so it's very sensitive. In that case you can't afford to miss deadlines; it's almost like the gun shooting down missiles. If you are trying to play things in time and make everything sync up, you really need to meet your deadlines, you don't want to miss any. So it's not hard real time in the sense that we don't have hardware support and no one is going to die if we miss a note, but at the same time you don't really want to miss any notes, because it just sounds bad.

   

4. Your environment is basically a Lisp-y kind of language, and with Lisp that usually means garbage collection. Have you solved the real time garbage collection problem or do you get around it somehow? How does that work?

We definitely don't solve the real-time garbage collection problem; the way we get around it is by not using a garbage collector. Extempore actually has two languages, it's a bilingual environment. The previous project, Impromptu, was a Scheme-based environment, and this project started in part from taking that Scheme environment and shifting it across into the new project. The new project has really been about the development of a new language called XTLang. It's also a Lisp in the sense that it's an s-expression based language, but it's not like a Lisp in that it's a statically compiled language with type inference; it's not a dynamic language, and it doesn't have a garbage collector. It's a manually managed memory environment: we have three memory types, stack memory, heap memory, and also zone memory, or what some people call region memory. That's sort of semi-managed, it gives you some scaffolding, but it's basically unmanaged, for exactly these reasons: we want real-time, deterministic runtime performance. It's very important that we can reason about our code from a performance perspective; we want to be able to look at our code and have some reasonable guarantees about what its runtime profile is going to look like.
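Extempore's actual zone implementation is not shown in the interview; as a rough illustration of the three memory types he mentions, here is a minimal C sketch (hypothetical code, not XTLang): stack allocation, heap allocation, and a bump-pointer region that is handed back in one step rather than being garbage collected.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* A minimal bump-pointer region ("zone"): allocations are O(1) and the whole
 * region is released in one step -- no per-object free, no GC pauses. */
typedef struct {
    unsigned char *base;
    size_t         used;
    size_t         capacity;
} Zone;

static Zone zone_create(size_t capacity) {
    Zone z = { malloc(capacity), 0, capacity };   /* sketch: no failure handling */
    return z;
}

static void *zone_alloc(Zone *z, size_t n) {
    n = (n + 15) & ~(size_t)15;                   /* keep allocations aligned   */
    if (z->used + n > z->capacity) return NULL;   /* deterministic failure, no resize */
    void *p = z->base + z->used;
    z->used += n;
    return p;
}

static void zone_reset(Zone *z)   { z->used = 0; }   /* reuse the same memory */
static void zone_destroy(Zone *z) { free(z->base); }

int main(void) {
    double stack_buf[64];                            /* 1. stack memory: lifetime = scope */
    double *heap_buf = malloc(64 * sizeof(double));  /* 2. heap memory: explicit free     */
    Zone z = zone_create(1 << 16);                   /* 3. zone/region memory             */
    double *zone_buf = zone_alloc(&z, 64 * sizeof(double));

    stack_buf[0] = heap_buf[0] = zone_buf[0] = 0.0;
    printf("zone used: %zu bytes\n", z.used);

    zone_reset(&z);            /* next batch of allocations overwrites the old ones */
    free(heap_buf);
    zone_destroy(&z);          /* releases every zone allocation at once            */
    return 0;
}
```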

   

5. In music, is it somehow easier because you have some idea of what output you are generating, or can you overload the system because you have too many synthesizers, or whatever you call them?

You can definitely overload the system; I overloaded it in my performance the other night, I typed in a bug and produced about a million notes all at the same time. So it's definitely possible to overload the system. The thing about music is, in yesterday's performance for example, my sound card is running at 192k samples a second, and Extempore is effectively writing directly to the buffer of the audio device, and that buffer needs to be filled in 32-sample chunks. That means about 6,000 times a second, which is roughly 166 microseconds per chunk. So that is our window: all of the processing for all of the notes, all of the instruments, and all of the effects that you want to run has to be calculated and fit into one of those time slices. And if you miss one of those slices you start getting glitching in the audio, which of course is something you really don't want. So that's the kind of hard real-time deadline we are talking about. But we are interested in that not just at the audio signal processing layer but also up through the stack, so being able to control, for example, the execution start time of a function call. In Extempore we can specify when we want a function to be executed, and we can also have exception handling for when a function overflows its time deadline, so we have a start time and also an execution time. That gives us some temporal semantics for the language.
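The deadline arithmetic he describes is easy to check; the small C program below (illustrative only) reproduces the numbers from the interview: a 192k sample rate delivered in 32-sample chunks gives roughly 6,000 callbacks a second, about 166 microseconds each.

```c
#include <stdio.h>

/* The numbers from the interview: 192k samples per second, delivered to the
 * device in 32-sample chunks, so each chunk must be produced within
 * 32 / 192000 s, or the audio glitches. */
int main(void) {
    const double sample_rate   = 192000.0;   /* samples per second          */
    const int    buffer_frames = 32;         /* samples per device callback */

    double callbacks_per_sec = sample_rate / buffer_frames;         /* 6000       */
    double deadline_us       = 1e6 * buffer_frames / sample_rate;   /* ~166.67 us */

    printf("callbacks per second:  %.0f\n", callbacks_per_sec);
    printf("deadline per callback: %.2f microseconds\n", deadline_us);
    return 0;
}
```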

   

6. How do you handle when some code overruns its time, can you override what happens? Can you just turn that thing off or what do you do there?

Effectively we put a hook in so that the programmer can define the behavior they want in the case of an overload; it's really up to them to decide how to deal with it. They might decide, for example, to just dump events out of the scheduler. If I had put an appropriate hook in, which I hadn't in the live performance, I could have said that anything past a certain deadline just gets dumped from the scheduler, so if you overload it and an event gets past a certain delay it is simply dropped. A lot of that is runtime hooks that we put in so that the programmer can decide how they want the system to behave under certain circumstances.
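The interview does not show Extempore's hook API; the sketch below is a hypothetical C rendering of the idea, where the scheduler consults a user-supplied hook before running an event that has missed its deadline, and this particular hook's policy is to drop anything more than 5 ms late.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical event/scheduler types -- not Extempore's API, just the shape of
 * the idea: the runtime asks a user-supplied hook what to do with an event
 * that has missed its deadline. */
typedef struct {
    uint64_t due_us;        /* when the event was supposed to run            */
    uint64_t now_us;        /* current time when the scheduler reached it    */
    void (*action)(void);   /* the work the event would perform              */
} Event;

typedef bool (*late_hook_fn)(const Event *);   /* true = run anyway, false = drop */

/* One possible policy: drop anything more than 5 ms late. */
static bool drop_if_very_late(const Event *e) {
    return (e->now_us - e->due_us) <= 5000;
}

/* The scheduler calls the hook only for events that missed their deadline. */
static void dispatch(const Event *e, late_hook_fn hook) {
    if (e->now_us > e->due_us && !hook(e)) {
        printf("dropped event %llu us late\n",
               (unsigned long long)(e->now_us - e->due_us));
        return;
    }
    e->action();
}

static void play_note(void) { printf("note!\n"); }

int main(void) {
    Event on_time  = { 1000, 900,   play_note };
    Event too_late = { 1000, 10000, play_note };
    dispatch(&on_time,  drop_if_very_late);   /* runs    */
    dispatch(&too_late, drop_if_very_late);   /* dropped */
    return 0;
}
```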

Werner: I wonder, is there some way of doing static analysis that says "Ok, that is not going to work"?

Yes, we would love to, and we are certainly looking at doing that. We've done some work on static analysis and we have some working code that will give you a reasonable worst-case scenario, so as long as we have a lot of headroom on top we can make some progress in that area. You're right that in digital signal processing you can do a reasonable job, because you have some expectation of what your performance bounds are. The thing that makes it tricky is the liveness aspect, and by liveness I mean the live performance aspect, this changing and morphing of the system through time. The reason that makes it very hard is that you don't know at the start of a performance exactly where you are going to go. So although you can do static analysis in the small, for example when you compile a function on the fly you can do some static analysis to look at what the timing for that function might look like, it's not so simple to take all of that in the large and see ahead of time how those things are going to fit together. If you can do it ahead of time and it's static and you know your whole codebase, it's a reasonably manageable problem; when you are changing everything on the fly it becomes quite difficult.

Werner: I am not a real-time programmer, but looking at these systems I think the only way to meet your deadlines is to know exactly all the possible cases and, as you say, sometimes install hooks as fallbacks.

Yes.

Werner: So basically you have to think things through.

Yes, to think things through.

   

7. We don't like that as programmers. So yesterday you ran on a standard laptop; how do you deal with the operating system when you have real-time requirements?

Yes, it's hard, and it depends on the operating system. It turns out that OS X, for doing audio signal processing at least, is actually quite a nice platform. One of the reasons is that the audio guys at Apple pushed very, very hard to have the audio thread be the highest priority thread in the whole operating system. In part they managed to get that because, at the time, the sound and music community were sort of the last die-hard Apple users back in the dark old late nineties, and because of that the audio team managed to convince Steve Jobs, probably not Steve Jobs but whoever they had to convince, that audio should be given this very high priority status. The kernel guys actually pushed back very hard, they were kind of "Don't know if you really want to go here, guys", but the audio team pushed really hard and they got there. It's a huge boon for the kind of stuff we do, because we have certain guarantees about the behavior of the audio thread that we wouldn't necessarily have otherwise. On Linux there is a lot we can do with real-time kernels that makes a lot of this stuff quite manageable. Windows is a bit trickier in certain ways, and I don't use Windows a lot, so I am probably not qualified to talk about real time on Windows.
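Extempore's platform-specific code is not shown here. As a generic illustration of what asking for real-time behavior can look like on Linux, the sketch below requests SCHED_FIFO priority for an audio thread through the POSIX scheduling API; it typically needs CAP_SYS_NICE or an appropriate rtprio limit to succeed.

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Ask the current thread for real-time (SCHED_FIFO) scheduling on Linux.
 * Generic sketch, not Extempore's code. */
static int make_thread_realtime(int priority) {
    struct sched_param param;
    memset(&param, 0, sizeof(param));
    param.sched_priority = priority;          /* 1..99 for SCHED_FIFO */
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}

static void *audio_thread(void *arg) {
    (void)arg;
    if (make_thread_realtime(70) != 0)
        fprintf(stderr, "could not get real-time priority (need privileges?)\n");
    /* ... fill audio buffers here, avoiding locks, allocation and syscalls ... */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, audio_thread, NULL);
    pthread_join(t, NULL);
    return 0;
}
```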

   

8. So on the Mac basically the audio threads are real time threads, can you say that?

Well, yes, I mean all threads are real-time threads in a sense; I guess it's all just about what the limits are. But certainly at the kernel level the audio thread is given sort of special privileges, we could say.

Werner: But if you weren't using audio you would be in trouble, if you were doing other things.

Well, it all depends; there are other things we can do. One of the things about real time is that it doesn't necessarily mean fast, although fast obviously helps. Real time is more about guarantees, to some degree. How you go about achieving that depends on the context. Maybe, for example, you only need to guarantee that something happens every second; if that's the case then you have many more options. It's only when you go down to these very low latencies that it becomes more and more critical to have very consistent hardware support.

   

9. You mentioned that in your manually managed language you have three types of memory areas. So what are these zones or regions?

Yes, we have this idea of lexical zones, where memory allocated within the lexical zone is cleaned up when the zone is left, so you can effectively allocate as much memory as you like within a zone and it all gets cleaned up as the zone exits. This works out very well in environments like audio signal processing, because one of the things we can do is have a zone for every tick, for every sample that goes through the system. Any allocation that happens during that tick is then cleaned up at the end of the tick; in fact we don't even have to clean it up, we can just go back to the start and write over it again on the next run through. Of course, one of the tricks with all of this is that we don't want to be allocating at all, so in some sense it's also about making people think a lot harder about their allocation. You really want to do all of your allocation up front, at least to the degree that's possible.
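Again as a hypothetical sketch rather than Extempore's zone API, the per-tick pattern he describes might look like this in C: scratch memory for one audio buffer comes out of a fixed zone, and "cleaning up" is just resetting an offset so the next tick writes over it.

```c
#include <stddef.h>
#include <stdio.h>

/* Per-tick "zone" sketch (not Extempore's API): scratch memory allocated while
 * producing one audio buffer is reclaimed by resetting a single offset --
 * nothing is freed object-by-object, nothing is garbage collected. */
typedef struct {
    unsigned char buf[64 * 1024];
    size_t        used;
} TickZone;

static void *tick_alloc(TickZone *z, size_t n) {
    n = (n + 15) & ~(size_t)15;                     /* keep allocations aligned */
    if (z->used + n > sizeof(z->buf)) return NULL;  /* fixed budget, no resize  */
    void *p = z->buf + z->used;
    z->used += n;
    return p;
}

static TickZone zone;   /* one zone, reused for every audio tick */

/* Hypothetical audio callback: scratch memory lives only for this buffer. */
static void audio_tick(float *out, int frames) {
    float *scratch = tick_alloc(&zone, (size_t)frames * sizeof(float));
    if (!scratch) return;                   /* out of zone budget: do nothing   */
    for (int i = 0; i < frames; i++) {
        scratch[i] = 0.0f;                  /* ...mix voices/effects here...    */
        out[i]     = scratch[i];
    }
    zone.used = 0;                          /* "clean up" = overwrite next tick */
}

int main(void) {
    float out[32] = { 0 };
    audio_tick(out, 32);
    printf("first sample: %f, zone bytes in use after tick: %zu\n", out[0], zone.used);
    return 0;
}
```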

   

10. How does your garbage-collected language interact with the non-garbage-collected one? Or are they completely separate?

In some sense they are completely separate. The garbage collector came along with Scheme, so it's part of the old Impromptu environment that came over. The interaction happens between the two languages; it's reasonably transparent to move between them, but they are also quite isolated from each other.

   

11. It's interesting to see a Lisp-y environment actually becoming a systems language in a way. So you could use it to program system drivers, is that something you could do?

Yes, in theory we'd like to think that you could write an operating system if you wanted to, not that that's necessarily the goal. What is the goal is that we are very interested in this idea of on-the-fly changes to code, and programming not so much as a destination but as a journey, something that happens through time. When you think about programming that way, you start thinking about it more as an exploratory practice, something you are doing when you are exploring a space, and the thing about exploring is that you don't know what you are going to find. In some sense that is why we want this full-stack capability: we want to be able to go and change things anywhere. For example, in the performance I did yesterday, a lot of the performance is generating notes, working at this very high level, so we are doing notes and phrases and chords and scales and so on, changing that sort of stuff on the fly and generating these higher musical structures. But then at the same time we go "Ah, yes, but I'd really like that instrument's sound to change". So we want to jump into the definition of the instrument code and change the digital audio signal processing that the instrument is doing, so we make changes at that level. And then we might go "Oh, but actually we want all the instruments to have a certain effect", so we want to drop right down to where we are writing to the audio device and put some signal processing down there. So we can change things all the way through the whole stack.
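As a small hypothetical illustration of the lowest of those layers, the point where samples are written to the audio device, the C sketch below runs every output sample through one final effect function, so swapping that function changes the sound of everything at once (none of these names are Extempore's).

```c
#include <math.h>
#include <stdio.h>

/* Sketch of a device-level effect stage: every sample that reaches the output
 * buffer passes through one last processing function, so changing that
 * function changes the sound of *all* instruments at once.
 * Hypothetical code, not Extempore's DSP layer. */
static float master_effect(float x) {
    return tanhf(1.5f * x);     /* gentle soft clipping / saturation */
}

static void write_to_device(float *out, const float *mix, int frames) {
    for (int i = 0; i < frames; i++)
        out[i] = master_effect(mix[i]);
}

int main(void) {
    float mix[4] = { 0.1f, 0.5f, 0.9f, 1.3f };
    float out[4];
    write_to_device(out, mix, 4);
    for (int i = 0; i < 4; i++)
        printf("%f -> %f\n", mix[i], out[i]);
    return 0;
}
```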

   

12. So it’s a kind of hot swapping, I think you call it hot swapping?

Yes, that's basically right: we are just compiling a function on the fly and then effectively changing its jump, switching it into the function table.

Werner: So that’s the implementation, you have a table and you change it, that’s it. That’s an atomic change. So that sounds very easy.

It is, well, it's not as complicated as I think people would expect. It's not necessarily hard to do; the thing is that it's a choice. It's about deciding that that's a behavior you want, and understanding that there are then certain sacrifices you make to make it possible. It's not that it's a hard thing to do; it's a thing that a lot of other people don't need, so they don't do it. And for good reason: apart from anything else it's incredibly dangerous. You talk about spaghetti code, you get incredible spaghetti code when you can go and change anything at any time, anywhere you like, and obviously the security ramifications are ridiculous. So it's more a choice; I wouldn't say it's necessarily a hard thing to do.
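The mechanism described above, swapping a freshly compiled function into a function table, can be illustrated with an atomic function-pointer slot. This C11 sketch is a generic rendering of the idea, not Extempore's implementation.

```c
#include <stdatomic.h>
#include <stdio.h>

/* Sketch of hot swapping via a function table: callers always jump through an
 * atomic slot, so replacing the implementation is one atomic store. */
typedef float (*osc_fn)(float phase);

static float saw(float phase)    { return 2.0f * phase - 1.0f; }
static float square(float phase) { return phase < 0.5f ? 1.0f : -1.0f; }

/* one "slot" of the function table */
static _Atomic(osc_fn) osc_slot = saw;

static float call_osc(float phase) {
    osc_fn fn = atomic_load_explicit(&osc_slot, memory_order_acquire);
    return fn(phase);
}

static void hot_swap(osc_fn replacement) {
    atomic_store_explicit(&osc_slot, replacement, memory_order_release);
}

int main(void) {
    printf("before swap: %f\n", call_osc(0.25f));
    hot_swap(square);                 /* "recompile" -> swap the pointer */
    printf("after swap:  %f\n", call_osc(0.25f));
    return 0;
}
```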

   

13. What’s the concurrency model in the language, do you have one, what’s the idea there?

Yes, it's a bit of a layered approach. Extempore has an idea of processes, where a process is fairly similar to a preemptive operating system thread in a sense; the difference is that an Extempore process is network addressable, so it effectively has a socket, and you connect to it and message-pass between processes. Within a process we then use a style of cooperative concurrency, which comes about in part because of this timed behavior, the fact that we can schedule function calls. So one of the things we have is this idea of temporal recursion: a function that schedules itself into the future as its last action, so it's a recursive function, but a recursive function through time.

So the function will be called, it will do whatever activity it needs to do, and then it schedules itself onto the scheduler to be called back in the future. The nice thing is that once it has scheduled itself, control returns to the top level and we can run some other function. So we have cooperative concurrency: we can have many of these temporal recursions all running at the same time, and it's a nice lightweight threading model. And because we support continuations, you can also make that behavior look synchronous. So it doesn't have to be a temporal recursion; we have a wait, for example, or a sleep, and the sleep does effectively the same thing, it schedules a continuation to be called back, which then continues through your code. So this cooperative code is effectively a coroutine.
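Extempore's scheduler itself isn't shown in the interview; the toy C sketch below illustrates the temporal-recursion shape he describes, with a task whose last action is to schedule itself one "beat" into the future on a tiny event queue (all names here are hypothetical, and real Extempore schedules against the audio clock).

```c
#include <stdio.h>

/* Toy scheduler sketch: a function does its work and then, as its last
 * action, schedules itself at a future time -- "recursion through time". */
#define MAX_EVENTS 64

typedef void (*task_fn)(double time);
typedef struct { double time; task_fn fn; } Event;

static Event queue[MAX_EVENTS];
static int   queue_len = 0;

static void schedule(double time, task_fn fn) {
    if (queue_len < MAX_EVENTS) {
        queue[queue_len].time = time;
        queue[queue_len].fn   = fn;
        queue_len++;
    }
}

/* pop the earliest event (a linear scan is fine for a sketch) */
static int pop_earliest(Event *out) {
    if (queue_len == 0) return 0;
    int best = 0;
    for (int i = 1; i < queue_len; i++)
        if (queue[i].time < queue[best].time) best = i;
    *out = queue[best];
    queue[best] = queue[--queue_len];
    return 1;
}

/* a temporally recursive task: do its work, then reschedule itself one beat later */
static void metronome(double beat) {
    printf("tick at beat %.1f\n", beat);
    if (beat < 4.0)
        schedule(beat + 1.0, metronome);
}

int main(void) {
    schedule(0.0, metronome);
    Event e;
    while (pop_earliest(&e))   /* the "run loop": always run the next-due event */
        e.fn(e.time);
    return 0;
}
```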

   

14. This self scheduling is not a kind of trampolining, is it?

I wouldn't call it trampolining as such; I would say it's more about just placing them, because they are all timed, and I guess that's one of the distinctions. The scheduler then becomes the main target for the programmer, and I guess that's what's unusual: the programmer is explicitly scheduling the times at which these things are going to be called back.

   

15. I see, OK. So your processes are network addressable; is that for distributed computing, as in multiple boxes?

Yep, both, multiple boxes and multiple cores. We actually use a really old coordination pattern, tuple spaces, which came out of the Linda language many, many years ago, and we do coordination with tuple spaces for various things across the distributed system.

Werner: You put tasks in a pool and take them out.

Yes, that’s right.

   

16. In your music processing what do you put in there?

Well, for example, when you are working with a networked ensemble, so more than one performer, you might want to share things like the current scale we are using and the current chord we are using, and of course you want to know when those things change, because those are very important. There are also things like the metronome: what is our time at the moment, how many beats per minute are we running. So even in a music performance there are lots of things that can be shared across a distributed, coordinated group.
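As a loose illustration of the tuple-space idea applied to this kind of shared musical state, here is a toy, single-process C sketch; a real Linda-style space is concurrent and distributed, and none of this is Extempore's implementation.

```c
#include <stdio.h>
#include <string.h>

/* Toy tuple-space sketch: performers "put" shared musical state (tempo,
 * current scale, ...) and others "read" it by key. Single process only. */
#define MAX_TUPLES 32

typedef struct { char key[32]; char value[64]; int used; } Tuple;

static Tuple space[MAX_TUPLES];

static void ts_put(const char *key, const char *value) {
    for (int i = 0; i < MAX_TUPLES; i++) {
        if (!space[i].used || strcmp(space[i].key, key) == 0) {
            snprintf(space[i].key,   sizeof(space[i].key),   "%s", key);
            snprintf(space[i].value, sizeof(space[i].value), "%s", value);
            space[i].used = 1;
            return;
        }
    }
}

static const char *ts_read(const char *key) {
    for (int i = 0; i < MAX_TUPLES; i++)
        if (space[i].used && strcmp(space[i].key, key) == 0)
            return space[i].value;
    return NULL;
}

int main(void) {
    ts_put("bpm",   "120");        /* one performer sets the tempo            */
    ts_put("scale", "D dorian");   /* ...and the scale everyone improvises in */

    printf("tempo: %s bpm, scale: %s\n", ts_read("bpm"), ts_read("scale"));

    ts_put("bpm", "96");           /* a tempo change is visible to everyone   */
    printf("new tempo: %s bpm\n", ts_read("bpm"));
    return 0;
}
```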

   

17. So to wrap this up, are there other uses for this? Or is it only for custom music?

We'd like to think it could be useful for lots of different people. One of the areas we are looking at more specifically is some work we have been doing at the Australian National University with physicists, looking at the idea of using a lot of these live, on-the-fly programming concepts, this ability to change and morph your program while it is running. We are hoping that physicists will find that interesting for exploring long-running computational simulations, particularly on high-performance computing clusters. The idea would basically be that a physicist could sit down, we start with something like a particle-in-cell simulation running across a cluster of machines, and, while perhaps viewing the current state of the system, they could modify the algorithm on the fly to change its behavior as a means of exploring it. We are hoping that proves useful, but we are only just starting the process.

   

18. I guess our audience is just chomping at the bit to get their hands on this, can they get their hands on this?

They can, yes; it's all up on GitHub. If you just search for GitHub and Extempore you'll find it. [Editor's note: https://github.com/digego/extempore ]

Werner: Well, thanks a lot and we'll all check it out.

Cheers.

Dec 13, 2014
