
Functional Composition


Summary

Chris Ford demonstrates that music theory can be delightfully represented as code. He shows how to make music starting with the basic building block of sound, the sine wave, and gradually accumulates abstractions culminating in a canon by Johann Sebastian Bach. Examples will be live-coded in Clojure.

Bio

Chris Ford works as a consultant for ThoughtWorks. He began to make music with code to compensate for his poor piano technique. It was later that he realized that programming offers deep insight into musical structures. Over the past few years, he has given many talks presenting music theory to programming audiences, covering topics like classical music, jazz, central African polyrhythms and more.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Ford: There's this well-known truism that I've even heard on stage at QCon before this year, that software is eating the world. If that's true, it puts us in a position of unique responsibility. What we're telling everyone, we're telling our family, we're telling the government, we're telling people we've never met that it is safe and advisable for them to let us encode their economy, their culture, their healthcare system into programming languages.

We're saying that this is a good thing, and we're saying, "It'll be good because it'll make it web scale," and they'll say, "What is web scale?" We'll say, "Well, don't worry about it, it's very good. It's good that we're doing that." I think, at least at our stage in the software industry, we're not really mature enough to fully bear that responsibility. If you say that software is eating the world, you might say that it's not really paying attention to the taste. It's not really doing justice to the things that it's consuming.

By that I mean that almost all our efforts as an industry are around coping with problems that we have created, rather than solving the problems of the world themselves. You can tell by the names of things, right, because they're almost always negatively named in contrast to the previous thing you told people was good. Immutability, "Oh, that's good because we used to do this thing with memory where we changed it and it wasn't so good." Serverless, "Why is that good?" "Well, we used to have these things called servers, and now we have all these problems with them, so we're trying to fix it." IoT security. "Well, we've connected your fridge to the internet, now it's mining Bitcoin." They say, "Bitcoin? You're trying to put money into digital?" We say, "Yes, it will be awesome."

I don't think we really necessarily pay proper respect or attention to domains other than ourselves, so I think this is very solipsistic. One of my hopes is that by the time I retire, a few decades hence, software will be about what it's about, not about other software. Don't get me wrong, I really like talks about software. I like being a programmer; I think most people who are programmers do. If you went to a - I guess I think of these kinds of things as not so much conferences as festivals - if you went to a literary festival, you wouldn't find an author who was on stage talking just about typesetting or their routine for how they get up and make themselves write every morning. You might find some of those things, but you'd also find them talking about the actual subject matter. If you went to a music festival, you wouldn't find every song to be about writing songs. Maybe you'd find some clever ones that were and maybe they'd be interesting, but a lot of them are about other important things like politics, love, things like that.

What I'm going to try to do today is not to give you a talk about software. In order to let me give it to you, it needs to be through the medium of software. In this talk, software is not so much the subject but the medium of expression, the paintbrush, the instrument.

Music as Programming Language

I said that we'll be talking about music, so let me show you some music. This isn't really music, this is notation. This is a way of encoding music so that it can be understood, and it comes from a very specific cultural context. If you're part of the European musical tradition and you want to be part of an orchestra, you want to play with a lot of other people, everyone needs to understand exactly what's going on, and so you need to know what note to play at what time.

Probably a lot of people in this room will already be familiar with, roughly speaking, what this means, but just to be a bit more specific, every dot represents a sound, and the height of the dot on the stave represents the pitch. A higher dot means a higher pitch, and the horizontal position tells you when it happens. You can see, if you trace that melody, it's really quite close to a graph with just two axes. This is really well suited to its purpose. I'm going to claim that it's a regular language, in the sense of regular expressions: one that could, in theory, be evaluated on a finite state machine.

That's because musical notation is designed to be executed on a peculiar kind of finite state machine, called a musician. If you're there on stage, you do not have time to unbundle nested layers of abstraction and recursion, however many layers it takes to play a video on YouTube. There's all these different layers of abstraction that are unwinding back down to the CPU. If you're a human reading this notation, you don't have time for that stuff. You just need a literal notation that maps one dot onto one note, roughly speaking.

The problem with that is that it can't express abstractions that aren't common to the whole genre of music. If you want to add a new symbol to this language, you can't do it in the language itself. If you want a new kind of marker that means that you want all the people to hit their violins with the back of the bow to make a percussive sound, you can't do that in the language itself, you can't extend the language. If we use other kinds of notation, for example, programming languages, we can do that. That's the concept of the talk.

Live Coding Music

I'm going to do these examples in Clojure, but that shouldn't really concern you. I'm just going to be evaluating code from my editor and showing you the results. Hopefully, the actual nature of the programming language is not really material to the understanding.

I had a good friend who I was talking to about an upcoming talk, and her advice was, "Never, ever do live coding." Then her second bit of advice was, "If you are going to do live coding, make sure it's just a small bit." Then her third bit of advice was, "If you are still going to do that, make sure it's not in a language unfamiliar to people."

I'm going to break all three of those pieces of advice and I hope you understand because the point of this is not to be about the programming, but about the music. If you see this little routine thing here, this paragraph of code, this defines an instrument using the overtone synthesis library, and it takes one argument, which is a frequency, and just plays a sine wave at that frequency.

What that means is that when the sine wave is oscillating 300 times a second it's higher; if I do 100 times a second, it's lower. To prove to you that it is in fact a sine wave, what would happen if we had two sine waves that are very close together in frequency? You'd expect alternating patterns of reinforcement and interference. Can you hear that throbbing? That's the sound alternately reinforcing and cancelling itself out. Sine waves alone aren't really good enough to model the music I showed you earlier, because notes happen at a particular point in time, whereas a sine wave goes on forever.
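For reference, a minimal sketch of such an instrument in Overtone might look like this; the namespace and the name tone are assumptions of mine, not necessarily what was on screen:

    (ns functional-composition.core
      (:use [overtone.live]))

    ;; A bare instrument: one argument, the frequency in hertz,
    ;; played as a single sine wave.
    (definst tone [freq 440]
      (sin-osc freq))

    ;; (tone 300)              ; higher
    ;; (tone 100)              ; lower
    ;; (tone 300) (tone 300.5) ; two close frequencies throb as they beat
    ;; (stop)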

The second thing that I write here is called beep, which takes two arguments, the frequency and the duration. When we play that, the note will last for a certain length of time, and then we use what's called an envelope, which controls the amplitude of the sine wave; as it closes, the note fades away. Let's play a beep. My aim is to continually accrete these different levels of abstraction so we can get back towards that piece of music that we originally saw.
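A sketch of such a beep, using one of Overtone's percussive envelopes (the exact envelope parameters here are guesses):

    ;; A sine wave shaped by a percussive envelope: the note lasts
    ;; roughly dur seconds, then the envelope closes and it fades away.
    (definst beep [freq 440 dur 1.0]
      (* (env-gen (perc 0.01 dur) :action FREE)
         (sin-osc freq)))

    ;; (beep 300 1.0)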

The reason why the sounds that we've heard so far don't really sound so realistic - they sound like they might come from a video game or a cheap piece of electronics - is that a real sound does not consist of a single sine wave. If you have something that resonates 100 times a second in the physical world, it's also going to - to a lesser extent - resonate at 200 times a second with a second wave that's half the size. It's also going to resonate at 300 times a second. You're literally going to have a harmonic series of different frequencies that are superimposed on each other to make a sound.

I don't want you to just trust me on that. This is a slightly more complicated instrument called a bell. This not only has the frequency and duration, but has the proportions of the different members of that harmonic series, so h0 is the base frequency and then h1, h2, etc. You can see, if you look at the numbers on this line, that they're slowly fading away; the higher harmonics are not as present. When we mix all those together, we get something a bit more realistic. This is the beep, this is the bell.

In fact, we can do slightly better, because what I've done so far, as you can even see with a commented-out bit of code, is give you pure harmonics that are exact multiples of the base frequency. It turns out that bells, because of the way they're built, have higher frequencies that don't follow the harmonic series exactly. We go up to the one that's theoretically four times the base, and it actually resonates at 4.2 times the base, and the fifth one, the one that's theoretically five times, is actually at 5.4. If we model this in our synthesis code, we'll get an even better bell. It's much more realistic, and your memory of hearing bells is activated by that pattern of harmonics.
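Putting those ideas together, such a bell might be sketched like this. The 4.2 and 5.4 ratios are the ones mentioned above; the 6.8 ratio and the default proportions are illustrative guesses:

    ;; Mix one sine wave per partial, each in its own proportion,
    ;; with the higher partials dying away faster.
    (definst bell [freq 440 dur 10.0
                   h0 1.0 h1 0.6 h2 0.4 h3 0.25 h4 0.2 h5 0.15]
      (let [ratios      [1.0 2.0 3.0 4.2 5.4 6.8]
            proportions [h0 h1 h2 h3 h4 h5]
            component   (fn [ratio proportion]
                          (* proportion
                             (env-gen (perc 0.01 (* proportion dur)))
                             (sin-osc (* ratio freq))))
            whole       (mix (map component ratios proportions))]
        ;; free the synth once the sound has died away
        (detect-silence whole :action FREE)
        whole))

    ;; (bell 600)          ; all partials present
    ;; (bell 500 10.0 0.0) ; fundamental dampened entirely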

Speaking of things being activated by harmonics - I'm not going to go into this topic too deeply, but I've spoken so far as though music were physics, an abstract platonic realm, but really it's a signal. The interpreter of the signal is very important to the end result. I've just got a quick little demo to prove that. I've got three little function invocations of the bell, at 600, 500 and 400 hertz, so high, medium, low.

The trick with this is that I'm actually passing additional arguments to two of the three calls. What they're doing is setting harmonics of the base frequency to zero. In the second invocation, the first member of the harmonic series, the theoretically most important one, is being completely dampened. In the third invocation, both the base frequency and the one that's double the base frequency are being set to zero. So where it sounds high, medium, low, the base frequency of the first call is 600, but the lowest frequency actually present in the second call is 1,000, because we've wiped out the 500, and the lowest frequency in the third call is actually 1,200, because we've wiped out both the 400 and the 800. It still sounds, assuming I've primed you well enough, high, medium, low.

The reason that should at least work is that that phenomenon - physical sources of sound having a harmonic series in the sound they produce - is something that your hardware is aware of. You, as a human, encounter many situations where that's the case. It turns out you can error correct. Your mind can actually replace or infer what the base frequency should have been. If you hear a sound with components at 200, 300, 400, 500 hertz, that implies there should have been a 100 hertz sound underneath it. That's quite important, because, for example, if you're listening on really small speakers or on an old mobile phone, they might not be able to reproduce the bass frequency of the voice of the person you're talking to, but you can still reconstruct it, because you, as a receiver, know enough about what the sound should be.

Converting Octaves into Code

The puzzle I'm left with so far is this: we know that we have frequency, which is what controls the pitch, and we know we can sound it for particular periods of time, but if you take any real instrument, or at least most real instruments, you don't see a slider. This is, in some literal sense, a digital instrument, in the sense of being discrete. Instead of being able to slide the frequency and play anything on the frequency spectrum, I have to push specific buttons.

What guided the choice of the instrument maker to provide these buttons and not other ones? To understand that, you need to know something about what an octave is. I won't get too deeply into it, but basically, an octave is when you double the frequency of a note. Whenever you double the frequency of a sound, you move up to the next octave, so a C in one octave becomes a C in the next. In some sense, when you're listening to that melody, you treat it as equivalent. This is close to being universal in musical cultures, but it's an aspect of how we interpret sound, not the sound itself. If you play a little melody, and then you shift it, doubling the frequency of each note, you can detect that it's a different melody, but it is in some sense equivalent. If you accept that every time we double the frequency we have an equivalent note, what Western music does is divide that doubling into 12. Some Arabic scales do 24, so it's a cultural choice.

If you divide the distance from one note to the same note an octave up into 12, that means each jump multiplies the frequency by the 12th root of two. We can express that in code quite easily while being faithful to the domain. If we look at this little bit of code here, it converts from MIDI, which is a numbering system that gives an integer to every key, through to frequency, just by repeatedly multiplying by the 12th root of two.

If you convert 69, which is concert A, it comes out as roughly - floating point arithmetic aside - 440 hertz, which is what it's defined to be. If I increase the number, so I go up to the next one, that's 466.2 hertz, which is the 12th root of two times 440. By the way, 440 is just an arbitrary standard, and it's relatively recent, like standardizing the voltage in a particular country. There was a certain period of time when a C in one orchestra wasn't necessarily the same as a C in another country. We can automate that.
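That conversion can be sketched in a couple of lines (the name midi->hz is mine, not necessarily what was on screen):

    ;; MIDI note 69 is concert A, defined as 440 Hz; each step up
    ;; multiplies the frequency by the twelfth root of two.
    (defn midi->hz [midi]
      (* 440.0 (Math/pow 2.0 (/ (- midi 69) 12.0))))

    ;; (midi->hz 69) => 440.0
    ;; (midi->hz 70) => 466.16...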

We can actually do a better version of the bell that doesn't take a raw frequency; you just give it which button you want and it plays that note. We have this new function, ding, and if I play it, we get the same effect as when I was playing the adjacent notes on the melodica. We're getting closer and closer to the domain, in theory.

There's probably a little bit too much code on this slide. I'll walk you through just a little bit of it, but we'll be using this in action soon, so feel free to let it wash over you and we'll get back to the actual music shortly. This is just a mini framework that I'll need for the rest of the music in the talk. A note, by definition, is just a structure that has a time and a pitch in this little system I have. This is Clojure, so this is the way you define a map; the keys are the yellow ones and this is the value.

We need a function called "where", which applies a function to one of the elements of a note. You can say where :time is doubled or where :pitch is doubled. We need this function "from", which seems almost so trivial that you might not really think it's useful; it just creates an offset function. If I go (from 4), that returns a function. If I do ((from 4) 80), I get 84. It's just building a translation, if you think of the graph.
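Minimal sketches of those two helpers, under the shapes just described:

    ;; from builds a translation: a function that adds a fixed offset.
    (defn from [offset]
      (partial + offset))

    ;; where applies f to the value under key k in every note.
    (defn where [k f notes]
      (map (fn [note] (update note k f)) notes))

    ;; ((from 4) 80) => 84
    ;; (where :pitch (from 12) [{:time 0 :pitch 60}])
    ;;   => ({:time 0 :pitch 72})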

Now, I can start doing more interesting things and creating more and more abstraction. I can create a play function that just takes a whole bunch of these notes that have time and pitch and plays them. I can even do other things: for my own purposes, I can create an even-melody function that doesn't make any distinctions in timing. It just plays a set of notes with exactly the same space between them.

An even melody of the notes from 70 to 81 sounds like this [Plays notes]. You notice, by the way, that we're just using a lot of standard library functions here. range is just a way of creating the adjacent integers from one number to another. I am going to play this again, and this is going to lead us into the next thing that's wrong with our music. It sounds logical as a sound - you know what to expect as the next note - but it doesn't sound like something that would be in an actual song. That's because there's still a layer of abstraction that we haven't gotten to.
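A sketch of even-melody, pairing each pitch with the next whole beat:

    (defn even-melody [pitches]
      (map (fn [beat pitch] {:time beat :pitch pitch})
           (range)
           pitches))

    ;; (even-melody (range 70 81)) ; the eleven adjacent notes just played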

Scale

Scale is something you might have heard of, even if you're not a musician. What a scale defines is which notes, of the 12 that you could theoretically play in a given piece, you are going to focus on - which subset. In the case of a major key, if I'm starting from this C note, the C major scale is all the white notes, ignoring all of the black notes; it's a little bit more musical. The way that you move to different keys is that you substitute in black notes and leave out white notes. I think we can represent that a little bit better in code than on the melodica.

A major scale is defined by the jumps between the notes we include. There's a double jump between the first and the second note, a double jump between the second and the third, then a single jump, then three double jumps, and a single jump to get back. A scale is defined by a pattern of inclusion and exclusion. We can just represent that as a function.
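One way to capture that as code - a sketch in which a scale is a function from scale degree to semitone offset (this version only handles ascending degrees; the talk's actual definition may differ):

    (defn scale [intervals]
      (fn [degree]
        (apply + (take degree (cycle intervals)))))

    ;; double, double, single, double, double, double, single
    (def major (scale [2 2 1 2 2 2 1]))

    ;; (map major (range 8)) => (0 2 4 5 7 9 11 12)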

There are other alternatives. We can have all sorts of different scales that have different patterns of inclusion and exclusion. Let me play a couple of them.

By the way, when we say that we want C sharp major rather than C, the difference is our friend from: just an offset at the start. We still have the same pattern of notes, but where the first note is can be shifted. If we play C sharp major, then the first note is C sharp, and then we keep going up with that pattern of double jump, double jump, single, double, double, double, single.

If we go to D sharp major, we've got the same pattern of notes, but translated in pitch space. We can do a minor key; that feels quite different, it feels sadder. At least to people with my cultural context, it probably does. The only difference is that in minor we're rearranging the order of our inclusions and exclusions.

Participant 1: So maybe a three at the end then?

Ford: A three? I guess that's a melodic minor, rather than a harmonic minor. Good to see that there are people who understand the domain. Let's try blues, which natively does have triple jumps in it. The blues scale has a triple jump between the first note and the second, and then a double, a couple of singles, a triple, and a double.

The blues scale sounds like this [Plays notes]. What I think is really cool is the pentatonic scale. It comes from a world away from the blues scale culturally, but it's quite similar; you can see these two lining up. The only difference is that the blues scale splits one of the double jumps in the pentatonic scale into two single jumps. I'm going to play them side by side and I think, even though they're very similar, you'll get a very different feel.

This is D sharp blues and this is D sharp pentatonic. The second one probably sounds like East Asian music, because that's a context in which that scale is used. The chromatic scale, in which every jump is a single jump, is the degenerate case where we just get back to our original pattern of playing every note.
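Under the same sketch, the other scales mentioned are just different interval patterns (the minor here is the natural minor; the talk's exact definitions may differ):

    (def minor      (scale [2 1 2 2 1 2 2]))
    (def blues      (scale [3 2 1 1 3 2]))
    (def pentatonic (scale [3 2 2 3 2]))
    (def chromatic  (scale [1]))

    ;; (map blues (range 7))      => (0 3 5 6 7 10 12)
    ;; (map pentatonic (range 6)) => (0 3 5 7 10 12)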

We're now ready to start building what you might call real melodies. The code on the screen is "Row, row, row your boat." The way that I've built it in this case is that we let pitches be a list of the pitches - "Row, row, row your boat" - and the durations be the duration of each note. When we slam it all together, we get data, because we are representing this as data before we play it.

This is just a sequence of the notes, similar to if you'd written it out on a page, but rather than being a visual representation, it's a quasi-JSON representation. The one last thing I need before I can play "Row, row, row your boat" is a beats-per-minute function that tells me how long one note is, because in the same way as we needed a translation function to tell us that if we're playing in C major, C counts as zero, we need a similar thing to scale the time. I won't go too much into beats per minute, but basically, it just multiplies the size: if one beat takes 1,000 milliseconds, you can just multiply the beat count by 1,000 to get where you need to be.
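A sketch of such a beats-per-minute function, as a translation for time analogous to the pitch translations above:

    ;; Convert a beat number into milliseconds: at 120 bpm, one beat
    ;; lasts 60000/120 = 500 ms.
    (defn bpm [beats-per-minute]
      (fn [beat]
        (* beat (/ 60000 beats-per-minute))))

    ;; ((bpm 120) 1) => 500
    ;; (where :time (bpm 120) melody) re-times a melody written in beats.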

This is "Row, row, row your boat." The nice thing about doing it this way is I now have single point of control over some interesting things in the music. I can, for example, by taking the bit where I translated or stretched the notes over time, I can make that faster if I want and I could also play it in E major instead. Not only can I change things, but I'm making important things about this piece of music directly visible to you.

Creating Custom Abstractions

I promised when I was showing you the piece of Bach that it's possible to create custom abstractions for a particular piece or genre. As an example of that, there are many parts of Bach where you end up with all these adjacent notes. You see that top line? There's a whole bunch of notes. They sometimes go up, they sometimes go down, they're like a mountain range, but they're all adjacent. You should be able to take advantage of that to come up with a more concise way of specifying that series of numbers.

This run function, which is a little bit ugly, does that. The nice thing about it is that now it's written, if you see that 0 4 -1, it means I can just define the peaks and valleys of the mountain range and the function will fill in all the numbers in the middle. I can play something like this [Plays notes]; it should sound a bit Bach-like, maybe, or Baroque. I think this is important because if you were having a conversation with a musician and there was a bit of the music with a run of notes from C all the way up to another C, that's how they would express it. They'd say, "Oh, there's a run of notes from C up to C." They wouldn't say, "Oh, yes, there's a C and then a D and then an E and then an F and then a G and then an A, and then a B and then a C."
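A sketch of a run function along those lines: you give it the peaks and valleys, and it fills in every adjacent degree between them (this is my reconstruction, not necessarily the code on screen):

    (defn run [[a & bs]]
      (if-let [b (first bs)]
        (let [step (if (< a b) 1 -1)]
          (concat (range a b step) (run bs)))
        [a]))

    ;; (run [0 4 -1]) => (0 1 2 3 4 3 2 1 0 -1)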

In order to get close to how the domain expert thinks about the domain, you have to come up with these layered abstractions, and the standard music notation won't let you do it.

This bit here, this melody, is actually a different rendering of the Bach piece I keep flashing up on the screen. It's actually a little bit more than that; it's not as easily directly apprehendable, although maybe with further research and tinkering around with it, you could come up with a really nice way of specifying it. The nice thing is I can start to express structures in the music that I observe. This is all composed using the concept of runs, places where there is that mountain range of notes, and the bass part is written using the concept of triples. If you look at this bass part, you end up with these three repeated notes a lot. It's really nice that, in getting close to the domain, we're able to do that.

What Is a Canon

Before I actually start playing that piece, there's another part of the abstraction that I need to explain. That's the concept of a canon. A canon is a musical genre from Bach's time, and it can be really neatly expressed as a higher-order function. A canon is when you have some notes and you accompany them with themselves, but you do some functional transformation on one of the parts. You have a melody, you transform it and you superimpose it.

That means that you can have a lot of different varieties of canon depending on which transformation you select. A simple canon is represented by this transformation; it's just delaying the second melody. You have one melody and then you make a copy of it and delay it a bit, so they overlap. We can express that in our little framework language by just saying it's where :time is from some wait value. That's how we transform.

An interval canon is the direct analogue of a simple canon, but the translation is vertical. It's in pitch: you play the melody twice, one higher than the other. It's quite nice, you can see the symmetry between this function and this function. They're almost the same, except that one applies an offset to the pitch and one applies an offset to the time.

The Baroque people got really clever with some of these things. They also did other transformations, a mirror canon is when you negate the pitch, in other words, when you put it upside down. You have one melody the right way up and another melody upside down. Our little framework lets us express that, I think concisely and faithfully. It just says, "Where you take the pitch and negate it."

A crab canon - so called because in Bach's day they for some reason thought crabs walked backwards rather than sideways - is the time analogue of a mirror canon. You just negate the time, so you flip the dots the other way.

I think my favorite, which is tricky, is a table canon, which is a functional composition of those two transformations. The reason it's called a table canon is that the idea is you would be on the other side of a table from your friend; you would write these dots on a page and you would both play what you saw. One of you is seeing it from the other direction, which, in other words, is two flips, one around the X axis and one around the Y axis.

Because computation can actually throw light on domains as well as obscuring them, we can represent that really directly with this line of code, which just says that the table function is a functional composition of two other simpler functions. The reason I need all this is that the Bach piece of music is a canon, so we're going to need to exploit that structure to play it.
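The whole canon family can be sketched in a few lines, assuming the where/from helpers from earlier; the names follow the terms used in the talk, but the exact code on screen may have differed:

    ;; A canon: some notes, accompanied by a transformed copy of themselves.
    (defn canon [f notes]
      (concat notes (f notes)))

    ;; Transformations that generate the different canon types.
    (defn simple [wait]
      (fn [notes] (where :time (from wait) notes)))

    (defn interval [offset]
      (fn [notes] (where :pitch (from offset) notes)))

    (def mirror (fn [notes] (where :pitch - notes)))  ; upside down
    (def crab   (fn [notes] (where :time  - notes)))  ; backwards
    (def table  (comp mirror crab))                   ; both flips

    ;; (canon (simple 4) row-row) would play "Row, row, row your boat"
    ;; as a round, four beats apart.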

I'm going to start with a simpler canon, which is "Row, row, row your boat." It's designed so that if you play "Row, row, row your boat" once and then play it again with a certain offset, you'll end up with something that hopefully sounds nice. This is a simple canon with a four-beat offset [Plays notes]. We're able to represent that structure directly in code. I'm going to do something a little bit tricky later, so I'm just showing you everything, I'm not going to hide anything. I'm just redefining canon so that as well as applying the functional transformation, it also puts a bit of data in the second part so we can distinguish them.

Rendering Bach in Programming Notation

Now we're finally ready to do what I promised from the start, which was to render that piece of Bach in programming notation. Bach was pretty good at this music business and very good at this canon business. The functional transformation in this piece of music is the functional composition of an interval canon, so it drops by three notes, a mirror canon, so it flips, and a simple canon, so it delays. When Bach was composing this piece, he had to imagine what a note in the original melody would sound like, what it would sound like with those three transformations applied, and the superimposition of both of them.
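As a sketch, that transformation is just a composition of the pieces above; the offsets here are illustrative, chosen to match the description ("drops by three notes... delays"), not taken from the screen:

    ;; Drop by three scale degrees, mirror, and delay by three beats.
    (def canone-alla-quarta
      (partial canon (comp (interval -3) mirror (simple 3))))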

Let's do it without the canon to start with, just so you can hear how nice the melody is on its own [Plays notes]. Now we apply the functional transformation, so we're hearing two copies, with a delay and a translation in two dimensions. It sounds like this. That's a rendering in code.

It's rendering those dots in code, but because it's data we can also do other things, like actually graph it. You should be able to see, visually, the reflection; you can see that it's been translated and shifted down. Once we're in programmatic territory, we don't have to settle for just one representation, we can pick one that suits us.

We also don't need to settle for one particular key or one particular way of presenting this music. We can quite instantly experiment with what would happen if we made changes to the structure of this song. If, for example, we wanted to play it faster, we could.

I'll play it in a minor key - there's a reason why I've not played it in blues, but we can chat about that later. This is what it sounds like in A. You've got this control, this ability - like if you're using a REPL or some instant feedback system, like a hot reloader or your unit tests - to experiment and understand what comes back.

You've also got this ability to start leveraging other tools that we have, like version control. I've done so many iterations of messing about with this and I've always been able to - no matter how badly I've messed it up - easily just revert back to the earlier version. If you want to check out the code from this, it's all on GitHub.

Coding and Playing

I still have one more thing, because it's quite easy to cherry-pick a particular moment in history where composition was very functional. There are some periods in history where it's easy to do this and others where it would be hard. Also, this is not really performance as such. I just played it for you and it happened in front of you, and that was cool, and you believed it because it was happening right there, but it's not really a performance as such.

I do think that when we're talking about music and computation, there is quite a deep relationship between the two, to the extent that the process of a piece of music developing - as new parts come in and are added or subtracted or transformed with new and more interesting variations - is very similar to the process of iterating on a program. What I want to do is build another piece in front of you, but play it as I'm building it, so you can hear the correlation between the act of editing and incrementally improving a program and the development of a piece of music. This is "In The Mood," if you know it.

To start with, I'm just playing the bass line and nothing else. I should do something a bit more interesting; the code is going to get re-evaluated, so it'll play something a bit more interesting now. That's the bass part, so it should probably be lower - there, that's a bit better - but the bass shouldn't really be just a fixed thing, because I want it to walk over the different chords, so I really need to make it a function and map it over the chord progression.

I'm going to map the bass line over 0, 0, 3, 0, 4, 0, which is the standard blues chord progression, but we need to make the bass into a function, so I'll use a little bit of brief Clojure notation, and then say that the pitch is from the bass note of the progression. When we next come around, it should start to do that.
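A rough sketch of that idea, reusing the where/from helpers sketched earlier; the bass figure here is a placeholder of mine, not the figure played in the talk:

    ;; A placeholder one-bar bass figure, as time/pitch pairs.
    (def bass-figure
      (map (fn [t p] {:time t :pitch p}) (range) [0 4 7 9]))

    (def blues-progression [0 0 3 0 4 0])

    ;; Shift the figure to each chord root, one four-beat bar at a time.
    (def bass-line
      (mapcat (fn [bar root]
                (->> bass-figure
                     (where :pitch (from root))
                     (where :time (from (* 4 bar)))))
              (range)
              blues-progression))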

We're walking on the fourth, and then we're back again. We go to the fifth and then we return back home. Then I'm going to include a beat next time round as well, and make it a bit faster. We should probably also add the actual hook. Let's just follow the progression; it's just going to play plain chords the first time round to make sure I've got it right, because, honestly, I'm not sure I have. Let's see how that goes - we should just hear a beat playing a chord. I've sped it up a bit, so it's going to hit the bass on every chord again.

Now, we want to play that cool, arpeggiated "In The Mood" melody. I'm going to take the notes of each chord, sort them, and then just start with three of them, but I need to do a bit more before we can play it, so we'll loop round once more. That's playing now; it plays the hook by repeating the chord's notes.

Now, there's no swing in this, as musicians might notice; it's very straight, just hitting the beat. That's partly because I'm very white, but also because I haven't added swing, but I can fix that a little bit. Let's play just a little bit longer while I do that. What I can basically do is use the same scale function that we used for scales, and make the first half of the beat bigger than the second half, as a definition of swing. I'm going to do that now.

Let's just go to the version I have there; you can hear the swing come round. You see this line - this is the line I was trying to write - it makes the first note two-thirds and the second note one-third. Let's go back into C major. We can do some more stuff with it, so I'm going to make another change, which is just to go from playing the individual notes in the chord to repeating the chords, for a bit of variation in the feel of what we're doing.
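That swing trick can be sketched by reusing the scale helper from earlier: successive half-beats alternate between two-thirds and one-third of a beat, assuming the melody's :time values are whole half-beat counts:

    (def swing (scale [2/3 1/3]))

    ;; (map swing (range 5)) => (0 2/3 1 5/3 2)
    ;; (where :time swing melody) turns straight eighth notes into
    ;; swung ones.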

Questions and Answers

Participant 2: Thanks for that, that was amazing. I'm wondering about the time it takes to learn to play an instrument, versus the time it takes to learn to make music like this - what's the relative difference?

Ford: I think you definitely get better instant feedback from learning a physical instrument. The difference, probably, with this is that I can do things with this that I can't physically play. It might be because of my own shortcomings as a musician, but if I want to play with multiple parts or different sounds, or different kinds of production as part of it, I can concentrate many versions of myself on one version of the code over time. I have unit tests for the music theory functions; I can have feedback cycles that let me concentrate a whole bunch of effort into one iteration. You can't really get that with a physical instrument. You still have to play the last chorus, even though it was just the same as the first chorus and it's redundant. I'm not trying to disrupt music or replace learning an instrument with learning to code or anything like that.

Participant 3: How different would the process have been, not using a functional language?

Ford: The thing you have to do to get this kind of structure is represent the music as some kind of data. I think, if what you had was side-effecting functions that you would call, with sleeps and so on, it would be implausible to actually plug everything together. I don't think you have to use Clojure or Haskell or whatever, but I do think you need that functional design approach where you take your music and represent it as data, because only once you have it as a first-class citizen within your program can you mess around with it.

Participant 4: Do you use this framework to compose your own music?

Ford: Yes. I use it to make my music, or music that I have composed over the years but never been able to actually put in a [inaudible 00:45:57] form. It works best if you're trying to produce electronic sounds, because of the uncanny valley problem. Bells actually happen to be close enough - you can get close enough that if you hear a bell and it's a synthesized bell, it'll probably still sound good. If you try to put the same level of effort into synthesizing a violin, it'll sound awful, because it won't quite be a violin. Electronica works very well, especially things that are unfolding with complex repetition over time. I can tweet links to examples, if you're interested.

Participant 5: Did this approach to music make you appreciate the composers more?

Ford: Yes, because it's really hard to keep in your head, even when you have all the benefits of coding. You look at the functional transformations and you see how things map, and you make mistakes, and it really is hard enough even with all the leverage of modern programming. So for me, yes. I think it depends a little bit on your point of view; some people view deconstruction as destruction. Are you understanding things or taking them apart? Does it ruin some of the mystery? I'm not like that - partly, I guess, that goes along with being a programmer - but I'd imagine that if you didn't like to analyze how the bits worked, maybe you would find it took something away.

Participant 6: Thanks a lot. I was just curious about the sort of piano-like sound that you generated, and how you did that.

Ford: The piano - I cheated, I used a sampled one. Freesound.org has a library of samples that people can use, so I thought I'd try one sound where I synthesize it from scratch and one where I use a real sample.

Participant 7: As a user of another functional language, Python, is this a ploy to keep Clojure alive?

Ford: I've deliberately steered away from engaging in any kind of language wars. I think it maybe makes a good case for both Python and Clojure, which, as it happens, are both data-centric languages, and for the interactivity you get from the Python tooling, with Jupyter notebooks and so on, being a good way to experiment. If anything, I'd just make a plea for peace and love between us: it's a statement in favor of both the Python and the Clojure way.

 


Recorded at: Aug 05, 2019
