00:57:20 video length
Bio Dean Wampler is co-author of Programming Scala and owner/principal of Aspect Research Associates. His areas of expertise include polyglot programming, poly-paradigm programming, and software craftsmanship. He was formerly with Object Mentor and has worked in many industries, including Internet startups, wireless telecoms, medical electronics companies, and tools vendors.
QCon is a conference that is organized by the community, for the community. The result is a high quality conference experience where a tremendous amount of attention and investment has gone into having the best content on the most important topics presented by the leaders in our community. QCon is designed with the technical depth and enterprise focus of interest to technical team leads, architects, and project managers.
I was recently at Object Mentor, which is Uncle Bob’s company and I am actually starting at DRW Trading after the American Thanksgiving, so I have a few weeks here. I'm just enjoying myself and geeking out at QCon.
That's true. I'm a pragmatic guy. I want to get work done and I'm always looking for some way to do things better than the way we've been doing them in the past. I recently got into Scala and wrote the O'Reilly book on it with Alex Payne of Twitter fame. He brought the coolness factor to the book. But I also like Clojure. I'm really interested in Lisp-like languages, and I've done a lot of Ruby, too, and some useful Java stuff.
It is. A few years ago the buzz about functional programming was really starting to build and I realized that I needed to learn more about it, so I decided to figure out which language would be the best vehicle for that learning. I picked Scala in part because it seemed to be a very good pragmatic choice: not only would it give me the functional programming, but it was also very likely to be a language that Java developers would migrate to. It seemed to have that very practical advantage.
It's very interesting that you could take a Java developer, give them Scala, and they could pretty much write Java code in Scala. All the object stuff they are used to is there, but with much more succinct syntax. Without doing any functional programming you can be effective almost immediately, but then, while you are getting comfortable with the language, you can start learning the functional idioms and eventually become a much better functional programmer and hence, I think, a much better overall programmer.
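As a small, hypothetical illustration of that succinctness (the names here are mine, not from the interview): a one-line Scala case class replaces the usual Java boilerplate of fields, constructor, getters, `equals`, and `hashCode`.

```scala
// One line gives us a constructor, immutable fields, and accessors;
// a case class also generates equals, hashCode, and toString for free.
case class Person(name: String, age: Int)

object SuccinctDemo {
  def main(args: Array[String]): Unit = {
    val p = Person("Dean", 42)
    println(p.name)                   // generated accessor
    println(p == Person("Dean", 42))  // structural equality, no equals() to write
  }
}
```

The same class in Java would typically run to twenty or more lines, which is the kind of ceremony a migrating Java developer sheds immediately, before touching any functional features.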
I think that's a legitimate concern. If you really want to learn functional programming, there is a case to be made for picking something purer, like a Haskell or even Clojure, so that you don't fall back into your comfort zone. I do think it takes a bit of self-discipline to push yourself to do the things that you are not comfortable with and don't know, but, on the other hand, there is always the practical issue of "I've got 8 hours of work I have to do each day, and if I can learn a little bit each day, I'll still deliver software; maybe that's the pragmatic choice I have to make", speaking as a normal developer. But, yes, you could say that it would be nice to start with something pure and then go back and maybe pick up a hybrid language like Scala.
6. With a hybrid language like Scala you get object orientation and functional programming, but the interaction between the two might get complex. People claim that mixing both can get complicated. What do you think about that?
I think that's true. I think Scala has actually been very innovative about combining the two paradigms, in the sense that in Scala functions are objects as well as functions, and every object can actually be a function, too, if you give it an apply method, so you can use it as if it were a function. On the other hand, there is this tension between mutability, which is very common in normal object-oriented programming, versus the emphasis on immutability.
I think that's actually a creative tension, because first of all you can use either one, whichever makes the most sense in a given situation, but it really forces you to think in general terms about what's the right approach and how to maximize the benefits of each one. Then, there are other compromises that have to be made; for example, laziness is not baked into Scala in the same way that it is in, say, Haskell. You don't necessarily get all the benefits of a pure functional language, but you get most of them, I think.
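The "every object can be a function" point can be sketched in a few lines (a minimal example of my own, in Scala 2 syntax):

```scala
// Any object with an `apply` method can be used with function-call syntax.
class Doubler {
  def apply(x: Int): Int = x * 2
}

object ApplyDemo {
  def main(args: Array[String]): Unit = {
    val double = new Doubler
    // double(21) is sugar for double.apply(21)
    println(double(21)) // 42

    // And the reverse: a function literal is itself an object (a Function1),
    // so we can call its apply method explicitly.
    val inc: Int => Int = _ + 1
    println(inc.apply(41)) // 42
  }
}
```

This symmetry, where objects act as functions and functions are full objects, is the unification mechanism the answer refers to.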
7. You talked about mutability. It's very convenient to mutate things when you are coming from a mainstream language like Java. If I were taught Scala, you would tell me not to mutate, but the language doesn't force me into any discipline of not doing it, so I guess it's quite hard to do it right in Scala when you have the option. What do you think of that? Do you see it as a problem in your experience?
I do. I think it's potentially a problem. Good programmers will push themselves to do the better thing when they can and only fall back to - let's call them - riskier approaches; risky in the sense that mutability is not so great for concurrency. This is one of the appeals of Clojure - they really have a principled approach to immutability and it's very difficult not to do it the right way, which I think is great. My personal view is that a real software craftsman will try to do the right thing, will learn that mutability should be avoided when possible, and will only fall back on it when it makes sense and when they're aware of the situation to the degree that they're not going to run into concurrency issues.
8. You mentioned that because of the interoperability with the JVM, you had to make some trade-offs; for example, you accept nulls. Scala introduces the Option type, which expresses the same concept as null, but in a cleaner way. Still, you have Java nulls and you have to deal with them at some point, say, checking if an object is null. What do you think about these kinds of trade-offs, where you still have to accommodate legacy functionality in the language?
I think, on balance, it's a trade-off worth making, just because most shops that are already in the Java world have a body of legacy code that they can't throw away or replace, and there is just a tremendous amount of third-party Java software out there that you can leverage. What Scala gives you is a way of fire-walling off some of these potential problem areas. You can do something like wrap a Java API in a Scala API where you always watch for nulls and maybe even use Options going in and out of the API on the Scala side.
I think you can get the best of both worlds that way, benefiting from the 15 years of legacy Java code while at the same time giving yourself an API, or even a DSL, that offers the modern approaches like Option versus null, the functional idioms, closures and so forth. I think that's the best way of doing it. It's a very pragmatic solution. It would be great if we could just get rid of nulls and have a perfectly functional or ideal object-oriented API, but we can't do that, so we compromise.
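The fire-walling idea he describes can be sketched like this (a hypothetical wrapper; `legacyLookup` stands in for some null-returning Java method):

```scala
// Stand-in for a legacy Java API that may return null.
object LegacyJavaApi {
  def legacyLookup(key: String): String =
    if (key == "known") "value" else null
}

object SafeWrapper {
  // Option(x) is Some(x) when x is non-null and None when it is null,
  // so nulls never escape past this boundary into the Scala code.
  def lookup(key: String): Option[String] =
    Option(LegacyJavaApi.legacyLookup(key))

  def main(args: Array[String]): Unit = {
    println(lookup("known"))   // Some(value)
    println(lookup("missing")) // None
  }
}
```

Callers on the Scala side then pattern match or use `map`/`getOrElse` on the `Option`, and the null checks live in exactly one place.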
I think that the most interesting thing is the way Rich Hickey looked at the different kinds of mutability that you might want, and the trade-offs involved in different ways of handling them, and then came up with a mechanism for each of those ways. To be specific, there are atomic operations that can be very fast and don't require much ceremony, so there is a mechanism for those. There are operations that fit something like an actor model - not quite actors, more like autonomous agents that communicate. So he's got a mechanism for that.
For the more sophisticated kinds of operations - what we would normally do as object-updating operations - he has this notion of software transactional memory that very explicitly controls updates to references to data structures, in a way that protects other people who are accessing those data structures. It's essentially the transactional model applied to in-memory changes, as opposed to, like you said earlier, the naïve thing we've always done in imperative languages of banging bits in memory and sometimes paying the consequences for doing it wrong.
I guess there are something like four mechanisms for mutability, each of which fits a particular need that you might run into, and together they pretty much cover everything that you really want to do, and do so in a very elegant way that is not oppressive in any sense. It simply requires you to be disciplined in how you do your work.
As I understand it, the difference between, for example, agents in Clojure and actors in Scala - and also Erlang, of course, which is what Scala copied in this case - is that Scala actors are designed to be a little more autonomous and also transparent, so that they could be running on different VMs, different machines, or in the same VM, maybe in the same thread, maybe in different threads; whereas agents were designed to be a lighter-weight model where everything is still in the same VM, but you still have this concept of passing responsibilities and having isolated, localized spheres where responsibilities are managed, namely the agents. I think they are very similar conceptually, but there are some basic differences that are trying to address different application problems.
The actor model in particular is going to have a little bit of overhead, because you are basically doing message passing, as opposed to a straight method call. You are building up a message, sending it to something that puts it in some queue, and then it's read by this other entity that does the processing and maybe sends a reply. It also affects the way you organize the software: it's not a synchronous flow, it's more like passing messages, going on with other work, maybe coming back later to check for replies - that sort of thing.
That model works very well for some kinds of applications, but you wouldn't do a number-crunching application that way. In a situation like that you'd be more likely to use the software transactional memory, where you might do some computing and put the results in one of the persistent data structures with the managed references, so that other people who are also putting stuff in these data structures are not stepping on each other.
That's a very interesting thing to think about, because Clojure and Ruby have sort of a similar feel in the sense that they are both more dynamically typed than statically typed. Clojure has an interesting relationship with Java, obviously, because it lets you use Java objects, but it doesn't have type annotations all over the place, like you would have in Java or Scala. I think this general debate of static versus dynamic typing is kind of pointless in some sense, meaning that a lot of times it's the application that really should dictate what's best.
If you are building something like a typical website that may need a lot of iterations very quickly, and there is an informal model of the domain, then maybe it's not so important to have the formalism of type theory. But, on the other hand, if you are building something that you want to behave in a mathematically precise way, then it's great to have it. The type system of a statically typed language bakes almost provably correct behavior into the fundamental building blocks. For example, if I'm building a financial application that manages money in some sense, I'd be more likely to want a statically typed language like Scala, where I can very precisely specify the behavior of money.
Then I'd build my account objects and so forth on top of that, knowing that they will be robust at this very fundamental level. But if I'm building a website where users may be specifying withdrawals and transfers, I don't necessarily care about that kind of type safety at that level. I would like to have the dynamism and the productivity that I get from a language like Ruby, so I'd be more likely to use Ruby on that part of the application. I could very easily see JRuby with Rails running the website, and Scala or Clojure business-tier code handling the preciseness of getting the money transactions right.
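A hypothetical sketch of what "precisely specifying the behavior of money" might look like (my example, not his): amounts are `BigDecimal`, so there are no floating-point rounding surprises, and the type makes it impossible to accidentally add different currencies.

```scala
// A minimal money type: exact decimal arithmetic, and a runtime guard
// against mixing currencies (a richer design could encode the currency
// in the type itself).
case class Money(amount: BigDecimal, currency: String) {
  def +(other: Money): Money = {
    require(currency == other.currency, "cannot add different currencies")
    Money(amount + other.amount, currency)
  }
}

object MoneyDemo {
  def main(args: Array[String]): Unit = {
    val a = Money(BigDecimal("10.10"), "USD")
    val b = Money(BigDecimal("0.20"), "USD")
    println(a + b) // exact: 10.30, unlike 10.10 + 0.20 with doubles
  }
}
```

The account objects he mentions would then be built on top of `Money`, inheriting its guarantees.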
13. You mentioned static typing, and Scala is very statically typed. There are two questions; I will start with the first. Some people think that the static type system of Scala, even with type inference, sometimes gets complex compared to what people who develop in Ruby or even in Java are used to. Is that true?
It's an interesting question. The type inferencing mostly hides the complexity of the type system from you, but sometimes it breaks through when you do something wrong and you get an error message that you have just no idea what it means, unless you have some sense of how the type system works. I do know this is actually a problem the Scala community is aware of, and they are trying to work on better tooling that would help you understand what you are seeing and give you better guidance on how to fix it.
I suppose it really comes back to this issue: does the weight of learning and mastering a type system as sophisticated as Scala's give you the benefits in terms of correctness, maybe even provable correctness? If you are not getting enough of that benefit to justify learning it, you shouldn't worry about it; you should just use a language like Ruby where the type system is more relaxed, so to speak. But, yes, it's definitely a drawback of Scala that you do have to spend some energy to master the type system.
I personally think it's actually not as hard as it may appear at first, if you take a step-by-step approach, and you actually find that it's very beneficial - at least for me it is, when I'm writing a library - to think very precisely about what kinds of types can go into a method and what kinds of results come back. The old design-by-contract model. Ironically, I actually learned a lot more about Java's type system by learning Scala. I had sort of ignored Java generics as much as I could, at least.
We all just learn enough to get by, but when I started learning Scala's type system it made me go back and look at Java's and realize the ways in which Scala fixes some limitations, but also the benefits of what they were trying to accomplish in Java, which were sometimes achieved and sometimes maybe not achieved as well as they might have wanted. Anyway, it basically comes down to: is the benefit of this relatively complex and advanced feature something that's going to deliver value in your application every time? If not, do something else.
14. The second part of the question now, which matches very well with what you were saying just now. In Java, errors or exceptions show up in the type system in some way: you have throws clauses on methods that show what they throw, and you can make checked exceptions. A lot of people - maybe not everyone - agree that it's not a good feature; some argue that it's good, some argue that it's not. The same kind of feature, with a different implementation, exists in Scala: Either, which also puts the error in the type system. Was the problem actually the implementation, or do you still think that having errors in the type system is not a good idea?
It's interesting you bring this up, because I had a conversation with some people recently about this, and the Go language that Google has just introduced apparently does not have exceptions; I don't actually know the reasons behind that yet. It's on my to-do list to figure out. I think the problem, essentially, with checked exceptions was that the goal was, of course, to make it very clear through the API what could happen.
You might have this problem arise, hence this exception comes out. The problem is that in practical terms, except for rare cases, when an exception happens the immediate code does not really know what to do to recover. It's just better if the exception propagates through, and your architecture design - whatever term you want to use - is disciplined enough to have points where it does understand how to recover, or not, from an exception, and the exception is handled there.
That assumes you have a disciplined enough design that doesn't just let things fly through until suddenly you have a crash in the system. Either is one of those pair-like types, where what you want back is either an exception, or some other signal of error, or the actual expected return value. I could see using something like that in a very high-reliability system, where I really want absolute type control over the control flow at all times and it's worth it to me to bake the handling of both cases into each bit of code.
I want to recover right away, or not - meaning I might decide I can't recover: let's kill the process, let's restart. I'm probably not likely to ever use that, or use it very often, because I think a better model probably is to throw the exception and catch it, or kill the process. The Erlang view is that processes are very light and you should just destroy a process if it goes wrong and have a supervisor that restarts it and picks up where it left off. I think that's actually a more robust design overall and that's probably what I would do.
Nevertheless, you have this option in Scala of this sort of Either return type: rather than throwing exceptions, which are an out-of-band return, you can actually return an object that represents either the exception that was thrown or the normal return value. It's an option, but I honestly don't think I would use it very much, personally.
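The Either-as-return-value pattern he describes looks like this (a minimal sketch of my own, using Scala's standard `Either` type with the convention that `Left` carries the error and `Right` the normal result):

```scala
object EitherDemo {
  // The signature itself advertises both outcomes: a String error
  // or an Int result. Nothing is thrown out-of-band to the caller.
  def parseInt(s: String): Either[String, Int] =
    try Right(s.toInt)
    catch { case _: NumberFormatException => Left(s"not a number: $s") }

  def main(args: Array[String]): Unit = {
    println(parseInt("42"))  // Right(42)
    println(parseInt("xyz")) // Left(not a number: xyz)
  }
}
```

The caller is then forced by the type to handle both cases, typically with a pattern match, which is exactly the "bake both cases into each bit of code" cost he mentions.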
Yes, so let me push back a bit on that idea. Suppose we could turn the API design around so that we could somehow guarantee that an exception would never get thrown, except for runtime errors like out of memory or whatever. As a consultant, what I used to see a lot was code where there was really no sense of boundaries in the system: every function would have null checks at the beginning, and every time somebody called a function, it would do a null check on the return value. This is classic Java code that we see a lot.
I think it's much more important - again, getting back to the design and architecture issue - to have boundaries where maybe you do have to do some validation on data coming into your subsystem. But once you get past those boundaries, you guarantee to yourself and to your code that everything from there on will be well-structured, well-designed, valid values and so forth, so that you are far less likely to see these exceptions coming back, because you have much better control over what's going on.
If you are thinking through the design and understanding the possible ways in which, maybe even deep down in a module, you might still have an exceptional condition, then maybe you do find some way to document at the higher layers what might actually happen. I'm thinking of the case where, deep down in the system, I'm doing a database call or something where I have little control over those things and I have to know what is going to happen.
I think it's a legitimate question; that's one of the goals of checked exceptions - to make it very clear in the API, in the method signature, what could actually go wrong. But I think in practice it didn't work, and we need to find some better mechanisms for describing that to the user in a robust way. I say "in a robust way" because we all know that comments typically lie, because they get out of sync with the code, but there are ways we could arrange the signatures of our methods so that names, classes and so forth indicate the behaviors that could happen, both good and bad.
The implicit conversion mechanism is a very interesting way of doing something that you would do in Ruby with metaprogramming. For example, suppose I'm writing a DSL and I want to be able to say "one hour from now", where I put in a literal integer 1. Of course, there's no hour method on integer. In Ruby, the way I would typically do that is to open the Fixnum class and add an hour method to it, so that the person using the DSL can actually just call an hour method on a Fixnum.
There are some drawbacks to that that we don't need to get into here, but in Scala we have closed classes, like Java, and because of static typing we don't really have that mechanism of open-class metaprogramming. Instead, what we have in Scala is a mechanism where you can define a conversion method that will take a value and convert it into another type.
In this particular example, I would have an implicit converter method - indicated as such by the keyword implicit - that might take an integer value and wrap it in some special Hour object, let's say, and the Hour object has the hour method that we want. As long as this converter method is in scope, the compiler would say, "He is trying to call an hour method on an integer, which doesn't exist, but I happen to see that there is this converter method that will take this integer to an Hour object that does have that method."
So it will implicitly invoke it to do the conversion, and now I can have my DSL. It's a great feature; the big disadvantage is that it has a little bit of a feel of magic, because you can't just read the code and know what's going on unless you are aware that these implicit methods might be present. It would be obvious that they are present if you saw them higher up in the file, but of course, typically they're brought in through import statements, so they are not necessarily visible. You do have to use it judiciously, but it's a great feature for doing these kinds of DSL constructs.
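The conversion he walks through can be sketched like this (Scala 2 syntax; the names `Hours`, `intToHours`, and the millisecond arithmetic are my hypothetical choices, not from the interview):

```scala
import scala.language.implicitConversions

object TimeDsl {
  // The wrapper type that actually carries the DSL method.
  class Hours(n: Int) {
    def hours: Long = n * 3600L * 1000L // n hours in milliseconds
  }

  // The implicit converter: whenever an Int is used where an `hours`
  // method is requested, the compiler inserts a call to this method.
  implicit def intToHours(n: Int): Hours = new Hours(n)

  def main(args: Array[String]): Unit = {
    // Int has no `hours` method; the compiler rewrites this to
    // intToHours(1).hours behind the scenes.
    println(1.hours) // 3600000
  }
}
```

This is the "magic" he cautions about: nothing at the call site `1.hours` reveals the conversion unless you know `intToHours` is in scope.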
17. Scala interoperates well with the JVM and with Java, so you could actually use any of the frameworks that exist in Java - to mention a few that people use a lot, Hibernate, Spring, and Struts, among others. Would you use them? Some of them conflict with the idioms of Scala.
I don't know of anything that would actually not work with Scala. There were some limitations in terms of Scala's mapping onto Java constructs - Scala generics and Java generics were not compatible for a while - but most of these things have gone away. It really comes down to the question: "Do I want to use a Java-style API, or would I prefer a Scala-style API that is more functional, using closures and so forth?" If you have that choice, and there is such an API available, and say it's a green-field project, I would certainly make that choice.
But if I had a mature Java project and I wanted to ease Scala into the code base, and we're already using Hibernate or Spring or whatever, these things would work just fine, and I might actually write a DSL to wrap them to give me a more convenient syntax, but they tend to work very well. In fact, we had a talk earlier today by Ishay Smith about his experience at LinkedIn, and they use Spring dependency injection all the time; there is Java code and it just works, without any real effort.
I might. It depends. There are some very good Scala libraries for some things, but there is nothing that is quite as mature and as comprehensive as the Spring Framework. I could very easily see using Scala with Spring. I may just not use all the features as much.
It turns out there are some very nice, rather innovative object-oriented mechanisms in Scala for wiring components together in code, rather than doing it in a configuration file at runtime. I might be less likely to use Spring dependency injection, but then I might decide I'm going to use JDBC and I really like their JDBC wrapper, so I'll use that. That's the beauty of it - you've got all this Java world that you can use or not as you see fit, plus all the new Scala stuff that's coming out.
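One of those in-code wiring mechanisms is often done with traits and self-types (all names here are hypothetical; this is a compressed sketch of the idea, not a specific framework):

```scala
// The dependency, expressed as an interface.
trait Repository { def find(id: Int): String }

// A concrete implementation we can mix in (e.g. a test stub).
trait TestRepository extends Repository {
  def find(id: Int): String = s"record-$id"
}

// The self-type declares "this class requires a Repository" -
// the compiler refuses to instantiate it without one.
class AccountService { this: Repository =>
  def describe(id: Int): String = "Account " + find(id)
}

object WiringDemo {
  def main(args: Array[String]): Unit = {
    // Wiring happens at construction time, checked by the compiler,
    // instead of at runtime from an XML file.
    val service = new AccountService with TestRepository
    println(service.describe(7)) // Account record-7
  }
}
```

Swapping in a production repository is just mixing in a different trait at the construction site, which is the compile-time analogue of swapping a bean definition.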
I would really like to see Clojure be successful, too. I think a lot of people are overly put off by the parentheses. Stuart Halloway is very good at pointing out that Clojure actually made a few key interesting choices that eliminated a lot of the parentheses. It's actually not as bad as people think. I do think it's such a beautiful language that, whether you use it in production or not, you should understand it for its ideas.
I can see that in the future we'll be doing a lot more polyglot programming, where we might have a language like Scala or Clojure doing the heavy lifting, but we put something like JRuby on top as a scripting language to glue things together, to give us dynamic configurability, to let us interrogate the system at runtime in a convenient way. You can do some of that already with Scala and Clojure, just because they also have a read-eval-print loop, but especially if you are handing off this wad of JAR files to a web designer or Rails developer, they might just more happily break out JRuby and use it for the web tier.
The languages I'm really interested in are Ruby, because it is so pragmatic and approachable, and Clojure for its ideas and its applicability to a lot of problems. Scala is a wonderful, well-thought-out general-purpose language, and I like to keep up with other things, like Haskell and Erlang, because they are also very important and relevant, either for their ideas or in practical terms.
20. In your experience, there are some languages that prefer to do most things in libraries with very little syntax, like Clojure, Lisp, and Smalltalk. Other languages prefer to introduce more syntax, like Java, for example. Compiler and interpreter writers argue that it's easier when everything is a library, but that's one side. From the consumer, or user, side, what do you think is the difference? Does it matter? If it matters, which is better for people - for usability, for the learning curve, for everything?
I think it's probably better not to have a lot of special-case stuff in the language itself. One of the things that is seductive about Scala is that they made a few conventions for syntactic sugar that make it very easy to put a lot of stuff in libraries. For example, it would actually be possible to write things like for-loops and while-loops in libraries, just using constructs like closures, of course, and some syntactic sugar, like replacing parentheses with curly braces in certain contexts.
As it happens, for-loops are baked into the language, but the point is that if you really think through some key basic abstractions and just a few bits of flexibility, it sometimes opens up a creativity on the library side that gives you the expressiveness you want - the ability to customize the language experience - without actually complicating the grammar or the compilation process. The flip side of the argument, for me, is that I don't object too much to adding keywords to a language when the keywords make sense, because oftentimes it's better - in my opinion - to have a descriptive keyword rather than overload the meaning of another keyword in a way that doesn't make sense.
A classic example I'm thinking of is the keyword "static", which of course originated in C and C++ and is in Java; now that we've all learned what it really means, when you see "static" in front of a declaration it's kind of a funny interpretation of the word as it's come to be used. As a general rule, I think it's better to keep languages very simple. The Lisp philosophy - code is data and, essentially, data is code or can be code - is really the right model to use as a baseline, and then only build abstractions or special cases on top of that when they really justify their existence. That's the way I see it.
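The library-defined control structure idea from a couple of answers back can be sketched in a few lines (my own minimal example): a by-name parameter plus curly braces at the call site make an ordinary method read like built-in syntax.

```scala
object ControlDemo {
  // `body` is a by-name parameter, so it is re-evaluated on each
  // iteration rather than once at the call site - exactly what a
  // loop body needs.
  def repeat(times: Int)(body: => Unit): Unit =
    for (_ <- 1 to times) body

  def main(args: Array[String]): Unit = {
    var count = 0
    repeat(3) {        // looks like a keyword, but it's just a method call
      count += 1
    }
    println(count) // 3
  }
}
```

Nothing here required a change to the compiler; the "syntax" is entirely a library convention, which is the point being made about keeping the language core small.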
I started learning on Basic in high school, which back in those days ran on basically a mainframe - which shows you how old I am. My first professional language was Fortran. I come from a physics background, so I did a lot of number crunching in graduate school, and it was all in batch Fortran, but then I picked up C++ when I became a professional developer.
I think we really have. It's amazing to me how young our industry is and how rapidly it's evolving; we sometimes forget that we're some 50 years old, maybe 80 - depending on when you want to say we started - as opposed to civil engineering, which is thousands of years old. We are very young and there is still a lot that we don't really know how to do, even though sometimes we think we do. But when I look back even 10 or 15 years to Java, it was in a lot of ways cutting edge, although in some ways it was more a gathering together of some really good ideas that came before, such as garbage collection, and the removal of some relatively bad ideas, like explicit memory management.
Nonetheless, it seemed like the perfect language for its time in a lot of respects. Now, when we look back on it, we realize that it's a lot nicer now that we have closures in some of these new languages and we are apparently getting closures in Java, finally. Yes, I do see a lot of evolution, I see a lot of refinement in thinking about what actually works and what doesn't, but, at the same time, rediscovering ideas that have been around for decades, that were still good ideas.
23. You just mentioned that closures might make it into the next Java. Do you really want them to be there? There are two camps: some people want Java to die and be replaced by another language, maybe like Scala, and others want it to live. What's your take on that?
I actually think that's a very interesting question that I've been giving some thought to recently. I would certainly be happy, personally, if Java just stopped where it is and we all moved on to other languages, treating Java as the fall-back position, the common denominator that glues it all together. But I'm worried that if we do that, it could actually be harmful to the Java ecosystem, because, for better or worse, the Java language is seen as the face of the entire ecosystem.
I think people might perceive that if the language is stagnant, then the whole ecosystem might be stagnant, even if there is a lot of activity in alternative languages, even if the JVM is continuing to move forward. I would like to see the community figure out effective ways to add really useful constructs like closures, without overly complicating the language. To be honest, Microsoft has been very good about moving C# forward and I think we can learn a lot of lessons from them about this. I'm actually in favor of keeping the language moving forward.
I think Microsoft has actually been very smart to put a lot of energy into moving their languages forward, because that's the sort of stuff that keeps developers interested, and actually gets them interested in moving to the platform, even if they were on the Java platform and perfectly happy there. I think they've done a lot of innovative things with C#. Getting F# officially supported - an actual functional language - I think is a wonderful move, and it's going to be really great for the industry.
LINQ - language-integrated query, if I remember what the acronym stands for - is a brilliant piece of work that people are still trying to figure out the implications of, but I think it's potentially going to be as revolutionary for the kind of code we write as the software transactional memory stuff in Clojure, which is already becoming extremely influential. It's wonderful having the competition. I intend to stay in the Java space myself, but I'm really impressed with some of the stuff I'm seeing in the .NET space.
25. You mentioned before that there are languages that are inspiring a lot of today's languages, like Lisp - a language from a long time ago that has now inspired many languages. For a long time, Lisp was just far from any serious enterprise developer, but today, people are getting interested in Clojure. Do you think the same thing can happen to Haskell, which today is giving a lot to programming languages from C# to Scala? It's really inspiring a lot of them. Do you think that people can move to such languages, or would it be too strange a language to start with?
There are a couple of issues there. One is what developers understand, and take the time to really understand, in order to use a language. You will always find that there are groups of developers who are willing to make that effort, because they see the value there, and they will do things like implement cutting-edge applications that no one else can implement as easily, even though the language may never go mainstream. Lisp is actually a good example of this.
Paul Graham implemented one of the first web-based e-commerce stores, with a direct individual sales channel, and he did it in Lisp. It was very fast, it was very successful, and it made him rich when he sold it to Yahoo. So there is definitely a case to be made there. The other things that have to be considered, though, are the pragmatic issues: what are the tools like, what are the libraries like - we've been talking about that - do we have really good web-tier libraries for Haskell, and so forth.
A final question, one that is actually an issue for the Scala guys too, is that Scala started out as a research language to explore ideas. Haskell has always been about that, and at some point, for something to become a production-viable language, you have to lock it down, or at least control the evolution a little. It may be that the Haskell community will decide to bifurcate: some people will push a dialect of Haskell that is tuned for industry use, while others will keep a research-oriented Haskell that continues to experiment and isn't so worried about performance or whatever other issues they consider more or less important.
I think it would be fantastic if people started - I hate to use this phrase, but I guess it's applicable - to think outside the box and say, "Why don't we use Haskell? While we're playing with these other functional languages, why don't we take the one that has really been the trailblazer of them all?"
26. The first argument we hear for using functional programming is concurrency. Imagine that I have an application and I really don't care about concurrency. Is functional programming still interesting for me?
That is a neat question, because the same thing happened with objects. My personal view is that the killer app, if you will, that drove adoption of object-oriented programming in the '80s was the graphical user interface, because it was such a natural fit - the idea of these little physical things moving around on a screen. Then we discovered that objects are actually generally applicable. I see the same thing happening with functional programming. I don't personally write a lot of code that's deliberately designed to be concurrent, and yet I find myself writing very succinct code, because I'm using closures, or I'm using data structures like lists instead of building custom classes.
I find that immutability and side-effect-free functions make the code much easier to test and much easier to reason about. Very few bugs creep into my code as a result. I see all these general benefits that you gain from applying these idioms, quite aside from the whole issue of concurrency. I think that's going to hold: even people who are fortunate enough not to have to worry about concurrency will still get a lot out of learning and using functional idioms.
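A minimal sketch of the testability point: a side-effect-free function over an immutable list can be checked with a bare assertion, no setup or mocking (the function and numbers here are illustrative):

```scala
// A pure function over an immutable list: no hidden state, so the
// result depends only on the arguments, and testing is trivial.
def totalWithTax(prices: List[Double], taxRate: Double): Double =
  prices.map(_ * (1.0 + taxRate)).sum

val total = totalWithTax(List(10.0, 20.0), 0.1)
// approximately 33.0 (modulo floating-point rounding)
```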
27. With all these languages come paradigms - now we're talking about object orientation, functions, actors, lots of paradigms. The solution is shaped by the paradigm, by the way you view the problem. How do you choose which paradigm to apply to the domain you are targeting?
In a way, you could see these fundamental paradigms, like functional programming and object-oriented programming, as a kind of assembly-level design choice. Then, on top of that, you might have DSLs that are more domain specific; they may be some hybrid of the two, or they may have their own paradigm and just use these other tools as implementation choices. I think the first thing really is to think about what makes the most sense for the problem you are working on. For example, if you are building rule engines, you really want more of a logic-programming style, and having a lot of objects floating around in the rule system doesn't make much sense.
Even so, there certainly are rule engines written in Java, for example. And certainly a lot of the data transforms we do, like the map/reduce kind of jobs you would run in Hadoop, are much more of a functional thing, where I'm processing data. Most people don't think about this either, but the SQL model is a stripped-down functional model; it even has closures, in the sense that it has stored procedures. Nevertheless, the first point really is: don't assume that one paradigm is the silver bullet that will solve all your problems - which, unfortunately, has been a problem with the way we've thought about software for the last 10 or 20 years - but rather think about what maps best to the problem domain I'm dealing with.
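The map/reduce shape he mentions can be sketched on plain Scala collections - the same computation Hadoop runs at scale, here as a word count over a couple of illustrative lines:

```scala
val lines = List("to be or not to be", "to do")

val wordCounts: Map[String, Int] =
  lines
    .flatMap(_.split(" "))                  // map phase: emit words
    .groupBy(identity)                      // shuffle: group by key
    .map { case (w, ws) => (w, ws.size) }   // reduce: count per key

// wordCounts("to") == 3, wordCounts("be") == 2
```

Nothing in the pipeline mutates state, which is exactly why the same code shape distributes well.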
That may be different from what the team next to me is worried about, and it may vary from day to day: today I need to work on these rules, and tomorrow I'm going to think about how to decompose this large data set, process it, and put it back together, which will be more functional. I do find, though, as far as the trade-offs between one design choice and another, that I'm less likely to think of things in an object-oriented style now than in a functional style. For me it's not so much that I think objects are bloated in some sense - obviously you can misuse anything - but it seems that in most applications we're building today, the notion of any particular domain object is constantly changing.
It's changing because requirements are changing; it's changing because your notion of a customer for the problem you're working on today is different from that of the guy in the next room, who has other concerns. Trying to figure out the one right customer class for those two guys usually isn't going to work. We either end up with lots of little customer classes, or maybe we just decide, "Why don't you use a map to represent the fields you care about,
and I'll use a map to represent the fields I care about?" Guess what? That's what the SQL query might look like: I might ask for the social security number and the packing list, or whatever, for this customer, and you might ask for their bank account information. I do find that I tend to think more in terms of functional data types like lists and maps as the way of getting data back and forth and working with it. I also really like an idea that we get from the Haskell community maybe more than any other: let's understand what the fundamental building blocks are.
Let's get money nailed down; let's figure out exactly what's valid for a street address - postal service requirements or whatever. Let's understand those, nail them down very carefully, and then use them as our atoms in collection data structures to represent larger domain concepts. That's the way I think about design these days. I'm less likely to define that customer class and more likely to use a customer map with these fundamental, well-thought-through types that represent the key elements I care about.
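A sketch of that design style in Scala (all the types and field names here are illustrative): nail down small, well-specified value types, then let each consumer build a map of just the fields it cares about, instead of one customer class trying to serve everyone.

```scala
// Small, nailed-down building blocks. A real Money type would handle
// currency, rounding, etc.; this is just the shape of the idea.
case class Money(cents: Long)
case class StreetAddress(line1: String, city: String, postalCode: String)

// Two views of "customer", each holding only what that consumer needs.
val shippingView: Map[String, Any] = Map(
  "name"    -> "Alice",
  "address" -> StreetAddress("1 Main St", "Springfield", "62704")
)

val billingView: Map[String, Any] = Map(
  "name"    -> "Alice",
  "balance" -> Money(12500)
)
```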
Yes. The complexity vs. simplicity issue is important. I'm more inclined to mix paradigms than to go deeply one way or the other, except when - like I said - there is a problem domain that is so natural for objects that it doesn't make any sense to use anything else. Even then, I'll slip in closures for configuring behavior. I might think less about having a custom class for a customer object or something like that. I prefer to mix and match things. For me, that's one of the reasons I like Scala so much - it lets me do whatever I feel comfortable doing in a particular situation, and does it very elegantly, I think.
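"Slipping in closures for configuring behavior" might look like this (a hypothetical `Notifier`, just to show the shape): an ordinary class whose behavior is supplied as a function, rather than through a subclass hierarchy.

```scala
// An object-oriented class configured with a closure instead of
// a subclass per formatting policy. Purely illustrative API.
class Notifier(format: String => String) {
  def notifyUser(msg: String): String = format(msg)
}

val terse  = new Notifier(identity)
val shouty = new Notifier(_.toUpperCase + "!")

// shouty.notifyUser("deploy finished") == "DEPLOY FINISHED!"
```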
29. The way Ruby did things was to be pragmatic, targeting usability more than anything. Its choices were all pragmatic, offering programmers something very usable and very easy to use. But we're also seeing concepts introduced into programming languages that for a long time lived only in theory - concepts like monads and list comprehensions, purely mathematical ideas applied to programming languages. It's interesting: some ideas come from pragmatic work and others from research. What do you think of this combination of the two?
What you reminded me of when you asked this question was one of the things I really love about software, which is that it's a wonderful combination - a sort of balancing act - between art and science. Computers only do what you tell them to; you have to be very precise about telling them to do something. But even within that limitation, if you will, there is a lot of room for creativity and aesthetics and so forth.
I think this is a similar thing, where a lot of the stuff we have in our languages was born out of pragmatism, and also out of what people can understand. I think objects have become so successful in part because they are intuitive at one level: the developer can grasp the idea of objects and see how it maps to the problem domain. On the other hand - and I've certainly seen this lately - there is value in taking that extra step to understand the hard stuff, like monads or list comprehensions or whatever it is.
It sometimes has scary names, but it is actually extremely powerful, and it gives you an extremely good way to conceptualize certain problems - like the notion of putting something in a container, so it's in a box and not leaking all over the place, state transitions in particular. Then you have this powerful tool that you can apply to help you be more principled in your design choices and come up with software that's inherently more robust.
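The "box" idea can be sketched with Scala's `Option`, the simplest monad most developers meet: each step runs only if the previous one produced a value, so the failure handling stays inside the container instead of leaking through the logic (the validation rules here are made up for illustration).

```scala
// Parse a string to an age, staying inside Option rather than
// throwing past the caller.
def parseAge(s: String): Option[Int] =
  try Some(s.trim.toInt)
  catch { case _: NumberFormatException => None }

// A second step in the chain: an illustrative business rule.
def checkAdult(age: Int): Option[Int] =
  if (age >= 18) Some(age) else None

val ok  = parseAge(" 42 ").flatMap(checkAdult)  // Some(42)
val bad = parseAge("oops").flatMap(checkAdult)  // None: failure stayed in the box
```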
I think, like most developers, I really don't want to get woken up in the middle of the night because something is going nuts in production. I hate it when somebody reports a bug, so I am more than happy to invest in hard concepts that I think will help me deliver even more robust software tomorrow than I can today - and do it faster, too, because I have this huge palette of tools to pick from.
30. All this stuff you talk about - functional programming, Scala, Clojure - seems to be nice, but a lot of enterprise decision makers are holding back because they are afraid it is too complex for developers to understand and use. Is that true?
Certainly there are a lot of people in software development organizations who shouldn't be touching code. That's true, and not controversial. I think that, in a lot of cases, organizations should make the hard call of making sure they really do have very good people working on the code, and keep them inspired by giving them cool tools or challenging problems so that they continue to deliver - because unfortunately, if you have mediocre people writing code, they often create a mess that becomes a long-term maintenance burden you can't really get rid of.
I also think, though, that people tend to be a little too skeptical about the quality of their developers. Sometimes people are just demotivated by various problems in their environment, or they're simply bored and need something new. If you encourage them to embrace new things, to learn new techniques and new languages, that can actually re-inspire them to produce good work, and they may rise to the occasion more than you think they can. It's certainly something to consider.
I think that no matter how good your team is, there is still a process of evaluation you should go through with any new technology. In the case of a language like Scala, I would start out by using it, say, for writing tests, or in other non-production uses: writing tools, or writing simulators if I'm trying to simulate an external service for testing. Then build up confidence, decide whether it's right for you, and go from there. I wouldn't shy away from it, and I wouldn't assume that any of the languages we've discussed is too hard, because the benefits are there.
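The "simulator for an external service" he suggests as a first, low-risk use of Scala can be only a few lines (the `PriceService` interface and data here are hypothetical):

```scala
// The interface the production code depends on (illustrative).
trait PriceService {
  def price(sku: String): Double
}

// A test-only simulator: canned answers from a map, a default otherwise.
class FakePriceService(known: Map[String, Double]) extends PriceService {
  def price(sku: String): Double = known.getOrElse(sku, 0.0)
}

val fake = new FakePriceService(Map("A-1" -> 9.99))
// fake.price("A-1") == 9.99; unknown SKUs return the 0.0 default
```

Because it lives only in the test code, it can be thrown away cheaply if the evaluation says Scala isn't the right fit.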
I said earlier that you could take Scala, give it to a Java developer, and that developer could just write Java code in Scala, but do it more succinctly and get the other benefits. I think there are really very few solid arguments for saying that any of these languages is a bad choice on the grounds that the developers can't handle it. I think they'll rise to the occasion if you give them that option.
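A concrete taste of that succinctness: the classic Java value class - constructor, getters, `equals`, `hashCode`, `toString` - collapses to one line of Scala, and a Java developer can use it exactly as they'd use the Java version (the `Customer` type is just an example).

```scala
// One line replaces a page of Java boilerplate: constructor, accessors,
// equals/hashCode/toString all come for free with a case class.
case class Customer(name: String, email: String)

val c = Customer("Alice", "alice@example.com")
// c.name == "Alice", and equality is structural:
// Customer("Alice", "alice@example.com") == c
```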
It's very much a misunderstood and underappreciated language, because it really does have a lot of beautiful ideas, and it supports a lot of functional programming. Most of the bad rap comes from bad and incompatible implementations of the DOM in the browsers. But there are a lot of people, like Google, who are demonstrating just how much you can do with JavaScript if you really take it seriously.
I think it is better, as a general rule, to learn the language and use it natively. I'm not a big fan of all these tools. I could see using them as a stepping stone to get some work done, because maybe you don't have time to learn JavaScript right now, but in the long run you'll probably want to be closer to the metal, so to speak, and actually use JavaScript in all its glory.
I want to do some projects with cloud computing, and also think a bit more about the design of large systems, because there is this tension: is it better to make a lot of relatively simple little applications glued together essentially by networking, or is it better to have larger applications and get the extra speed of being in memory together? I would really like to explore that space a little more. The other thing I've been involved in that we haven't talked much about is aspect-oriented programming, although it didn't pan out to be as big a thing as we all thought it would be five years ago.
I think there are still some very interesting ideas there - things it does well that aren't handled well through other means. I'm still thinking about how it fits, both its advantages and disadvantages, into this whole mix of multi-paradigm programming. I should put in a shameless plug here: I'm one of the guest editors for an IEEE Software issue next year on multi-paradigm programming, and we're looking for contributions; you can go to the IEEE Software site to find out more about that.
I'm actually, finally, going through Structure and Interpretation of Computer Programs, the famous MIT "Wizard book", which is not only a beautiful example of great writing, but really does teach you a lot about algorithms, software design, and principled approaches to different problems. Another one I would have to cite is the Design Patterns book, the Gang of Four book, from 20 years or so ago now - maybe not that long.
Even though patterns sometimes get a bad reputation these days, it really was a wonderful book for thinking about design above the level of classes, and in particular about when something is and isn't appropriate to use. The third book is Robert Martin's Agile Software Development: Principles, Patterns, and Practices, which I think came out in 2003 or so. It's a really great book bringing together a whole bunch of ideas from XP. It was very eye-opening in its discussion of test-driven development, and it motivated the idea of TDD as a design discipline, as well as discussing a number of principles of good software design, and design patterns too. I'd list those three as probably the biggest influences on me.
Yes, there is an IEEE Software issue coming up next year; we now have a request for papers out. It's on the subject of multi-paradigm programming, and I'm one of the guest editors. We're looking for submissions on combining different paradigms - the issues and the benefits and so forth - as well as different languages: for example, the way Emacs uses a C kernel with Lisp as the scripting engine. You can find out more about that at the IEEE Software site. Thanks a lot for talking to me.