Josh Bloch on Java and Programming


1. Josh, what are your thoughts on IBM joining the OpenJDK project?

On the face of it, I think it's a great thing: the more members of OpenJDK, the better. On the other hand, while it was not explicitly included in the press release, it was said at the same time that IBM is dropping support for Apache Harmony, which I consider very sad. I think a healthy, thriving open source platform will have multiple implementations, and I think it's sad that one of them is going to suffer.

   

2. Does IBM contributing to OpenJDK have any ramifications for the JCP itself?

Not in and of itself. On the other hand, as part of the press release, it was said that IBM would be given a leadership role in the JCP, and I don't know exactly what that means, so I guess we'll have to wait and see what Oracle has to say. One can only hope that the JCP remains as open and democratic as possible, so that the language can be evolved in an open, democratic, vendor-neutral fashion.

   

3. You mentioned the idea of conceptual surface area of a language at QCon San Francisco in discussions about potential features being added to the Java language and how they would affect that. What are your thoughts on the Project Coin changes and how those affect the conceptual surface area of Java?

I think that Joe Darcy has done an excellent job choosing features to include in Project Coin. I think, with pretty much no exceptions, they do not significantly add to the conceptual surface area of the language, generally speaking. They just make it possible to do what you've already been doing more concisely or in a manner less likely to cause error. I'm looking at the details, which I happen to have here. Let's see: you've got the diamond operator, and that's a pure win; it's just junk that you don't have to put on the line when you are creating instances of generic types.
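
As a quick illustration of the diamond operator (a sketch in Java 7 syntax, not part of the interview; the variable names are made up):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class DiamondDemo {
        void demo() {
            // Before Project Coin: the type arguments are spelled out on both sides.
            Map<String, List<String>> before = new HashMap<String, List<String>>();

            // With the diamond operator the compiler infers them - the "junk"
            // on the right-hand side simply goes away.
            Map<String, List<String>> after = new HashMap<>();
        }
    }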

Improved exception handling once again just makes things cleaner and means you aren't going to be throwing overly broad exceptions that wrap actual exceptions. Automatic resource management blocks - there's a little bit of complexity there, but on the other hand what they allow you to do is properly handle errors when you are in possession of resources that demand explicit termination. Right now, because it's too hard to do it right, especially when there are multiple resources involved, people don't try or they get it wrong; even the JDK itself got it wrong most of the time.

It means that in fewer lines of code you can guarantee that all the resources you opened are closed, so I think that's a win. Simplified varargs method invocation is a pure win; basically you've taken a meaningless, confusing error message away from the clients of a library, and instead you can give a better error message to the author of the library. I guess that about covers it. There are binary literals - I guess they don't matter one way or the other (if they're important for some people, great!) - and strings in switch, which people thought was already there. That's not a significant increase in the surface area of the language.
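
A sketch of the exception-handling and resource-management features described above, in Java 7 syntax (the class and method names here are illustrative, not from the interview):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    class CoinDemo {
        // Multi-catch: one handler for two unrelated checked exceptions,
        // instead of catching an overly broad supertype that wraps them.
        static void lookUpMain(String className) {
            try {
                Class.forName(className).getMethod("main", String[].class);
            } catch (ClassNotFoundException | NoSuchMethodException e) {
                System.err.println("lookup failed: " + e);
            }
        }

        // try-with-resources (automatic resource management): both streams are
        // closed automatically, in reverse order, even if a read or write throws.
        static void copy(String src, String dst) throws IOException {
            try (FileInputStream in = new FileInputStream(src);
                 FileOutputStream out = new FileOutputStream(dst)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }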

Basically I think that Project Coin does exactly what you wanted. It makes a few small changes (small change, coin - that was the idea) that make the language more pleasant to use without increasing its conceptual surface area appreciably.
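
For completeness, the two remaining small features mentioned - binary literals and strings in switch - look like this (an illustrative sketch):

    class MoreCoinDemo {
        // A binary literal: the bit pattern is visible at a glance.
        static final int READ_WRITE_MASK = 0b0110;

        // Strings in switch: the kind of code many people assumed was already legal Java.
        static int statusCode(String status) {
            switch (status) {
                case "OK":      return 200;
                case "CREATED": return 201;
                default:        return 500;
            }
        }
    }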

   

4. One of the debates, which has been ongoing for the last several years, is the one around adding closures to the Java language. What are your thoughts on that?

I think that some form of closures would be a good thing, because anonymous inner classes are verbose and nobody really likes them all that much. On the other hand, we have certainly seen proposals that add immensely to the conceptual surface area of the language, and I'm glad that Project Lambda seems to be moving away from these overly complex solutions. I hope they come up with something nice.
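
For context, here is the verbosity being referred to - an anonymous inner class - next to the lambda syntax that Project Lambda eventually delivered in Java 8 (a sketch, not part of the interview):

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    class ClosureDemo {
        static void sortByLength(List<String> words) {
            // The anonymous-inner-class form: one line of intent, several lines of ceremony.
            Collections.sort(words, new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return Integer.compare(a.length(), b.length());
                }
            });

            // The lambda form that Project Lambda eventually produced in Java 8.
            Collections.sort(words, (a, b) -> Integer.compare(a.length(), b.length()));
        }
    }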

   

5. Which of the variety of Lambda proposals do you think is best suited to putting into the Java language or do you think it’s one that would contain some of the capabilities of the different implementations which have been discussed?

Clearly it’s going to contain something from each of the proposals, but there are certain things that add a lot of complexity. For example, permitting local and non-local returns and forcing people to think about whether this is a local return or a non-local return. That’s a massive source of complexity. Function types is an interesting one, I was kind of resigned to seeing them in the language, although of course they do add greatly to the complexity. The most recent proposals from Project Lambda don’t include function types. We’ll see what happens there.

I think the most important thing is to make it easier to do what you can already do with anonymous inner classes, just to remove all of that needless verbosity. Another thing, which I see as worse than a mixed blessing, is the ability to close over mutable state from within lambda expressions. Generally that causes more harm than good, and I don't mind at all if it's outright impossible, or if you have to place some syntactic cue (I shouldn't say "annotate") as to what state you wish to mutate from within a lambda expression.

I think the notion that you can access and mutate local variables from within a lambda expression will be alien to present-day Java programmers. What if you are in a call and the call returns before the lambda expression is evaluated? Has the lifetime of that local variable been extended? I think that is far too much complexity for an already complex language.
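
A sketch of the scenario described above, using the Java 8 syntax that was eventually adopted: the enclosing call returns before the lambda is evaluated, and Java ultimately sidestepped the lifetime question by copying the captured value and requiring it to be effectively final (the names here are made up for illustration):

    import java.util.function.Supplier;

    class CaptureDemo {
        static Supplier<Integer> capture() {
            int local = 41;
            // The call to capture() returns before this lambda runs, so the captured
            // value must outlive the stack frame. Java copies it and forbids mutation:
            // writing "local++" inside the lambda would not compile.
            return () -> local + 1;
        }

        public static void main(String[] args) {
            Supplier<Integer> s = capture();   // the call has already returned...
            System.out.println(s.get());       // ...the lambda is evaluated later: prints 42
        }
    }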

   

6. There seems to be a growing interest in the JVM as a platform for multiple languages. What are your thoughts on this?

I think it’s a great thing. An immense investment has been made in the JVM over the years by many companies, whether it’s IBM or Sun, Apache, whatever. There are a bunch of VMs out there and you might as well leverage that investment. I’m also thrilled that it enables all of the language research that’s going on now. I think everyone realizes that we’re going to need some new languages before too long and I think it lowers the bar to be able to do experimentation in that area.

   

7. Do you think that there will be a next big language in the sense that C++ was a big language and it was used by a large proportion of the software development community, or like Java is a big language and it’s used by a large portion of the software development community? Or do you think it’s going to be more of a polyglot trend in the future where no one language will have that kind of major market share?

I’m almost certain that there will be another big language. I suspect it has not yet been written. I think that there are a few gaping holes in the software landscape today - there is a need to be fulfilled - and when languages are written that fill that hole, I think they could be massively successful. One of them, of course, is the multi-core and many-core space. The other is web apps end-to-end. It's still the case that the most successful web applications are made of four or five independent, unrelated languages sort of held together with duct tape and chewing gum. Eventually we may have a language that actually targets the web as the platform.

   

8. My sense of why Java became so popular and such a widely used language is that it addressed many major pain points in C++, such as implicit type conversion and manual memory management; garbage collection and these kinds of things were just done as part of the VM. What do you think some of those major pain points are right now? You mentioned multi-core. What are the major things that involve a lot of hard work in Java which could be abstracted away by this next language?

Another pain point that was addressed by Java was the sheer complexity of C++. As a language ages, it inevitably becomes more complex, and when you start again, you have the opportunity to ditch the features that either didn't work or have been made obsolete by more recent innovations. So I think one pain point is the growing complexity of the Java platform, and the one that I mentioned - making effective use of many-core processors - is a pain point. On the other hand, I don't think anybody really knows the solution to that yet, and I think it quite likely that when we really do come upon a solution, it may involve fundamental changes to the programming paradigm.

But that remains to be seen; that really is in the domain of language research. I should also point out that there are all manner of small pain points. If you look through the book that I wrote with Neal Gafter, Java Puzzlers, it's chock full of little pain points. For example, why does Java make bytes signed? There is no good reason for that. It causes so much pain. Next time around you make bytes unsigned, and if you go through the puzzlers you can, page by page, find out what to do differently next time. I'm not saying that you can apply the advice thoughtlessly and you'll come up with a good programming language.
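
To make the byte example concrete, a Puzzlers-style sketch: because Java's byte is signed, widening sign-extends, and the usual workaround is masking with 0xFF.

    class SignedByteDemo {
        public static void main(String[] args) {
            byte b = (byte) 0xFF;      // the bit pattern 11111111
            int widened = b;           // sign-extended: -1, not 255
            int masked = b & 0xFF;     // the idiomatic workaround: 255
            System.out.println(widened + " " + masked);   // prints "-1 255"
        }
    }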

I think there are some real open questions. For example, many traps and pitfalls come from numeric overflow. So what do you do? People from the Lisp community think that the best idea is to automatically go to a larger type so you simply don't lose information. I'm skeptical; I think you'll get memory leaks, basically, where you the programmer think of a number as something small, but in fact it grows and grows and grows until your process keels over. Another option is to throw exceptions when you have numeric overflow. I kind of like that, so long as it's the default and you can get around it. You can say, "In this case I really would like to do math mod 2^32 [2 to the 32]; I know what I'm doing, trust me!"

That’s a reasonable choice, but I’m not minimizing the art of language design. Language design is more than a whole bunch of pointwise decisions. A good language has conceptual integrity and it’s a work of art, frankly.
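
To illustrate the overflow behavior discussed above: Java's default int arithmetic silently wraps mod 2^32, while the exact-arithmetic helpers that arrived later (Java 8's Math.addExact) take the throw-an-exception route. A sketch:

    class OverflowDemo {
        public static void main(String[] args) {
            int big = Integer.MAX_VALUE;

            // Default semantics: silent wraparound mod 2^32.
            System.out.println(big + 1);                  // prints -2147483648

            // Math.addExact throws instead of wrapping.
            try {
                System.out.println(Math.addExact(big, 1));
            } catch (ArithmeticException e) {
                System.out.println("overflow detected");
            }
        }
    }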

   

9. Speaking of language design, do you think that having a single person who holds the vision and the direction of the language helps to ensure consistent design? I'm thinking, for instance, of Anders Hejlsberg or Yukihiro Matsumoto, with PHP as a counterpoint. Do you think that it helps to keep that consistency?

Undoubtedly. I think that there is a reason that most great works of art are identified with a single individual, whether it’s a programming language or a painting or a piano concerto or even a house. I think that if you have one person rather than say a committee, you have a much greater chance of producing something that is conceptually pure.

   

10. How do you foresee Java development changing over the next few years as mobile development becomes increasingly common?

I think that people may do a lot more mobile development using the Java programming language, and I see this as a good thing. The Java programming language and libraries require a certain amount of computational horsepower, but we've entered a phase where mobile devices have more computing power than the general-purpose computers on which Java was developed. Interestingly, by the way, it has come full circle: Java was originally developed for the Star7, which was a mobile device way ahead of its time.

   

11. Java started off in the mobile space and then moved, I guess to the desktop through the web via applets and now it’s in essence returning to mobile?

It may return to mobile, but it will clearly continue to be used on the server side where it has seen its greatest successes. When we say it started in the mobile space - it started in the mobile space when there was no mobile space. The Star7 was an experimental device and it was actually made by taking the microSPARC and folding the motherboard in half. It was just a proof of concept. At this point any old cell phone has as much compute power as that microSPARC did.

   

12. What changes do you think are going to occur as CPU speed remains fixed or possibly even drops a little bit and the number of cores increases?

People are going to have to learn how to make effective use of those cores, or they are going to have to get used to the fact that programs aren't going to get any faster. We had this free ride - it's almost trite to say at this point, but Moore's Law gave us a free ride for a couple of decades where programs would just get faster with no effort on our part. And that's over. At this point we are going to have to change the programs to make them run faster. There are some cases where it's reasonably easy to do - so-called embarrassingly parallelizable problems - but many problems are not embarrassingly parallelizable, and I really do see it as an open question how we're going to make use of those processors.

A lot of excellent work has been done. I would single out Doug Lea's java.util.concurrent, and in particular his Fork/Join framework, as examples of that. They help a lot, but they take us only so far. Years ago I had a conversation with Bob Colwell, the lead designer of the P6 core (the Pentium Pro, and on up through the Pentium III, were built using that core), and he told me he was really worried because, for the first time, the mainstream chip industry (by which I guess he meant Intel and AMD) was producing a product for which no demand had been established.

They were producing a product because it was what they knew how to do rather than because it was what the customers had been asking for. At that time I thought "It will be OK. The techniques that we’re working on now will enable us to make use of these processors." Now I’m not so sure.
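
For reference, a minimal sketch of the Fork/Join style mentioned above (the framework shipped in Java 7 as part of java.util.concurrent; the threshold and workload here are illustrative only):

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Sums an array by recursively splitting it until the chunks are small
    // enough to add up directly - an embarrassingly parallelizable problem.
    class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 10_000;   // illustrative cutoff
        private final long[] data;
        private final int lo, hi;

        SumTask(long[] data, int lo, int hi) {
            this.data = data;
            this.lo = lo;
            this.hi = hi;
        }

        @Override
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {
                long sum = 0;
                for (int i = lo; i < hi; i++) {
                    sum += data[i];
                }
                return sum;
            }
            int mid = (lo + hi) >>> 1;
            SumTask left = new SumTask(data, lo, mid);
            left.fork();                                   // run the left half asynchronously
            long rightSum = new SumTask(data, mid, hi).compute();
            return rightSum + left.join();                 // wait for the left half and combine
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            Arrays.fill(data, 1L);
            long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
            System.out.println(total);                     // prints 1000000
        }
    }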

   

13. A lot of developers do use languages that are in the C family - C++, Java, C# - and there are many languages which have different concepts. How can you use these modern languages to learn, and how can those concepts be translated back into languages that are in the C family?

I think you almost answered your question yourself. There is only one way to learn a language, and that's to code in it. If you program in a modern, or even an ancient, language like Lisp or Scheme or something, you will learn a bunch of concepts that are not directly present in the C family of languages that you described. But they give you another way of thinking about programming problems, and the techniques that you learn in those languages (whether it's pure functional programming or whatever) can be mapped onto the curly-brace languages. I think the more languages you are familiar with, the more options you have.

It can also be frustrating, because sometimes it is more verbose to express the same concepts, and you can find yourself fighting a language if it has an easy way of doing something but you want to do it in a way that was easier in some other language. That's the potential downside, but generally speaking, I think the more languages you learn, the better. Teachers at the high school and college level should certainly be teaching their students a wide variety of languages and encouraging them to explore and even to write their own languages. I expect that Darwinian evolution will take place on the new-language-construct front, and constructs that prove their worth will show up in mainstream languages eventually.
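
As a toy example of carrying a functional idiom back into a curly-brace language, here is a Lisp/Haskell-style right fold written in Java (Java 8 syntax; the method name foldRight is made up for this sketch):

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.BinaryOperator;

    class FoldDemo {
        // Combine the head with the fold of the tail, bottoming out at the
        // identity value - recursion instead of a mutable accumulator.
        static <T> T foldRight(List<T> xs, T identity, BinaryOperator<T> f) {
            if (xs.isEmpty()) {
                return identity;
            }
            return f.apply(xs.get(0), foldRight(xs.subList(1, xs.size()), identity, f));
        }

        public static void main(String[] args) {
            int sum = foldRight(Arrays.asList(1, 2, 3, 4), 0, Integer::sum);
            System.out.println(sum);   // prints 10
        }
    }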

   

14. Are there particular languages or families of languages that you would recommend for certain concepts?

I think everyone should do some programming in a functional language, and I don't think it matters so much which one; I don't know them all, so maybe I'm wrong here, but Scheme, Haskell, Clojure, whatever - everyone should use a language like that. This morning at the programming languages panel, when I was asked a similar question, I answered "assembly language." I do think it's worth programming in an assembly language, even if you'll never do it professionally, because everything you do will end up executing in assembly language. It connects you to the ground and gives you some chance of predicting how fast the programs you write will run, which is an increasingly difficult thing to do as the stacks that we use grow more complex and have more layers.

I think one assembly language - presumably x86 or ARM, something like that - is worth playing around with. I'm not sure what else. Some people think it's important to program in Smalltalk. I actually haven't done it, so I can't really speak to that one, but it's probably worth doing.

   

15. Thank you very much.

Thank you.

Dec 17, 2010
