Debate and more Insights on Dynamic vs. Static Languages
The transcript of Steve Yegge’s presentation on dynamic languages at Stanford University, which he posted on his blog, triggered many reactions in the blogosphere. The transcript provides rather extensive insights into the dynamic languages that Steve advocates. According to him, static languages have reached their limits, and dynamic languages today offer comparatively more opportunities. Even though he acknowledges the existing issues with performance, maintainability and the lack of tooling, he believes they can be solved, and that what prevents dynamic languages from more widespread use is rather the industry’s reluctance to adopt new languages.
Cedric Beust responded to Steve Yegge’s post to express his disagreement with many of Steve’s arguments, in particular with his assertion that it is “not harder to build dynamic tools than for static languages, just different”:
It is different *and* harder (and in some cases, impossible). Your point regarding the fact that static refactoring doesn't cover 100% of the cases is well taken, but it's 1) darn close to 100% and 2) getting closer to it much faster than any dynamic tool ever could.
What will keep preventing dynamically typed languages from displacing statically typed ones in large scale software is the simple fact that it's impossible to make sense of a giant ball of typeless source files, which causes automatic refactorings to be unreliable, hence hardly applicable, which in turn makes developers scared of refactoring.
There is one point, however, on which Cedric and Steve seem to agree, even if they use different terms to explain it. Both believe that in today’s industry it is extremely difficult to introduce new languages on large-scale development projects. Nevertheless, Ted Neward, who responded to Steve’s and Cedric’s posts, disagrees with this observation. He argues that “the barriers to entry to create your own language have never been lower than today”. He recognizes “the cost of deploying a new platform into the IT.” Still, he believes that “there's a lot of project work that goes on, that has room for some experimentation and experience-gathering before utilizing something on the next big project”. Such experimentation can be facilitated by making it possible to run the language on one of the available execution engines:
This is where running on top of one of the existing execution environments (the JVM or the CLR in particular) becomes so powerful--the actual deployment platform doesn't change, and the IT guys remain more or less disconnected from the whole scenario. This is the principal advantage JRuby and IronPython and Jython and IronRuby will have over their native-interpreted counterparts.
To conclude, Ted Neward says that “at the end of the day, the whole static-vs-dynamic thing […] doesn't matter.” One should simply choose languages that can:
- “Provide the ability to express the concept in your head, and
- Provide the ability to evolve as the concepts in your head evolve”
Ola Bini’s reaction goes along the same lines, as he talks about polyglot programming. He believes that each kind of language – strong static, weak static and dynamic – has advantages and downsides, and that they are actually too different to be compared. One should use the language whose features best fit the goal:
These languages are all useful, for different things. A good programmer uses his common sense to provide the best value possible. That includes choosing the best language for the job. If Ruby allows you to provide functionality 5 times faster than the equivalent functionality with Java, you need to think about whether this is acceptable or not. On the one hand, Java has IDEs that make maintainability easier, but with the Ruby codebase you will end up maintaining a fifth of the size of the Java code base. Is that trade off acceptable? In some cases yes, in some cases no. In many cases the best solution is a hybrid one.
According to Greg Young, the discussion on static vs. dynamic languages should also take into consideration “the concept of static verification of more than just types”, and he points to the opportunities offered by the design-by-contract (DbC) approach. He explains the added value of using DbC and suggests that static languages are more appropriate for it:
[…] it is reasonably easy with dynamic and static languages in general as can be illustrated by the existence of the DLR. There is however a much larger disconnect between the world of theorem proving and dynamic languages. Dynamic languages are in their definition runtime defined and static verification is in its definition compile-time defined, the use of a dynamic language makes the concept of statically verifying your code at compile time impossible. To try to verify dynamic code at compile time would likely walk you straight into the halting problem just like it would for many kinds of tooling.
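To make the DbC idea concrete, here is a minimal sketch in Java with an invented `Account` class. The contracts are checked at runtime with plain conditionals and `assert`; the point Greg Young makes is that static verifiers (Spec# is one example for .NET) aim to discharge such checks at compile time, which presupposes the static type information a dynamic language lacks.

```java
// Minimal design-by-contract sketch: preconditions, a postcondition and a
// class invariant checked at runtime. A static verifier would try to prove
// these at compile time instead. The Account class is purely illustrative.
public class Account {
    private int balance; // class invariant: balance >= 0

    public Account(int initialBalance) {
        if (initialBalance < 0) // precondition
            throw new IllegalArgumentException("precondition: initialBalance >= 0");
        this.balance = initialBalance;
    }

    public void withdraw(int amount) {
        if (amount <= 0 || amount > balance) // precondition
            throw new IllegalArgumentException("precondition violated");
        int oldBalance = balance;
        balance -= amount;
        assert balance == oldBalance - amount; // postcondition
        assert balance >= 0;                   // class invariant preserved
    }

    public int getBalance() { return balance; }

    public static void main(String[] args) {
        Account a = new Account(100);
        a.withdraw(30);
        System.out.println(a.getBalance()); // prints 70
    }
}
```

Run with `java -ea Account` so the `assert` checks are enabled; without `-ea` the JVM skips them, which is exactly the runtime-vs-compile-time gap Young is pointing at.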
Static verification and design by contract rely on theorems that are mostly based on a deterministic approach. Steve Yegge, however, argued in his presentation that verification can be done in a heuristic way, exploiting the fact that “the runtime has all the information”. He uses the example of natural language processing and grammar checking to show that a probabilistic approach may work a lot better and be computationally cheaper:
[…] Microsoft Word's grammar checker does it, where you'd have a Chomsky grammar. […] And you're actually going in and you're doing something like a compiler does, trying to derive the sentence structure. […]
None of that worked! It all became way too computationally expensive, plus the languages kept changing, and the idioms and all that. Instead, […] they [Google] do it all probabilistically.
[…] you just obsoleted a decade of research by saying, "Well, we're just gonna kind of wing it, probabilistically" — […] they get these big data sets of documents that have been translated, in a whole bunch of different languages, and they run a bunch of machine learning over it, and they can actually match your sentence in there to one with a high probability of it being this translation.
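The probabilistic approach Yegge describes can be sketched very roughly: instead of parsing a sentence against a grammar, look it up against candidate translations and pick the one with the highest learned probability. The phrases and probabilities below are invented for illustration; real systems derive them from large aligned corpora.

```java
import java.util.Map;

// Toy sketch of probabilistic matching: no grammar, no parse tree --
// just pick the candidate translation with the highest probability.
// Candidates and scores are made up for the example.
public class ToyTranslator {
    private static final Map<String, Double> CANDIDATES = Map.of(
        "the house is small", 0.82,
        "the home is small", 0.11,
        "the house is little", 0.07
    );

    // Return the candidate with the maximum probability.
    static String best() {
        return CANDIDATES.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(best()); // the most probable candidate
    }
}
```

The "decade of research" Yegge mentions went into deriving structure; the argmax over a big learned table sidesteps that entirely, which is why it is computationally cheaper.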