Should languages be fully flexible, letting developers tweak them as they like and trusting them to work responsibly? Or should a language's design impose clear constraints to prevent mistakes that produce bad code, hard to maintain or to read?
Bruce Eckel believes that some features are so complex they end up being mostly unused:
In C++, which has no run-time model (everything compiles to raw code, another legacy of C compatibility), template metaprogramming appeared. It was such a struggle that it remained very complex, and almost no one figured out how to do it.
Java does have a runtime model and even a way to perform dynamic modifications on code, but the static type checking of the language is so onerous that raw metaprogramming was almost as hard to do as in C++.
Clearly the need kept reasserting itself, because first we saw Aspect-Oriented Programming (AOP), which turned out to be a bit too complex for most programmers.
Like Smalltalk, everything in Ruby is fungible (this is the very thing that makes dynamic languages so scary to folks who are used to static languages).
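As a concrete illustration of that fungibility (a minimal sketch, not code from Eckel's post): in Ruby, any class, including core ones like String, can be reopened and extended at runtime.

```ruby
# Reopen a core class and add a brand-new method to every String
# in the running program. The method name "shout" is illustrative.
class String
  def shout
    upcase + "!"
  end
end

puts "hello".shout  # => HELLO!
```

This is the same mechanism libraries use to "change any class in the library," for better or worse.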
Michael Feathers thinks Eckel is suggesting constraining the language:
Language designers do persist in this sort of reasoning — this notion that some things should not be permitted. If you want to see an example of this style of reasoning see the metaprogramming section in this blog of Bruce Eckel’s.
Rather than setting up safety nets, Feathers advocates trusting developers, which promotes the ethic of responsibility that he believes exists in the Ruby community:
In some languages you get the sense that the language designer is telling you that some things are very dangerous, so dangerous that we should prohibit them and have tools available to prohibit misuse of those features. As a result, the entire community spends a lot of time on prescriptive advice and workarounds. And, if the language doesn’t provide all of the features needed to lock things down in the way people are accustomed to, they become irate.
I just haven’t noticed this in Ruby culture.
In Ruby, you can do nearly anything imaginable. You can change any class in the library, weave in aspect-y behavior and do absolutely insane things with meta-programming. In other words, it’s all on you. You are responsible. If something goes wrong you have only yourself to blame. You can’t blame the language designer for not adding a feature because you can do it yourself, and you can’t blame him for getting in your way because he didn’t.
So, why aren’t more people crashing and burning? I think there is a very good reason. When you can do anything, you have to become more responsible. You own your fate. And, that, I think, is a situation which promotes responsibility.
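The "aspect-y behavior" Feathers mentions can be sketched with a method wrapper. This example uses Module#prepend; the names Logging and Service are illustrative, not taken from any quoted library:

```ruby
# Weave logging around an existing method without editing its source.
module Logging
  def process(data)
    puts "before: #{data.inspect}"
    result = super  # call the original implementation
    puts "after: #{result.inspect}"
    result
  end
end

class Service
  prepend Logging  # Logging#process now wraps Service#process
  def process(data)
    data.upcase
  end
end

Service.new.process("x")  # prints the before/after lines, returns "X"
```

Because the wrapper sits in the method lookup chain, any caller gets the woven-in behavior transparently, which is both the power and the danger being debated.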
Coda Hale does not think the Ruby community has a better sense of responsibility, but rather one of permissiveness:
I’d really like to believe that Ruby has a culture of responsibility, but I don’t think that’s accurate. I’ve debugged plenty of problems in Ruby applications which were caused by a third-party library taking liberties with other people’s code—turning NilClass into a black hole or using alias_method_chain without perfectly duplicating the underlying method signature. Each time it took effort to convince the offending library’s author to fix the problem, and each time I had to hot-patch the code myself until the library was updated. That’s not responsibility in the positive sense; it’s responsibility in the sense of “no one else will fix this if you don’t.”
I think that Ruby has more of a culture of permissiveness than responsibility. Just about anything is acceptable, and rather than complaining about another library’s bad behavior you should write your own version with its own bad behavior.
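The "NilClass black hole" Hale describes can be reconstructed in a few lines. This is a hypothetical sketch of the anti-pattern, not code from any particular library:

```ruby
# The anti-pattern: patch nil so that every unknown method call
# silently returns nil instead of raising NoMethodError.
class NilClass
  def method_missing(name, *args)
    nil  # swallow the call; errors no longer surface here
  end
end

user = nil
user.address         # => nil, where plain Ruby would raise NoMethodError
user.address.street  # chained calls keep returning nil, hiding the real bug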
Niclas Nilsson agrees with Feathers: leave developers free, and they will feel responsible:
This is a discussion I for some reason end up in all the time, with people in all kinds of different contexts. “Isn’t that dangerous?”. The answer I give is often “Yes, it is. But so are cars, knives and medication”. Almost anything useful can be misused and will be misused, but we seldom do it, or we learn fast if we do, and in software we have unmatched undo-capabilities of all sorts.
I’m not willing to pay the price for (false) protection as long as the price is less power, especially since such protections are circumvented anyway. You’re not supposed to change a class at runtime in Java, so to get around that, people turn to byte code weaving. Barrier broken, but at a high cost and with high complexity. I’d much rather trust responsibility, good habits and the safety nets I set up anyway to protect myself from logical mistakes.
Liberty and responsibility go hand in hand.
Keith Braithwaite gives the example of the Lisp community, which did not patronize its users, and whose members learned how to avoid pitfalls on their own:
What lessons might there be from the history of the Lisp community? Lisp (along with Smalltalk and assembler) is about maximally non-patronizing of its users. I seem to recall reading some Dick Gabriel piece that explained how Lisp programmers ended up self-censoring, deeming certain kinds of template cleverness to be in poor taste—and taking care not to inflict those clevernesses upon their peers. They could do (and write code to do to itself) pretty much anything, but they chose not to do certain things.
Glenn Vanderburg believes that language constraints do not stop developers from making mistakes:
Weak developers will move heaven and earth to do the wrong thing. You can’t limit the damage they do by locking up the sharp tools. They’ll just swing the blunt tools harder.
If you want your team to produce great work and take responsibility for their decisions, give them powerful tools.
Aslam Khan considers that newbies feel the need for protection mechanisms but want them removed once they become more experienced:
Many (language designers) make the assumption that people need to be “controlled”. Indeed, newbies do prefer a rule/recipe/formula-based learning culture, and languages that impose all sorts of “safety nets” around “dangerous” things serve that group well. Until the newbie is no longer a newbie, and then the constraints become impositions.
Ward Bell brings to attention that people are not very responsible:
Seductive case … who could be against responsibility and learning? ... but years of reading actually existing code tell us both that others are not responsible and that we ourselves are often lax. How has Ruby changed our nature?
I wrote commercial applications in APL back in the ‘70s and ‘80s. Very dynamic. Very easy to morph code written by others. You could write new APL as strings and execute them with the thorn operator. Very little interpreter support (there was no compiler) to catch or warn.
What wonderful messes we created. It was cool that we could fix almost as fast as we wrote. But you had to be damn good to read anyone’s code (including your own).
T. Powell considers that a developer can be lame in any language, so it is not really a matter of language but rather of attitude:
You can be lame or great in any language. … Sure, a language influences things and you often see the hand of the designer or community in place, but you can be an idiot in Java or Ruby or Fortran or ….
What is your take: should we have no-constraints languages and trust the developer, or should we have safety nets in place?
Community comments
Missing aspect in this discussion.
by Olivier Hubaut
I think that one aspect, if maybe not THE one, is missing from all of the presented considerations. I don't know a single good developer who will use only one language to achieve all of his tasks. Every language has its pros and cons (some have more than others, but that's not the point), but more importantly, every language (even LOLCODE) was created with a certain vision of how to solve SOME problems.
The issue is that, for fun, out of fear or out of ignorance, some people have tried to go beyond the initial scope of the language.
And so on.
I'm not saying that breaking these barriers is bad. It's just that it is really rare that you can extend the domain of excellence of a language without either introducing weaknesses in its overall design or breaking backward compatibility. And this last option is almost never chosen, for many economic and emotional reasons (most people hate change).
A new or weak developer will try to stay with the same language, while an experienced one will try to choose the most appropriate language he knows. Because sometimes constraints are worth it, and sometimes not.
Ruby is permissive but it has a wise early adopter community so far...
by Raphaël Valyi
Hi,
I think you guys missed an essential point in that interesting article: yes, Ruby is one of the most permissive languages around, and yes, it makes mistakes easy, while frameworks like Rails tend to make it really obvious what good practices should be.
So then, some argue that the Ruby community is especially wise at encouraging good practices. Very true, but one should also keep one thing in mind: currently, Ruby coders are still essentially people who have chosen to escape J2EE or PHP for Ruby. Those are not the mass of stupid average coders who tend to follow the corporate conventions. And if the Ruby community is still so wise, IMHO this is very much because it isn't composed of average PHP developer guys.
But as languages move on, I expect Ruby and Rails adoption to increase once managers finally get that J2EE is really overkill for 90% of web development and that PHP doesn't lead to sustainable coding. When that happens, just as Cobol, Smalltalk and C++ started to fall, I expect that the average Ruby coder will really not be as skilled as today's. Then I think we will really start to see the pitfalls of a language that is so permissive.
So IMHO, a good tradeoff might again lie in JVM languages and polyglotism: things that should be strict and can afford to be developed slowly might continue to be written in static languages such as Java or possibly Scala/Clojure, while integration code will be better written in JRuby on top of those lower-level libs. Meanwhile, as you said, some smart frameworks such as Rails might continue to exist in a permissive dynamic language, on the condition that the static compiler guards are replaced by a smart, wide community adopting very large test suites.
So in a word, I doubt Ruby itself is a silver bullet just because its community is more responsible. It's true today, but it won't be true tomorrow, and language guards will still be required at some point.
Raphaël Valyi
It's not a binary decision
by Tero Vaananen
The topic of the debate is a bit polarizing, as everyone knows it is not a binary decision between constraints and responsibility.
Any tool, be it a programming language or a kitchen knife, has aspects of design constraints and user responsibility.
Any tool should be practical. It should do the job it was designed to do, and as such it will have constraints. The constraints should not be there to hinder the user from doing the intended tasks; they should help the user do the job. The user can still misuse the tool, but the intention of the tool should be intuitive enough that anyone can learn it with little practice and understand the principles.
A knife has a handle and a blade. You can grab the blade, but you quickly learn you should hold it by the handle. The handle also protects the user while the knife is being used, so that your hand does not slip onto the blade, and it aids in proper cutting. You can still cut your fingers, stab yourself, etc., but that is beyond the design of the tool. The user has the responsibility to learn simple rules about knives and to practice using them adequately.
That being said, the tool should not be so obscure and complicated that it is hard even for the experts to understand why it does or does not work. For that reason, there have to be constraints that simplify the usage of the tool. If the usage is simple but the consequences are complicated, it's still wrong. Consequences should be straightforward and unambiguous.
In that light, something like C++ is too complicated. Ruby is too loosely defined, as problems are too easy to create and consequences hard to track.
constraints are not just for protection
by Eelco Hillenius
A more constrained language makes it easier to read what code does, and generally makes it easier to build tooling around it.
It's also interesting to me to read that the Ruby culture somehow avoids the problems which C++ is infamous for. Is this because Ruby's user base doesn't include the plebs (I would think the opposite, since many have a PHP background), do we have a wiser generation of programmers who learned from mistakes in the past, or is Ruby's freedom simply better to handle than C++'s?
Ideally, I'd like a language where rules are strong and explicit, but can be broken if it makes sense and without a lot of hacking; a bit like poetry or songwriting maybe. What exactly that looks like is a tough question... maybe Scala is the closest practical thing currently available :-)
Re: It's not a binary decision
by Abel Avram
Excellent metaphor: the language as a knife. Some features are like that. A child is more likely to hurt himself or another with a knife, while an adult is less likely to do so, but the possibility of harm remains and sometimes it happens. Beginners misused C++ in many ways, and language designers reacted to that by introducing constraints. See Java. An experienced programmer instinctively avoids the pitfalls he has encountered in the past, so he prefers freedom.
Indeed, it is not a binary decision, that's why the poll contains 3 options, not 2. Also, the quoted reactions represent different positions.
Re: It's not a binary decision
by James Watson
I think this is a good, if a bit simplistic, analogy. One thing I would like to add is that while the design of the knife cannot prevent accidents in general, not all knives are equally well designed.
There are features of a knife that can make it safer. For one, only one edge of the knife should be a cutting edge, and it should face away from the user. The shape and texture of the handle can make a big difference in preventing accidents (for example, I have knives with handles that are non-slip even when wet). Somewhat paradoxically, the sharper the knife, the safer it is when used properly. Larger knives are generally safer, which is often unintuitive. Knives that fold up are less safe, especially if they don't lock open.
I think the take-away on this is that not all features are good even if they seem desirable at first glance. Not all restrictions limit the usefulness of a language or make it harder to use (I have no need for goto.) Powerful features are not necessarily dangerous. What I am really getting at is that this is not a simple black and white question and I don't think there will ever be one language that fits all needs.
Re: Ruby is permissive but it has a wise early adopter community so far...
by Sandesh T
"Those are not the mass of stupid average coders who tend to follow the corporate conventions."....Wow, talk about the Fundamental Attribution Error in action....
non-linear
by Dmitry Tsygankov
Not only is it a non-binary decision, it is also a non-linear one. Do we have to choose between no constraints at all and a strict set of rules put into the language by the designer? Is there an alternative solution, like, maybe, programmable constraints? Recent developments in functional languages suggest that such an alternative could exist. Take Haskell with its type-level functions, for example. There are also some experimental languages like Agda where one can define functions of both types and values.
And, yes, I do agree with Eelco that constraints are not there to protect me from lame programmers. First, more constraints mean that it's easier for me to reason about the code, even code I wrote myself earlier. Second, more constraints at the type level mean fewer unit tests: I don't have to write tests for dumb errors that the compiler can catch before the code runs. Third, more constraints mean more comprehensive tools available for the language.
I don't want a language designer telling me what constraints I should use, but I also wouldn't like him to tell me that constraints are bad in general, so, sorry, you can't have them. What I really want is programmable constraints that I can grow and get more sophisticated together with the rest of the code.
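Staying with Ruby, the language most discussed above: Ruby has since gained refinements, which are one concrete form of the opt-in constraint Tsygankov asks for, because a core-class patch applies only where it is explicitly activated. A minimal sketch (module and method names are illustrative):

```ruby
# A monkey patch scoped by a refinement instead of applied globally.
module Whispering
  refine String do
    def whisper
      downcase + "..."
    end
  end
end

class Greeter
  using Whispering  # the patch is visible only in this lexical scope
  def greet(name)
    name.whisper
  end
end

puts Greeter.new.greet("Hi")  # => hi...

begin
  "Hi".whisper  # outside the refinement's scope the patch does not exist
rescue NoMethodError
  puts "whisper is not defined here"
end
```

The constraint is programmable in exactly the sense asked for: the author of the patch decides where it may take effect, and the rest of the program keeps the unmodified class.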
Re: Missing aspect in this discussion.
by Marc Stock
+1
This one-size-fits-all mentality is sad, especially in this day and age. The language is no longer the platform, folks. Languages are just tools, and you should use the right tool for the right job.