Should We Rely on Language Constraints or Responsibility?

Should languages be fully flexible, allowing developers to tweak them as they like and trusting that they will use that freedom responsibly, or should clear constraints be built in from the design phase to prevent mistakes that produce code that is hard to maintain or to read?

Bruce Eckel notes that metaprogramming facilities in statically compiled languages have been so complex that they went mostly unused:

In C++, which has no run-time model (everything compiles to raw code, another legacy of C compatibility), template metaprogramming appeared. Because it's such a struggle, template metaprogramming was very complex and almost no one figured out how to do it.

Java does have a runtime model and even a way to perform dynamic modifications on code, but the static type checking of the language is so onerous that raw metaprogramming was almost as hard to do as in C++.

Clearly the need kept reasserting itself, because first we saw Aspect-Oriented Programming (AOP), which turned out to be a bit too complex for most programmers.

Like Smalltalk, everything in Ruby is fungible (this is the very thing that makes dynamic languages so scary to folks who are used to static languages).
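
To make Eckel's contrast concrete, here is a minimal sketch (not taken from his post) of the kind of runtime metaprogramming Ruby allows out of the box: methods generated on the fly with define_method, with no templates or bytecode manipulation involved. The class and attribute names are invented for illustration.

```ruby
# A hypothetical sketch of Ruby metaprogramming: generating accessor
# methods at runtime with define_method.
class Settings
  %w[host port timeout].each do |name|
    define_method(name)       { instance_variable_get("@#{name}") }
    define_method("#{name}=") { |value| instance_variable_set("@#{name}", value) }
  end
end

s = Settings.new
s.host = "example.com"
puts s.host   # => "example.com"
```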

Michael Feathers reads Eckel as suggesting that the language should be constrained:

Language designers do persist in this sort of reasoning — this notion that some things should not be permitted. If you want to see an example of this style of reasoning see the metaprogramming section in this blog of Bruce Eckel’s.

Rather than setting up safety nets, Feathers advocates trusting developers, which promotes the ethic of responsibility he believes exists in the Ruby community:

In some languages you get the sense that the language designer is telling you that some things are very dangerous, so dangerous that we should prohibit them and have tools available to prohibit misuse of those features. As a result, the entire community spends a lot of time on prescriptive advice and workarounds. And, if the language doesn’t provide all of the features needed to lock things down in the way people are accustomed to, they become irate.

I just haven’t noticed this in Ruby culture.

In Ruby, you can do nearly anything imaginable. You can change any class in the library, weave in aspect-y behavior and do absolutely insane things with meta-programming. In other words, it’s all on you. You are responsible. If something goes wrong you have only yourself to blame. You can’t blame the language designer for not adding a feature because you can do it yourself, and you can’t blame him for getting in your way because he didn’t.

So, why aren’t more people crashing and burning? I think there is a very good reason. When you can do anything, you have to become more responsible. You own your fate. And, that, I think, is a situation which promotes responsibility.
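
As an illustration of the freedom Feathers describes (not an example from his post), the sketch below reopens core classes and weaves aspect-like behavior around an existing method:

```ruby
# A hypothetical sketch of Ruby's open classes: any class, even a core
# one, can be reopened, and existing methods can be wrapped with
# aspect-like "advice".
class String
  def shout          # add a brand-new method to every String
    upcase + "!"
  end
end

class Array
  alias_method :push_without_logging, :push
  def push(*items)   # wrap the original push with logging behavior
    puts "pushing #{items.inspect}"
    push_without_logging(*items)
  end
end

puts "hello".shout   # => "HELLO!"
[].push(1, 2)        # prints "pushing [1, 2]" before delegating
```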

Coda Hale does not think the Ruby community has a culture of responsibility so much as one of permissiveness:

I’d really like to believe that Ruby has a culture of responsibility, but I don’t think that’s accurate. I’ve debugged plenty of problems in Ruby applications which were caused by a third-party library taking liberties with other people’s code—turning NilClass into a black hole or using alias_method_chain without perfectly duplicating the underlying method signature. Each time it took effort to convince the offending library’s author to fix the problem, and each time I had to hot-patch the code myself until the library was updated. That’s not responsibility in the positive sense; it’s responsibility in the sense of “no one else will fix this if you don’t.”

I think that Ruby has more of a culture of permissiveness than responsibility. Just about anything is acceptable, and rather than complaining about another library’s bad behavior you should write your own version with its own bad behavior.
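
The "NilClass as a black hole" patch Hale mentions looks roughly like the hypothetical sketch below: once a library loads it, nil silently answers every message instead of raising NoMethodError, and the failure surfaces far from its cause.

```ruby
# A hypothetical reconstruction of the kind of third-party patch Hale
# complains about: NilClass is turned into a "black hole" that responds
# to any message with nil.
class NilClass
  def method_missing(name, *args, &block)
    nil
  end
end

user = nil
# Without the patch this line raises NoMethodError and points at the bug;
# with it, the nil just flows onward and the error shows up somewhere else.
puts user.address.inspect   # prints "nil"
```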

Niclas Nilsson agrees with Feathers that developers should be left free, which makes them feel responsible:

This is a discussion I for some reason end up in all the time with people in all kinds of different contexts. “Isn’t that dangerous?”. The answer I give is often “Yes, it is. But so are cars, knives and medication”. Almost anything useful can be misused and will be misused, but we seldom do it or learn fast if we do, and in software we have unmatched undo-capabilities of all sorts.

I’m not willing to pay the price for (false) protection as long as the price is less power, especially since they are circumvented anyway. You’re not supposed to change a class at runtime in Java, so to solve that, people turn to byte code weaving. Barrier broken, but at a high cost with high complexity. I’d much rather trust responsibility, good habits and safety nets I set up anyway to protect myself from logical mistakes.

Liberty and responsibility go hand in hand.

Keith Braithwaite gives the example of the Lisp community, which did not patronize its users, and whose members learned how to avoid pitfalls on their own:

What lessons might there be from the history of the Lisp community? Lisp (along with Smalltalk and assembler) is about maximally non-patronizing of its users. I seem to recall reading some Dick Gabriel piece that explained how Lisp programmers ended up self-censoring, deeming certain kinds of template cleverness to be in poor taste—and taking care not to inflict those clevernesses upon their peers. They could do (and write code to do to itself) pretty much anything, but they chose not to do certain things.

Glenn Vanderburg believes that language constraints do not stop developers from making mistakes:

Weak developers will move heaven and earth to do the wrong thing. You can’t limit the damage they do by locking up the sharp tools. They’ll just swing the blunt tools harder.

If you want your team to produce great work and take responsibility for their decisions, give them powerful tools.

Aslam Khan considers that newbies feel the need for protection mechanisms, but they want those removed when they become more experienced:

Many (language designers) make the assumption that people need to be “controlled”. Indeed, newbies do prefer a rule/recipe/formula-based learning culture and languages that impose all sorts of “safety nets” around “dangerous” things serve that group well. Until the newbie is no longer a newbie, and then the constraints become impositions.

Ward Bell points out that, in practice, people are not very responsible:

Seductive case … who could be against responsibility and learning? ... but years of reading actually existing code tell us both that others are not responsible and that we ourselves are often lax. How has Ruby changed our nature?

I wrote commercial applications in APL back in the ‘70s and ‘80s. Very dynamic. Very easy to morph code written by others. You could write new APL as strings and execute them with the thorn operator. Very little interpreter support (there was no compiler) to catch or warn.

What wonderful messes we created. It was cool that we could fix almost as fast as we wrote. But you had to be damn good to read anyone’s code (including your own).

T. Powell considers that a developer can write bad code no matter what language he is using, so it is not really a matter of language but rather of attitude:

You can be lame or great in any language. … Sure, a language influences things and you often see the hand of the designer or community in place, but you can be an idiot in Java or Ruby or Fortran or ….

What is your take: should we have languages without constraints and trust the developer, or should we have safety nets in place?
