Friday, November 7, 2008

Fight Features!

by Jesper Larsson

I still fervently believe that the only way to make software secure, reliable, and fast is to make it small. Fight Features.
Andrew S. Tanenbaum

“Features”, “power”, and “richness” are generally perceived as good things in computer systems, not as problems. Still, most people who use a computer have some experience of their problematic side. Quoting Wikipedia: “Extra features go beyond the basic function of the product and so can result in baroque over-complication rather than simple, elegant design.”

Photo by tanakawho (some rights reserved).

I don't expect ever to be able to convince a marketing person to use “no added power” or “only the most rudimentary features” as a selling point. But I do sometimes get disappointed when I see the naive attitude that even experienced computing technology people take toward new or exotic functionality. I am going to give you some examples from the field that all programmers have opinions about – programming languages – before finishing off with a little about the issue in other areas, such as database systems. Since most of the text is directed at programmers, it may be a little more technical than the previous posts on this blog.

Cool?

It was at a developers' conference a few years ago that I had my major wake-up call as to how uncommon my views seem to be among programmers. There were a number of presentations about new features in various languages and systems, and I was struck by how uncritically people received them. I particularly remember a reaction from the audience during a presentation of a new object language or framework of some kind, where apparently a core feature was that you were free to modify objects dynamically in any way, for instance by adding methods.

Audience member: Can you remove a method?
Presenter: Yes.
Audience member: Cool!

I was flabbergasted. What on earth would be the need to remove a method? To me it seemed like an extremely uncool feature!

The standard reaction to this is “So? If you don't like it you don't have to use it.” This always makes me want to jump up and down screaming in furious frustration. I am not sure if it is just because it is an incredibly stupid comment, or because it is something I would have been likely to say myself when I was a young and inexperienced programmer. I wish I could go back twenty years and tell myself off about it, but I wonder if I could have convinced myself – probably not, I was a pretty obstinate young man on these issues.

Anyway, if I did get a chance to persuade myself, there are four main arguments I would have used. All based on experience I did not have back then.

1. Blurring the focus of the system and its creator

This is rather obvious really, but it is easy to underestimate the extent to which limited time and resources keep you from doing everything you can think of. If the developers of a system have to write and test code for fifty features that someone might find useful, there is going to be considerably less time to spend on things that are crucial for almost everyone.

In the programming language context, the time spent making sure that the extra feature code worked properly could have been spent thinking about code generation for some common language construct – for instance, optimizing to get tight loops to run really fast.

You may say that this is a problem for the developers who create the system, not for you who just use it. But that is not true: it is a problem for you if you cannot get tight loops to run fast. It becomes a problem for the system vendor only if it is such a big problem for you that you stop using the system.

2. Blurring the focus of the user

I think my first realization that fewer possibilities increased my efficiency came when I wrote a rather large automatic proof program in Lisp without using any looping constructs – only recursion for repetition. This severely limited my ways of expressing what the program should do. The result: I no longer had to think about how to express it! I could concentrate fully on figuring out how to solve the real problems!
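To give a flavor of that discipline (a minimal sketch in C++ rather than Lisp, with names invented for illustration): when recursion is the only means of repetition, every loop collapses into a base case and a recursive case, and there is nothing else to decide.

    // Sum the first n elements of an array, using recursion as the
    // only means of repetition -- no for, no while.
    int sum(const int* a, int n) {
        if (n == 0) {
            return 0;                     // base case: nothing to add
        }
        return a[n - 1] + sum(a, n - 1);  // recursive case
    }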

I had a similar feeling moving from C++ to Java. C++ is a mess of language constructs taken from at least three different programming paradigms. You can write C++ code in pretty much whatever way you like, and specify everything in detail. This makes me change the code back and forth, unable to decide: “This method should probably be declared virtual. No, wait, I can probably make it static – that will save a few machine instructions. No, sorry, it has to be virtual. But if I change this call I can make it static.”
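A hedged sketch of the kind of back-and-forth I mean (the class is invented for illustration):

    class Renderer {
    public:
        // Option 1: virtual -- subclasses may override, but every
        // call is dispatched through the vtable at run time.
        virtual void draw() {}

        // Option 2: static -- resolved at compile time, saving a few
        // machine instructions, but no longer overridable.
        static void drawFast() {}

        virtual ~Renderer() {}
    };

C++ happily lets you flip between the two, so the decision never stops nagging.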

Java is far from a perfect object-oriented language, but at least it is much stricter than C++. If you write Java without thinking in objects, your code tends to kick and squeal until you get it into a reasonably attractive structure.

Working on Ericsson's WAP gateway I was forced back into C++, but then a strict set of coding rules was imposed on me. The rules said things like “All classes which are used as base classes and which have virtual functions must define a virtual destructor.” Again, this narrowed down the choices and made it much easier to create reasonably well-structured code. Of course, it would have been nice if the compiler had assisted in enforcing those rules.
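As a sketch of why that particular rule matters (my own example, not one from the actual rule set): deleting a derived object through a base-class pointer is only well-defined if the base destructor is virtual.

    #include <cstdio>

    class Base {
    public:
        virtual void work() {}
        virtual ~Base() {}   // the rule: a base class with virtual
                             // functions must define a virtual destructor
    };

    class Derived : public Base {
    public:
        ~Derived() { std::puts("cleaning up Derived"); }
    };

    int main() {
        Base* p = new Derived();
        delete p;   // with the virtual destructor, Derived's cleanup
                    // runs; without it, this would be undefined
                    // behavior and the cleanup could silently be skipped
        return 0;
    }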

Photo by Ramchandran Maharajapuram (some rights reserved).

“Syntactic sugar” is what people call constructs that let them say the same thing slightly more concisely. Saving yourself from typing the same thing over and over is obviously nice, but what you gain in typing you may lose to the distraction of having to think about which syntax to use. Syntactic sugar may have a smooth taste, but too much of it erodes your programming teeth.
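Even something as small as adding one to a variable shows the effect – C++ alone gives you at least three spellings of the same thing:

    int x = 0;
    x = x + 1;  // plain assignment
    x += 1;     // compound assignment: the same, slightly shorter
    ++x;        // increment operator: the same again

Each extra spelling is one more tiny decision that has nothing to do with the problem you are actually solving.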

3. Others will use it

If you are a young student or hobbyist creating one little program after another for fun or for examination, you may think that software development is just about writing code. But once you work in a reasonably large project, or come in contact with running code that needs to be maintained, you realize, first, that you are not in complete control of how the code is written and, second, that much of the work is reading code.

Hence, not using a construct in your own code does not save you from being bothered by it. In order to read what others have written, you need to understand what everything means. The more features a language contains, the more difficult it is to learn to read. And the more different ways of doing the same thing it allows, the more other people's code will differ from yours – again making it harder for you to decipher.

One way of warding off criticism against potentially harmful features is to say that they are no problem when used correctly, with discipline. True as that may be, not everyone who writes the code that you come in contact with has exactly the same discipline, values, and experience as you do. Losing the possibility of carefully using a feature yourself is often a small price to pay to prevent others from using it hazardously.

4. Optimization

I already mentioned that adding features may make optimization suffer because developers have less time, but there is one more issue concerning efficiency: the mere possibility of some features being used may keep the compiler from making assumptions that would allow it to generate considerably more efficient code.

An old example from the C language is the option to modify all sorts of variables through pointers. A neat trick in some situations perhaps, but it makes it difficult for the compiler to know whether it can keep values in CPU registers or not. Even if the program does not contain any such side effects, that may not be known at compile time. Hence, to make sure that the program runs correctly, values have to be repeatedly written to main memory and read back, just because of the existence of a rarely used feature of the language.
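A minimal sketch of the effect (the function is invented for illustration):

    // The compiler must assume that sum may point into a. The store
    // to *sum could then change an element of a, so a[i] cannot be
    // cached in a register across iterations, and the running total
    // must be written back to memory every time around the loop.
    void accumulate(int* a, int n, int* sum) {
        *sum = 0;
        for (int i = 0; i < n; i++) {
            *sum += a[i];
        }
    }

If the compiler could be promised that no such aliasing occurs, the total could live in a register and be stored once, after the loop.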

Another example is when the assembling of program components is highly dynamic, as with dynamic class loading in Java. As I mentioned in a previous post, my colleagues and I have been quite impressed with the optimization capabilities of Sun's HotSpot virtual machine. It has allowed us to produce high-performance systems completely written in Java. But there are some situations where optimization is less effective. In particular, inlining does not happen as often as we would like it to. In part, this is due to the general difficulty of figuring out which method really gets invoked in languages that support polymorphism. But it is made worse by dynamic class loading, because the system can never be sure that execution will not involve code that has not been seen yet.
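The polymorphism half of the difficulty is easy to show in C++ terms as well (invented names, a sketch rather than anything from our systems):

    struct Codec {
        virtual int decode(int token) = 0;
        virtual ~Codec() {}
    };

    int run(Codec& c, int token) {
        // All the compiler sees here is "some Codec". Which decode()
        // actually runs is decided at run time, so the call cannot
        // simply be inlined -- unless the optimizer can prove that
        // only one implementation ever reaches this point.
        return c.decode(token);
    }

What HotSpot does, roughly, is to make that proof optimistically at run time – and dynamic class loading is precisely what can invalidate the proof later.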

Polymorphic method invocation and dynamic class loading are wonderful features. I would not want to lose either of them. But they do have their drawbacks for efficiency. They make the optimizer's task more difficult, even in situations where they are not used.

Not Just About Programming

I am obviously not the first to have noticed the feature creep problem, and it is certainly not limited to programming languages. It is everywhere in computing. The opening quote from Tanenbaum is about operating systems, and the problem is huge in database systems.

For example, the more recent versions of the SQL standard are virtually impossible to implement in full. It is well known that this “richness of SQL”, as Surajit Chaudhuri diplomatically (or perhaps sarcastically) put it in his recent Claremont presentation, is a major obstacle to efficiency in database systems.

It is even an issue for data models themselves. But in that area, there is hope of some progress in the right direction – towards simplicity. More on that in future posts.

6 comments:

Anders Janmyr said...

Hi Jesper,

I agree with you that feature creep is one of the main problems of our industry.

It has been said well by the 37signals people in their book, Getting Real.

http://gettingreal.37signals.com/ch05_Half_Not_Half_Assed.php

http://gettingreal.37signals.com/ch05_Hidden_Costs.php


On the issue of programming languages I agree partly. If I have a flexible language like Lisp or Smalltalk, I don't need a lot of features built into the language, since the ability to add what I need is itself built into the language.

If I, on the other hand, am stuck with a less flexible language like C#, I am happy when they add features that should have been there from the start, like closures and type inference.

My point is that the language should be designed to be grown.
Guy Steele, the creator of Scheme, says it better than I.

PDF
http://www.cs.virginia.edu/~evans/cs655/readings/steele.pdf

Video
http://www.google.com/search?q=growing+a+language

Anonymous said...

I totally agree with you Jesper. When it comes to programming languages and features, I've caught myself thinking "Hmmm, I must be getting old! This can't be any good..." more often these days.

I see one hype after another hit the scene, and I must say that in most cases I'm really surprised no one asks simple questions like "Why is this better?" or "But... we could accomplish that with the well-known language/feature X, why must we do it with language Y, which by the way is still at pre-release stage...?". I feel the whole industry jumps on the feature wagon at such a pace that the quality of the work produced is at best "a little buggy". My opinion is that you should not build production systems with a technology that you're just learning – use what you can!

Jesper Larsson said...

Thanks for the pointer to the Guy Steele talk, Anders. It contains many wise observations and points for language designers to keep in mind, and I think I agree with his main point, which I understand to be: programming is to a large extent growing the language, and that growth should be made easy for the ordinary programmer.

I am not sure about all his conclusions, however, and I note that his only argument against, as he puts it, "a large language" is that it is difficult to learn for the user. That is part of my points 2 and 3 above, but there are other aspects that he does not deal with. Perhaps that is why he comes to the conclusion that it is a good idea to expand the Java programming language in ways that I do not sympathize with.

I have one more issue with expanding (in the sense "monolithic growing") a perfectly working language, such as Java in this case: it makes my old code obsolete. If I embrace, say, the new for syntax for Iterable, it makes me uncomfortable with all my old and running code that was written without it, and makes me want to go back and change it. I do not like this feeling at all, so anticipating that changes like this may occur in a language makes me reluctant to use that language.

In addition, the generic type additions to Java (for instance) clearly do not fit into the model of the language, which creates the need for ridiculous compromises such as "erasure".

Jesper Larsson said...

gk: Thanks for your support. I agree with the bugginess aspect of being too eager to adopt new features.

Anders Janmyr said...

Jesper: The "Growing a Language" speech was given in 1998, before Java got bloated. If I remember correctly, all Steele wanted to add was operator overloading and generics. The generics that were finally added to the language were a mistake, since backwards compatibility was a requirement. Generics in C# are done right, and you don't get the problems you have in Java. Guy Steele cannot be blamed for Java, since it is not his language. If you want to see some of his recent work, take a look at Fortress.

Jesper Larsson said...

Ok, I will not blame Steele for Java 1.5. However, operator overloading is a feature I have serious doubts about – "programming with puns", as Scott Guthery put it. (Not that I agree with everything in that article.)