Friday, November 7, 2008

Fight Features!

by Jesper Larsson

I still fervently believe that the only way to make software secure, reliable, and fast is to make it small. Fight Features.
Andrew S. Tanenbaum

“Features”, “power”, and “richness” are generally perceived as good things in computer systems, not as problems. Still, most people who use a computer have some experience of their problematic side. Quoting Wikipedia: “Extra features go beyond the basic function of the product and so can result in baroque over-complication rather than simple, elegant design.”


I don't expect ever to be able to convince a marketing person to use “no added power” or “only the most rudimentary features” as a selling point. But I do sometimes get disappointed when I see the naive attitude that even experienced computing technology people take to new or exotic functionality. I am going to give you some examples from the field that all programmers have opinions about – programming languages – before finishing off with a little bit about the issue in other areas, such as database systems. Since most of the text is directed at programmers, it may be a little more technical than the previous posts on this blog.

Cool?

It was at a developers conference a few years ago that I had my major wake-up call as to how uncommon my views seem to be among programmers. There were a number of presentations about new features in various languages and systems, and I was struck by how uncritically people received them. I particularly remember a reaction from the audience to a presentation of a new object language or framework of some kind, where apparently a core feature was that you were free to modify objects dynamically in any way, for instance by adding methods.

Audience member: Can you remove a method?
Presenter: Yes.
Audience member: Cool!

I was flabbergasted. What on earth would be the need to remove a method? To me it seemed like an extremely uncool feature!

The standard reaction to this is “So? If you don't like it you don't have to use it.” This always makes me want to jump up and down screaming in furious frustration. I am not sure if it is just because it is an incredibly stupid comment, or because it is something I would have been likely to say myself when I was a young and inexperienced programmer. I wish I could go back twenty years and tell myself off about it, but I wonder if I could have convinced myself – probably not, I was a pretty obstinate young man on these issues.

Anyway, if I had gotten the chance to persuade myself, there are four main arguments I would have used, all based on experience I did not have back then.

1. Blurring the focus of the system and its creator

This is rather obvious really, but it is easy to underestimate the extent to which limited time and resources keep you from doing everything you can think of. If the developers of a system have to write and test code for fifty features that someone might find useful, there is going to be considerably less time to spend on things that are crucial for almost everyone.

In the programming language context, the time spent making sure that the code for the extra feature works properly could have been spent thinking about code generation for some common language construct – for instance, optimizing to get tight loops to run really fast.

You may say that this is a problem for the developer who creates the system, not for you, who are just using it. But that is not true: it is a problem for you if you cannot get tight loops to run fast. It only becomes a problem for the system vendor if it is such a big problem for you that you stop using the system.

2. Blurring the focus of the user

I think my first realization that fewer possibilities increased my efficiency came when I wrote a rather large automatic proof program in Lisp without using any looping constructs – only recursion for repetition. This severely limited my ways of expressing what the program should do. The result: I no longer had to think about how to express it! I could concentrate fully on figuring out how to solve the real problems!
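The flavour of it, sketched here in C++ rather than Lisp: repetition is expressed with a base case and a recursive call instead of a loop construct.

    #include <cstddef>
    #include <vector>

    // Summing a vector with recursion as the only means of repetition:
    // the base case ends the "loop", the recursive call is the next step.
    int sum_from(const std::vector<int>& v, std::size_t i) {
        if (i == v.size()) return 0;
        return v[i] + sum_from(v, i + 1);
    }

    int sum(const std::vector<int>& v) {
        return sum_from(v, 0);
    }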

I had a similar feeling moving from C++ to Java. C++ is a mess of language constructs taken from at least three different programming paradigms. You can write C++ code in pretty much whatever way you like, and specify everything in detail. This makes me change the code back and forth, unable to decide. “This method should probably be declared virtual. No, wait, I can probably make it static – that will save a few machine instructions. No, sorry, it has to be virtual. But if I change this call I can make it static.”
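To make the back-and-forth concrete, here is a small sketch (the class names are invented for illustration, not taken from any real code). The same method can be declared virtual, so that a subclass really overrides it at the cost of a dispatch through the vtable, or bound statically, which is cheaper to call but silently ignores the override when the object is used through a base-class reference:

    #include <iostream>

    class Shape {
    public:
        // Option 1: virtual, so Circle::area below really overrides it,
        // at the cost of a vtable dispatch on every call.
        virtual double area() const { return 0.0; }

        // Option 2: non-virtual, bound at compile time and cheaper to
        // call -- but then Circle::area merely hides it, and calls
        // through a Shape reference ignore the override.
        // double area() const { return 0.0; }

        virtual ~Shape() {}
    };

    class Circle : public Shape {
    public:
        explicit Circle(double r) : radius(r) {}
        virtual double area() const { return 3.14159265 * radius * radius; }
    private:
        double radius;
    };

    int main() {
        Circle c(2.0);
        const Shape& s = c;
        // Prints the circle's area with option 1; would print 0 with option 2.
        std::cout << s.area() << std::endl;
        return 0;
    }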

Java is far from a perfect object-oriented language, but at least it is much stricter than C++. If you write Java without thinking in objects, your code tends to kick and squeal until you get it into a reasonably attractive structure.

Working on Ericsson's WAP gateway, I was forced back into C++, but this time a strict set of coding rules was imposed on me. The rules said things like “All classes which are used as base classes and which have virtual functions, must define a virtual destructor.” Again, this narrowed down the choices, and made it much easier to create reasonably well-structured code. Of course, it would have been nice if the compiler had assisted in enforcing those rules.
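To show what that particular rule buys, here is a minimal sketch (the class names are invented, not taken from the gateway code):

    // A base class with virtual functions gets a virtual destructor, so
    // that deleting through a base pointer runs the derived destructor too.
    class Connection {
    public:
        virtual void send(const char* data) = 0;
        virtual ~Connection() {}            // required by the coding rule
    };

    class BufferedConnection : public Connection {
    public:
        BufferedConnection() : buffer(new char[1024]) {}
        virtual void send(const char* data) { /* copy into buffer, flush, ... */ }
        virtual ~BufferedConnection() { delete[] buffer; }
    private:
        char* buffer;
    };

    int main() {
        Connection* c = new BufferedConnection();
        c->send("hello");
        delete c;   // ~BufferedConnection runs and the buffer is freed.
                    // With a non-virtual ~Connection, the derived destructor
                    // would be skipped and the buffer leaked.
        return 0;
    }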


“Syntactic sugar” is what people call constructs that let them say the same thing slightly more concisely. Saving you from typing the same thing over and over is obviously nice, but what you gain in typing you may lose to the distraction of having to think about which syntax to use. Syntactic sugar may have a smooth taste, but too much of it erodes your programming teeth.
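A trivial C++ illustration: each of the statements below adds one to the same array element. They are interchangeable spellings of one operation, and the only real decision they offer is which spelling to pause over.

    // Three interchangeable ways to add one to element i.
    // (Running all three in sequence adds three, of course -- in real
    // code you would pick one and move on.)
    void add_one_three_ways(int* a, int i) {
        a[i] = a[i] + 1;   // spelled out in full
        a[i] += 1;         // compound assignment
        ++a[i];            // pre-increment
    }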

3. Others will use it

If you are a young student or hobbyist creating one little program after another for fun or examination, you may think that software development is just about writing code. But once you work in a reasonably large project, or come in contact with running code that needs to be maintained, you realize, first, that you are not in complete control of how the code is written and, second, that much of the work is reading code.

Hence, not using a construct when you write your own code does not save you from being bothered by it. In order to read what others have written, you need to understand what everything means. The more features a language contains, the more difficult it is to learn to read. The more different ways of doing the same thing it allows, the more different other people's code looks from yours – again making it difficult for you to decipher.

One way of warding off criticism of potentially harmful features is to say that they are no problem when used correctly, with discipline. True as that may be, not everyone who writes the code that you come in contact with has exactly the same discipline, values, and experience as you do. Losing the possibility of carefully using a feature yourself is often a small price to pay to prevent others from using it hazardously.

4. Optimization

I already mentioned that adding features may make optimization suffer because developers have less time, but there is one more issue concerning efficiency: the possibility of some features being used may keep the compiler from making assumptions that would allow it to generate considerably more efficient code.

An old example from the C language is the option to modify all sorts of variables through pointers. A neat trick in some situations perhaps, but it makes it difficult for the compiler to know whether it can keep values in CPU registers or not. Even if the program does not contain any such side effects, that may not be known at compile time. Hence, to make sure that the program runs correctly, values have to be repeatedly written out to main memory and read back, just because of the existence of a rarely used feature of the language.
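A small sketch of the classic pattern (hypothetical code): because a store through data[i] might change *factor – the two pointers could refer to overlapping memory – a conforming compiler has to reload *factor from memory on every iteration instead of keeping it in a register. Even if no caller ever passes overlapping pointers, the compiler cannot know that at compile time (C99 later added the restrict keyword precisely so that the programmer can promise it).

    // The compiler must assume data and factor may alias, so *factor is
    // reloaded from memory on every iteration of the loop.
    void scale(int* data, int n, const int* factor) {
        for (int i = 0; i < n; ++i) {
            data[i] *= *factor;
        }
    }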

Another example is when the assembling of program components is highly dynamic, such as with dynamic class loading in Java. As I mentioned in a previous post, my colleagues and I have been quite impressed with the optimization capabilities of Sun's HotSpot virtual machine. It has allowed us to produce high-performance systems completely written in Java. But there are some situations where optimization is less effective. In particular, inlining does not happen as often as we would like it to. In part, this is due to the general difficulty of figuring out which method really gets invoked in languages that support polymorphism. But it is made worse by dynamic class loading, because the system can never be sure whether execution is going to involve code that has not been seen yet.
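The heart of the difficulty can be sketched in a few lines; here it is in C++ terms (the class names are made up, and Java's dynamic class loading adds to the problem, since not even the set of existing subclasses can be known in advance):

    struct Codec {
        virtual int decode(int x) const = 0;
        virtual ~Codec() {}
    };

    // The call c.decode(...) is polymorphic: any subclass of Codec -- in
    // Java, even one that has not been loaded yet -- might be behind the
    // reference, so the optimizer cannot simply inline one particular
    // implementation into this hot loop.
    int decode_sum(const Codec& c, const int* data, int n) {
        int sum = 0;
        for (int i = 0; i < n; ++i)
            sum += c.decode(data[i]);
        return sum;
    }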

Polymorphic method invocation and dynamic class loading are wonderful features. I would not want to lose either of them. But they do have their drawbacks for efficiency. They make the optimizer's task more difficult, even in situations where they are not used.

Not Just About Programming

I am obviously not the first to have noticed the feature creep problem, and it is certainly not limited to programming languages. It is everywhere in computing. The opening quote from Tanenbaum is about operating systems, and the problem is huge in database systems.

For example, the more recent versions of the SQL standard are virtually impossible to implement in full. It is well known that this “richness of SQL”, as Surajit Chaudhuri diplomatically (or perhaps sarcastically) put it in his recent Claremont presentation, is a major obstacle to efficiency in database systems.

It is even an issue for data models themselves. But in that area, there is hope of some progress in the right direction – towards simplicity. More on that in future posts.