Well, the elections are over. The epic battle for the White House was like a slow-moving car crash - how could you NOT look?
But it is now time to get back to the code. Time to crank out those functions, write those new classes, debug those files of twisted goto-ridden lines.
Recently at work we were having a discussion about what kinds of new systems we might need to build. Some people advocated making do with existing systems, just tweaking them more. Others argued that this might not be enough - if we'll end up coding new persistent daemons anyway, we might as well get a start now.
Both arguments are pretty convincing. The first is summed up by "don't over-engineer; only build what you need." The best bug you can have is the one you never write. The second would be "if you can easily predict what you'll need, start early so you can build experience with it." If you know you'll need a database, why not set it up early and build expertise in-house?
The first argument seems like a perfect counterbalance to the second. Framed as more code vs. less code, less code would seem to have to win. But take a longer-term view and you can easily conclude that deferring leaves you with more code in the long run - sometimes the best time to make changes is while things are still flexible. Less cruft to work around later, less data to migrate. Build expertise while volumes are still relatively low.
These two positions seem irreconcilable. Stubborn developers in the #1 camp can drag out the debate and ultimately "win". But there is a third way.
What if a new system wasn't 50,000 lines of Java code? What if a new persistent server process was 500 lines of code? What if a complicated transaction-logged persistence engine was in reality 2,000 lines of code? A good developer can easily make radical changes to 2,000 lines of code in a weekend.
What if you could build the new system in an afternoon? Then there is practically no reason not to toss a system out - after all, afternoons have been wasted on even more trivial things: OS problems, expense reports, the DMV. The sooner you can build and throw out code, the more you'll learn about the nature of your problem.
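To make "small system" concrete, here is a hypothetical sketch in Python - a toy in-memory key-value daemon with a made-up one-command-per-line protocol, not any of the systems from the discussion above - but it is a complete, running persistent server process, and it fits on one screen:

    # A tiny persistent key-value server. The whole "system" is this file.
    # (Hypothetical sketch - the protocol and names are made up for illustration.)
    import socketserver

    store = {}  # in-memory state; lives as long as the daemon does

    class KVHandler(socketserver.StreamRequestHandler):
        def handle(self):
            for raw in self.rfile:  # one command per line
                parts = raw.decode().strip().split(" ", 2)
                if parts[0] == "SET" and len(parts) == 3:
                    store[parts[1]] = parts[2]
                    self.wfile.write(b"OK\n")
                elif parts[0] == "GET" and len(parts) == 2:
                    self.wfile.write((store.get(parts[1], "") + "\n").encode())
                else:
                    self.wfile.write(b"ERR\n")

    if __name__ == "__main__":
        # ThreadingTCPServer gives us one thread per connection for free.
        with socketserver.ThreadingTCPServer(("", 9999), KVHandler) as srv:
            srv.serve_forever()

Durability and locking are hand-waved here, but that's the point: at this size, rewriting the whole thing costs less than arguing about it.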
But how do you build and throw out code quickly? The ultimate limit is how fast a human being can type - say 60 words per minute. If we assume that one line of code is about 4 words, that's 15 lines per minute, or roughly 900 lines per hour. And practically speaking, most programmers would be lucky to get a solid 4 hours of coding in a day, so an afternoon tops out around 3,600 lines.
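The napkin math, spelled out (same assumptions as above, nothing measured):

    words_per_minute = 60      # fast typist
    words_per_line = 4         # rough average for a line of code
    lines_per_minute = words_per_minute / words_per_line  # 15
    lines_per_hour = lines_per_minute * 60                # 900
    solid_hours = 4            # realistic focused coding per day
    print(lines_per_hour * solid_hours)                   # 3600 - the ceiling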
So if you can realistically only get a few thousand lines of code out of a programmer in a day, you'd better make sure those lines really count. By raising the level of abstraction, coders write fewer words yet get more done. This is a fundamental argument that is difficult to dispute - no one would deny that programmers are more productive in C than in assembly, or in Python than in C. The power generally comes from what you do not have to do as you code: memory management, structure definitions, fundamental libraries, loops and so on. All of these raise the level of abstraction, and thus the level of programmer productivity - you simply go farther with those few thousand lines.
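As a toy illustration of that point (the file name is hypothetical, not related to anything above): counting word frequencies is a few lines of Python, while the C equivalent means hand-rolling a hash table and string handling before you even get to the problem.

    # Word-frequency count in a handful of lines. The hash table, memory
    # management, and string splitting all come for free from the language.
    from collections import Counter

    def word_counts(path):
        with open(path) as f:
            return Counter(f.read().split())

    # e.g. word_counts("server.log").most_common(10)  # hypothetical file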
This is a popular argument among a certain minority. I think one of the better versions of it comes from Paul Graham. Nearly every language jockey takes this position - some languages are more powerful than others, so why would you use the lesser of two powers?
But in the end, programmers are human, and humans are creatures of habit. So we only adopt a new mainstream language once every 7-10 years: C in the 80s, C++ in the early 90s, Java from about 1997 to 2005, and now... what?
Wednesday, November 5, 2008