Broken windows applied to software

I was reading Malcolm Gladwell's The Tipping Point (I know, as a resident geek I should've read it a long time ago), and he was explaining the theory of broken windows, which Wikipedia defines as:

"Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it's unoccupied, perhaps become squatters or light fires inside."

There has been a lot of criticism of this idea, and it has never been conclusively proven. But it is intriguing - it suggests that it's not always best to solve the biggest problem first. So I started to think about software and bugs.

In a complex project, you always get bugs. You get simple errors (like typos in the user interface) and large, complicated errors (like concurrency issues). Sometimes they are trivial (they don't affect the program's execution or purpose in any way), and sometimes they are critical (they make the software useless for its intended purpose).

In your average corporate project, you typically fix bugs starting from the showstoppers and working your way down in criticality. This often means that you are left with a number of simple and trivial bugs that you just don't have time to fix before shipping. Since these simple bugs also tend to be the most common, your error counts don't necessarily even go down, but increase slowly over time.

With open source projects, people often like to "pick the low-hanging fruit", that is, fix the simple issues. It makes them feel useful, and it gives them a bit of fame. For JSPWiki, we get a lot of fixes for the really simple things from the same people who found the issue in the first place - they're not complicated patches, but they scratch their particular itch.

So I'm wondering: could it be that, even in software, having lots of bugs breeds more bugs? If the codebase is already buggy, developers become more relaxed about maintaining quality, and start to think they can get away with something that just sorta works. And if the project management adopted a zero-tolerance policy towards ANY kind of bug, it might actually increase incoming code quality. This means that instead of allocating people to work on the top-level issues, everybody would be encouraged to squash the simple bugs first to keep the total error count as low as possible, because in the time it takes you to fix a really complicated thing, you can fix ten small ones. It would make people care more about quality, and hopefully, over time, make the project better. Some of this thinking is visible in Test Driven Development, as well as in most of the other Agile methods, but I don't know if anyone has really done any studies on this.

Apparently there's a book called The Pragmatic Programmer which touches on the same subjects. Anybody know if it's any good?




Comments

Yes, it's a good book. Not perfect, but good. I recommend at least reading it once.

--Pare, 20-Jan-2008


Recommendation seconded.

Kernighan & Pike's "The Practice of Programming" is more erm... practical, but includes lots of good theory in between the examples. And it's written with a lesser amount of arbitrary provocation mixed into the dough.

--lavonardo, 20-Jan-2008


"Main_blogentry_190108_2" last changed on 19-Jan-2008 14:42:42 EET by JanneJalkanen.