Recently I spent some time profiling and optimizing a certain program, and tripled its performance. My reaction was predictable. I am awesome, I thought. I made this three times as good as it was before. I'm three times as good a programmer as whoever wrote that slow code. (Even though “whoever wrote that slow code” was me.)
The results of optimization are easy to quantify: not just faster, but three times as fast. I suspect this accounts for some of the lure of optimization. Comparing performance numbers provides an obvious indication of improvement, and exaggerates its value, so even a modest optimization feels like a triumph. Adding a new feature usually has much less emotional reward, especially if it's not technically impressive or immediately useful. Fixing a bug may bring feelings of disgust at the bug, and guilt (if I created it) or contempt (if
someone else did), but it seldom feels like much of an accomplishment. Optimization, though, feels good, because it's measurable and therefore obviously valuable (even when it's not).
I also enjoy cleaning up code, for the same reason: if I make something 50 lines shorter, that tells me how much I've accomplished, so I have something to feel good about. Predictably, I feel better about making code shorter than about merely making it more readable, because readability doesn't come with numerical confirmation.
If this effect is real, it would be valuable to have a similar quantification of bugs and features, to make improvements emotionally rewarding. An automated test framework can provide this by conspicuously reporting the number of tests passed, and perhaps the change from previous versions. (It's probably best to report the number passed, not the number failed, so writing more tests makes the number better rather than worse.) If you've used a test framework with this feature, have you noticed any effect on your motivation to fix bugs?
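To make the idea concrete, here's a minimal sketch of what I mean, not any particular framework's API: a toy runner that counts passes, remembers the previous count in a file, and reports the change. The file name and test names are just illustrative.

```python
import json
import pathlib

HISTORY = pathlib.Path(".pass_count.json")  # hypothetical file holding last run's pass count

def run_tests(tests):
    """Run each test callable, count passes, and report the change since the last run."""
    passed = 0
    for test in tests:
        try:
            test()
            passed += 1
        except AssertionError:
            pass  # failures simply don't add to the score; only passes are counted

    previous = json.loads(HISTORY.read_text())["passed"] if HISTORY.exists() else 0
    HISTORY.write_text(json.dumps({"passed": passed}))

    delta = passed - previous
    print(f"{passed} tests passed ({'+' if delta >= 0 else ''}{delta} since last run)")

# Illustrative tests
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 3 - 1 == 2

if __name__ == "__main__":
    run_tests([test_addition, test_subtraction])
```

Reporting "12 tests passed (+3 since last run)" gives the same kind of numerical pat on the back that a profiler does, which is exactly the point.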