Sunday 26 April 2015

Reviewing code




Code reviews. Love them or hate them - they have to be done. They also remain largely untouched by the framework of Agile ceremonies; apart from pair programming and a few other assorted ideas, people have scope for doing them as they like.

I'm one of those people, and having done a few reviews each day for a number of years, I've acquired a set of practices. Naturally, there is no better place for sharing them than this blog!

Usually, when a new review lands at my (virtual) desk, I try answering this question:

How would I have done it?




This is before even looking at how the other guy wrote his/her/its code. 
Why? Because if I can't think of an approach myself, then I'm obviously not qualified to critically review the design which lies before me. Also, every competition needs at least two entrants; with just one, the winner (the code that passes review) may not be fit for purpose.

Of course, in some cases we would have selected a low-level design before coding even began (see post on planning spikes). Then, we are already looking at the right approach.
However, this luxury is not always there; many small items, minor features and featurettes, simply don't justify a separate planning task, and we end up reviewing both design and code at the same time.

The five-minute test

Ok, let's assume I have a solution rotating in slow 3D in my head. Now, it's time to look at the artefact to be reviewed (for the benefit of my spell checker and US colleagues, artifact).

Here we come to the 5-minute test. It's as simple as tests go: if I can't figure out in five minutes what this does and how it does it, then the review travels back with a few well-chosen question marks attached. If circumstances allow, we'll also have an informal design review rather than bouncing the code back and forth.

The reasoning is simple: the task is recent, and I understand what the code should do. In a year, both statements will no longer be true. What's worse, it might not even be my memory that matters, as there are (and will be) other developers on the team, some of them new.
Hence, if I can't quickly figure out what the code does today, then tomorrow I, or indeed someone else, might not figure it out at all.

Of course, there are shades of grey. It does not often happen that the entire approach is incomprehensible; it might be a mother of an "if" condition, or a function with 6 nesting levels, or a convoluted class hierarchy.
Then, I prefer getting those refactored before proceeding with the review, as they may obscure other problems. 
If the code deserves to be complex, then the explanation should make its way into a code comment.
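
To make this concrete, here is a minimal made-up sketch of that kind of refactoring (all the names and the business rule are invented for illustration, not taken from a real review):

    from dataclasses import dataclass
    from typing import Optional

    EU_COUNTRIES = {"DE", "FR", "NL"}    # hypothetical placeholder data

    @dataclass
    class User:                          # hypothetical minimal model
        age: int
        country: str
        banned: bool
        verified: bool
        invited_by: Optional[str] = None

    # Before: the kind of "mother of an if" that fails the five-minute test.
    def can_access(user):
        return user.age >= 18 and user.country in EU_COUNTRIES and not user.banned and (user.verified or user.invited_by is not None)

    # After: the sub-conditions get names, and the one non-obvious rule
    # gets the code comment it deserves.
    def is_eligible(user):
        adult_in_eu = user.age >= 18 and user.country in EU_COUNTRIES
        in_good_standing = not user.banned
        # Unverified users still qualify if somebody invited them
        # (an invented rule, purely for illustration).
        vouched_for = user.verified or user.invited_by is not None
        return adult_in_eu and in_good_standing and vouched_for

Breaking the monster up costs a few extra lines, but the next reviewer no longer has to reverse-engineer the intent.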

Once the implemented design is understood, and understood well, it's compared against the one I have in slow-mo 3D rotation. If, for some (and I hasten to say, objective) reason, the preconceived option is better, we have a discussion.
It might cause a rewrite, but more often than not, I'll be comprehensively shown that the implemented design is better; after all, the other guy spent more time thinking about it.
This is fine: it is a good reason for another couple of comments, and it gives me a better idea of what is being reviewed.

Ok, we've cleared the low-level design barrier; what comes next? Next come
  
Personal angles


Every reviewer has their own pet subjects. Someone hunts for Python loops and swoops in to convert them to list comprehensions. Another determinedly looks for C++ raw pointers in a fashion popularised by guys with metal detectors on beaches. In short, everyone has some form of bias.
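
For the Python case, the classic transformation looks something like this (a toy example, of course):

    # The loop a comprehension-hunter swoops in on:
    numbers = [1, 2, 3, 4, 5, 6]
    even_squares = []
    for n in numbers:
        if n % 2 == 0:
            even_squares.append(n * n)

    # ...and the one-liner they will suggest instead:
    even_squares = [n * n for n in numbers if n % 2 == 0]

Whether that conversion is always an improvement is, of course, precisely the kind of bias I'm talking about.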

Even putting those aside, in most cases, you'd be looking for the latest defect you've found or experienced. It's hard to cast that away, which is why it is worthwhile doing at least a couple of rounds on important reviews.

Usually, I follow this order:

  • Code reuse (or lack thereof)
  • Encapsulation, interfaces and self-explanatory variable names
  • Correctness, i.e. no obvious defects such as inverted conditions, missed basic use cases, etc.
  • Performance and efficiency
  • More esoteric errors, such as off-by-ones, imprecise variable types, signed/unsigned mix-ups, improper variable scope and so on (see the sketch after this list)
  • Auto-tests and code coverage
  • Correct merges to release branches and external dependencies
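
To show what I mean by the esoteric category, here is the classic off-by-one in a toy Python fragment:

    items = ["a", "b", "c"]

    # Off-by-one: range() already excludes the upper bound, so the
    # subtraction silently drops the last element.
    for i in range(len(items) - 1):      # bug: "c" is never processed
        print(items[i])

    # Corrected; better still, iterate directly and skip the index.
    for item in items:
        print(item)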

Not all changes are born equal, and a one-line defect fix won't go through all the trials and tribulations above.

The more important point is that for bigger reviews I tend to stop if serious issues are found at any stage. For example, if a developer duplicated a chunk of code, it won't even be reviewed; I'll just ask the author to adapt or reuse what we already have.
Similarly, if I find basic functional issues, I won't go looking for performance problems, auto-tests or advanced defects. Basic correctness should be verified by auto-tests, not me, so it's back-to-drawing-board time.
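
The duplication case, as a hypothetical Python sketch (the function names and the config format are invented):

    import json

    # Two near-identical loaders, as they might land in a review:
    def load_settings(path="settings.json"):
        with open(path) as f:
            data = json.load(f)
        data.setdefault("retries", 3)
        return data

    def load_overrides(path="overrides.json"):
        with open(path) as f:
            data = json.load(f)
        data.setdefault("retries", 3)
        return data

    # The reviewer's request: one function, adapted for reuse.
    def load_config(path, default_retries=3):
        with open(path) as f:
            data = json.load(f)
        data.setdefault("retries", default_retries)
        return data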

Let's rotate this by 180 degrees, and ask:

What code review is not?

It is not a coding standard check. Every self-respecting language has tools that verify naming conventions, spaces vs. tabs and the like. No need to waste highly paid humans' time on this.

It is not a validity test. Code should be working before it is submitted for review. A computer will tell you much faster than a human can whether something is completely and utterly non-functional.

It is not like reading a book. If you're going over sources top to bottom in alphabetical fashion (or whatever order your code review tool imposes), then you won't find many defects.
You might first hit some internal helper module, then auto-tests, then core functionality, and then wrapper functionality. Tracing the narrative is hard, since the actual order should be: core functionality -> helper modules -> wrappers -> auto-tests.

Moreover, you might have to go over the sources a few times as you understand better what is going on (as covered above).

It is not a management metric. I've seen places where the time spent on code reviews was measured, and if reviewers spent less than X seconds per LOC, they got a slap on the wrist. 
I've also seen places where you could not approve a review unless the tool confirmed that you scrolled over every single line of code.
These rules were probably introduced with the best of intentions, but if you, for valid reasons, don't trust developers to review properly, then I assure you - no metrics will help; the problems lie much deeper. What's more, if engineers are smart enough to earn IT salaries, they will also be smart enough to game the metrics.
The only number I've found useful is the count of defects caught during code review compared to those that escaped to QA and production.
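
If you wanted to express that as a single figure, it is nothing fancier than this sketch (a hypothetical helper of my own; the counts would come from your bug tracker):

    def review_catch_rate(in_review, in_qa, in_production):
        # Share of all known defects that the review stage caught.
        total = in_review + in_qa + in_production
        return in_review / total if total else 0.0

    # e.g. 12 caught in review, 3 by QA, 1 in production -> 0.75
    print(review_catch_rate(12, 3, 1))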

(Ok, there's also this one)

Code review is not a formal meeting. Let's add a weasel statement first - sometimes formal code walkthroughs are ok. 
If you're developing mission-critical software, where a minor failure carries untold consequences, then yes: throw whatever you can at it, including formal, line-by-line code reviews with numerous people present.
For the remaining 99% of us, a formal code walkthrough is one of the most unproductive practices known to man. Everyone goes through the sources in their own order, starts from a different point, and comes with different baggage. Sitting in a room, eyeballing source lines one by one, and wearily commanding the reviewee to move to the next line just does not provide much excitement.
Informal chats and mini-meetings are good and useful though, especially when dealing with complicated solutions to complicated problems. 




Code review is not about transferring responsibility. If you created a defect (which happens to everyone who writes code), then you're still its author, irrespective of how many people reviewed the code. Code inspection is a service, not an ownership transfer.

Wrapping up

There was too much talk and not enough content. Next time, I'd like to compensate for that by showing how it all works in practice on a willing victim (i.e. a chunk of code).


