Frank Allan Harrison
What makes a good code review setup?
I miss having a good code-review system in place. Here are the things I loved about the code-review setup I had at Crytek UK, and why it was so powerful.
A really good code-review system needs a few features to make it truly useful - features that help develop a good, coherent team, and not just improve "code quality":
- Reviewees (the code writers) need to be able to:
  - easily create a code review (or several).
  - asynchronously continue work uninterrupted without having to wait on the code review - not easy (editor's note in 2025: this was written in 2012) unless you're using git/hg/Perforce.
- Reviewers (the code readers) need to be able to:
  - leisurely, asynchronously, review code at their own desk with all of their own tools.
  - get patches so that they can try the code themselves (should they wish, and the patch makes sense).
- Communication channels and recording are best if they:
  - allow deep discussion of the code and the principles ("why did you choose Dijkstra's algorithm instead of A* here?"), as well as high-level cursory comments ("you have a typo!").
  - allow cursory, throwaway discussion of the code.
  - support asynchronous communication during the review process.
  - keep a historic, linkable record of decisions made, etc.
- The ability to have one or more reviewers.
- The ability to subscribe to 'owned' areas of code.
- A code-review culture.
Create and continue
Easy-create and continue are important for a coder wishing to commit some atomic patch and then be able to start another feature, or continue working on one, asynchronously.
Easy-create means that you don't need to hard-context-switch or disrupt your current train of thought - especially important when you're in the zone on "complex feature" - whilst still being able to get small, manageable commits signed off. At Crytek (where we used Perforce) we had very nice custom tooling whereby you could simply right-click a change-list and submit it for code review.
Continue is perhaps the most complicated requirement of the whole process, and done wrong it can lead to code divergence and conflicts, especially if the reviewee has continued to work on a block of code which is found to have "bugs". Syntax "bugs" are less of an issue here, but if the reviewer finds issue with the higher-level, algorithmic aspects of the code, then that means a large rewrite; and if the reviewee has done more work in the same area, it can mean a lot of wasted time and probably some difficult conflict-merging - although in that situation the reviewee probably should have been communicating with the reviewer earlier and more often. The tools to allow this are built into git and Mercurial (hg). In modern Perforce clients there is now the "shelve changes" option, which makes putting commit patches to one side for later use much easier - and, IMO, makes Perforce a viable contender as a serious modern version-control system. I'm yet to see anything like stashing or shelving in SVN or CVS unless you use external tools like SmartSVN.
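To make "create and continue" concrete, here's a minimal sketch of how it can look with plain git (this is not the Crytek/Perforce tooling described above; the branch names and the helper are purely illustrative):

```python
#!/usr/bin/env python3
"""Minimal sketch of 'create and continue' with git.

Assumes the atomic change is already committed locally; the branch names
('review/feature-a', 'feature-b') are hypothetical examples.
"""
import subprocess


def git(*args):
    # Thin wrapper so each git invocation is explicit and checked.
    subprocess.run(["git", *args], check=True)


# 1. Park the change that is going out for review on its own branch.
git("branch", "review/feature-a")

# 2. Publish that branch (or export it as a patch) so reviewers can
#    pick it up at their leisure, in their own context.
git("push", "origin", "review/feature-a")

# 3. Immediately carry on with the next piece of work, branched from the
#    same point, without waiting for sign-off.
git("checkout", "-b", "feature-b")

# If the review demands an algorithmic rewrite, the rework happens on
# review/feature-a and feature-b gets rebased onto it afterwards.
```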
Async review, with context
Leisurely review is perhaps the most important. During "over-the-shoulder" code reviews, where the reviewee guides the reviewer at their desk, I've found that much gets missed, mainly because the onus is not upon the reviewer to understand the code but upon the reviewee to explain it cohesively. This skews the bias towards the reviewee's mindset: the intent overrides the actual implementation (which is often different), and in my opinion it is no better than rubber-duck debugging and is filled with flaws, as the reviewer will often miss a very important point or bug because of that inherent bias. The onus should be upon the person doing the review to make sure they fully understand what the code does. Yes, this can mean that the code review takes longer, but it often means the produced code is of much higher quality and, mid-term, less time is spent in that code area.
For the reviewer to fully understand the code they should be able to examine the surrounding, existing code at their own desk/computer, in their own time, with their own tools. This is why review in context is incredibly important. For example: if I want to understand your code fully, working within my own code-browsing setup - in my context, using the tools I work fastest with - I can get you to success faster. But if I'm sat at your desk with your setup, your keyboard, your vim or your Visual Studio, without my keybindings, shortcuts and scripts, then I'm going to be slower, and I will feel implicit pressure from you to rush. If I can do it at my desk, in my context, oh boy, I'm quick and diligent.
Downloadable patches are perhaps the least common use-case, but like review in context they are useful when you're doing cross-platform development, or when the reviewer is concerned that a particular slowness or bug will be introduced by the code - or they want to make sure that the code is compiled and tested properly. The best devs I have worked with do this regularly. Perforce excels at this with the new shelved, sharable change-lists feature, but obviously git is king.
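As a rough sketch of the patch side of this with git (branch, directory and file names are hypothetical; a Perforce user would reach for a shelved change-list instead):

```python
#!/usr/bin/env python3
"""Sketch: sharing a change as patch files so a reviewer can build and
test it locally. Branch, directory and file names are hypothetical."""
import subprocess


def git(*args):
    subprocess.run(["git", *args], check=True)


# Reviewee: export the commits under review as patch files.
git("format-patch", "origin/main..review/feature-a", "-o", "patches/")

# Reviewer, on their own machine (possibly a different platform):
# apply the patches to a throwaway branch, then build and run the tests.
git("checkout", "-b", "try/feature-a", "origin/main")
git("am", "patches/0001-some-change.patch")
```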
Decision records and code archaeology
If leisurely review is the most important, then communication and communication records are probably the second most. Communication simply allows the dissemination of ideas between stakeholders. Communication records are incredibly useful as they capture intent, and they are much better than comments, because comments drift: a comment is a transient, semi-permanent record of the design approach and decisions made with respect to a particular snapshot of the code - that is, the code will likely change but the comments will stay the same. The record of decisions made around code at the time it was submitted can really help you understand the source of those types of bugs that require code archaeology - and can really help get to a bug-fix faster, especially when the team is moving at super-high velocity, or trying to hit deadlines.
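A small sketch of what that archaeology can look like with git, assuming each commit message carries a link back to its review (the file path and line range here are invented):

```python
#!/usr/bin/env python3
"""Sketch: from a suspicious line of code back to the recorded decision.

Assumes commit messages link to their review; the path and line range
are invented for illustration."""
import subprocess


def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout


# Which commit last touched the suspicious lines?
blame = git("blame", "--porcelain", "-L", "120,140", "physics/collision.cpp")
commit = blame.split()[0]  # first token of porcelain output is the commit hash

# Read that commit's full message; a good review record leaves a linkable
# review URL or ID here, which leads to the discussion and the decisions.
print(git("show", "-s", "--format=%B", commit))
```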
Ownership and reducing the bus-factor
Subscriptions are useful when you have areas of code that are the specialism of a few individuals, especially in critical-path code or complex, well-designed code. For example, in CryEngine we had some pretty funky, macro-generated code that looked after triangle collisions. One of the physics PhDs rewrote it so it was less funky and less code-smelly, but the rewrite added a single op per cycle, which cost a few frames per second - so we didn't ship the non-smelly version, and instead improved the original code with high-quality comments and the relevant performance metrics.
Also, if the person who knows an area of code happens to be a bus-factor problem, then ensuring they get to comment on, discuss and validate the team's thinking on that area of code can significantly reduce the bus-factor by disseminating their knowledge.
Culture
But the above is irrelevant without the people who care about such things and are willing to work and collaborate on such a system.
Either you're all in, or you're not in at all. If you are, the qualitative difference in output and speed of self-organising, caring, passionate teams is vast.
But this is not possible if the team has to work against the tool-set instead of the tool-set working for the team; if the tools do not get out of the way, it will fail.
Good teams use good tools, the best teams improve them.