The value of code review
Code review is one of the few widely accepted best practices.
Starting out as a solo dev, I literally dreamt of code reviews. Couldn’t wait to join a big team where we would review each other’s code and dance in the moonlight holding hands.
Well, that day came, and oh boy did I review. I went hard. Being the detail freak I am, doing code reviews was the best part of my day. Ok that and us playing PS4 vs the backend team. Ha!
I would block a couple of hours and go through every little detail. Not super effective, but I did learn. A lot.
I was quickly exposed to a variety of approaches and solutions. How people structured their code. What patterns they used. Features of the language I wasn’t aware of. You get it.
Reviews are a huge learning opportunity.
Soon, I developed a keen eye for spotting issues in the code. Unnecessary complexity. Error paths we did not handle. Hard-to-find corner cases. Some of them would never be caught in testing (unit, E2E, QA team), because the test steps were hard to come up with without the context of the code.
Reviews are a quality safeguard (for the codebase AND the product).
A colleague once said:
“Man, I don’t know how you spot this stuff. It’s like you run the code on your mind”
I was the chosen one. Or I had just spent hours running the code on my mind - ahem - on my machine. Probably the latter.
I found the holy grail. Everything needed to be scrutinised. Does it match our style guide? The architectural patterns? Are the tests complete?
And as with everything in life, taking any practice to the extreme is a bad idea.
Time spent on reviews meant less time for other activities. Oh, and reviews take their toll on the reviewee, too:
“Rebase Hell” - When you have 3 open PRs branching out of each other.
“The Beggar Dilemma” - “Hey man can you please, please review my PR? Then, I only got three more to go”
And beware, people don’t usually like it if you are a super thorough reviewer. ON EVERY. SINGLE. REVIEW. I still remember a heated discussion over a comment I made on whether an E2E test was clear enough.
They say pick your battles. Didn’t know it back then.
Reviews come at a cost
No matter how good an idea is, you need to figure out how it applies to your circumstances.
And surprise, surprise, not all reviews are equal. It pays to do a cost-benefit analysis to decide how much a review is worth.
Below is the rule of thumb I use to determine the value of a review.
Note: The result is not binary, but for the sake of simplicity I left it that way.
Ok, so I figured out the value of a review. What next? Well, I try to take into account the cost (aka time). Then, I can make an informed decision.
Do we have time? Are we under high pressure to deliver? Then spend minimum time reviewing low-value PRs or …
Do not do the review at all.
Yep, I said it. Me, the guy who went hard on reviews like Tom Cruise in the volleyball scene of Top Gun. There you go.
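To make the triage concrete, here is a toy sketch in Python. It is purely illustrative: the function name, the inputs, and the effort labels are all hypothetical, and it assumes you already have some rule of thumb that classifies a PR as high or low value.

```python
# Hypothetical triage helper. A toy illustration of the value-vs-cost
# decision described above, NOT a prescribed or complete rule.

def review_effort(pr_value: str, under_delivery_pressure: bool) -> str:
    """Decide how much review effort to spend on a PR.

    pr_value: "high" or "low" -- the output of whatever value
    rule of thumb your team uses (assumed, not defined here).
    """
    if pr_value == "high":
        # High-value changes get a proper review regardless of pressure.
        return "full review"
    if under_delivery_pressure:
        # Low value + time pressure: skim it, or skip the review
        # and tag the PR for a post-merge look.
        return "skip (tag 'async review')"
    return "quick review"


print(review_effort("low", under_delivery_pressure=True))
```

The point is not the code itself, but that the decision is explicit: value first, then cost, instead of reviewing everything with the same intensity by default.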
Two handy ideas you can use as safeguards when heading in this direction:
Provide artefacts in the PR
This can be a recording of the feature working, log files, and tests (unit, snapshot, E2E). Some proof that the PR does what it claims, with decent quality. This way the reviewer might quickly spot something without doing a full review.
Tag skipped PRs with a label (e.g. “async review”)
You can use this tag for any PR that you would like to have a look at, post-merge. Then, at your own pace, you can go back and do a proper review. Should you find any issues, you can open a PR to address them.
PS: In part 2, I will list lessons learnt on how to create a PR. Essentially, what makes those high-value PRs easier to review.
You cannot take ideas, opinions, or advice as law without the filter of direct experience.
- Dan Koe