I worked on a team that didn't do reviews because _everything_, including spikes, research, etc., was done by two engineers pairing. This was remote, cameras on, all day.
I found it utterly exhausting. I was somehow working at 100% capacity but producing output at 50% because so much of my cognitive bandwidth was taken up with the pairing process.
I've been on full-time pairing teams but noticed 2-5x more throughput, because there are fewer meetings, less code review, and less surfing the web. It's exhausting because suddenly I'm actually writing code 7+ hours a day, and that's really hard.
On every other team I've been on, without pairing, writing code is at most 3 hours a day. The rest of the time just gets chewed up by other communication channels.
I'd guess I've done somewhere in the ballpark of 1000 to 2000 code reviews. Over that time, my opinion of them has changed drastically. I can give something of a long laundry list (my apologies for it being kinda ranty...):
CR slows things down, often unnecessarily. It's overly time-consuming.
Time lost with needless explanations.
Somewhat impossible when the reviewer pool is one or two people. A bigger reviewer pool is probably not going to have context.
Creates a bias to not ship. "Waiting for CR" can be a big impediment to shipping. Perhaps tonight was the right time for you to get something into prod, not tomorrow, but you're waiting for a CR.
It's an example of process over people (AKA cargo-culting). When/where/how/why CR is important is situational. CR best practices are going to have different results in different situations. Often CR is just done blindly, because it is a best practice. It would be preferable to think deeply about what is important and why, rather than just "oh yeah, here is another hoop we jump through because #AmazonDidIt".
Stratifies the team. The senior/elite/favored people invariably get an easier time during review.
CR can get political. Scrum masters and managers scramble to unblock a team member and get something reviewed. That's great for that one time, but it reveals an utterly broken process otherwise. When a CR is "escalated", what's the chance the reviewer will actually spend the time needed, and the back-and-forths, to get things "properly" reviewed?
Conducive to nits, which are useless and counter-productive. Time spent on nits is waste, and it's draining to keep coming back to something to tweak it again: "is it good enough yet?"
Drive-by reviews without context.
Coming to agreement during CR is difficult. Not everyone is able to sit with conflict and/or resolve it.
CR is late in the code development process; it's one of the worst times to get feedback. After something has been done, polished, tested, and made production-ready, suddenly someone thinks it should be done differently (which is a lot easier to see when something is already done and working). It's akin to writing code three times: once to get it right, a second time to get it nice, and a third time for whatever the hell the reviewer wants.
Shows a lack of trust in the team. It is gatekeeping.
Does not scale well. I was once told by a reviewer that I write code faster than they could read it. (At the time I reviewed about 6 PRs every day and sent out about 3. I was doing 80% of the reviews; it was a shit-show. The reviews I received were slow and useless because the team members were focused on trying to deliver their over-stretched projects, and I was too stretched to give proper feedback without working 100 hours a week.)
That latter branching strategy ("ship/show/ask") leaves it up to the code author to decide: "does this comment typo fix need a review? Does this fix, in code that I authored originally and know everything about, really need a blocking review?" In that last case, if a person can just merge a bunch of precursor refactors right away, they can get those knocked out; the PR they send out for review is then clean and interesting, and there is no time lost managing branches or pestering someone for a review.
A second option to help make things better is to allow "after-merge" reviews too. On a few teams I've been on, we did that enough that we learned what kinds of things are fine to ship on their own. Partly, too, there wasn't the bandwidth to review everything. It was not a bad thing.
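To make those two options concrete, here's a toy sketch of the per-change triage an author might run. This isn't real tooling, and it isn't how any team I was on actually encoded it; the `Change` fields and the rules are made-up illustrations of the idea, not a prescription:

```python
# A toy sketch of "ship/show/ask" triage, applied by the author per change.
# The Change fields and the rules below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Change:
    touches_unfamiliar_code: bool  # code the author didn't originally write
    changes_behavior: bool         # vs. comment typos, renames, mechanical refactors
    wants_design_feedback: bool    # "I'd actually like a second opinion here"

def triage(change: Change) -> str:
    """Return which flow a change goes through under ship/show/ask."""
    if change.touches_unfamiliar_code or change.wants_design_feedback:
        return "ask"   # open a PR and wait for a blocking review
    if change.changes_behavior:
        return "show"  # merge now, announce it, invite an after-merge review
    return "ship"      # comment typo fixes and the like: just merge

# A comment typo fix in code you own goes straight in:
print(triage(Change(False, False, False)))  # -> "ship"
```

The point is that the blocking review becomes one outcome among several, chosen by the person with the most context, rather than the default for everything.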
A number of these critiques seem like problems with the work culture rather than code reviews. Yes, nitpicking, cargo culting, gatekeeping, and favoritism are problems, but are they problems with _code reviews_ specifically?
Some of the others are actually desirable features of code reviews, in my experience. Yes, we don't want needless explanations or reviews that only ever go to a small number of people, but my experience is that code reviews are a good mechanism for encouraging authors to write self-explanatory code and for sharing expertise around the team.
This is a bunch of downsides, but I guess I'm asking: does anyone intelligent think that, given the alternatives, on balance, it's better not to do them? You seem quite against them; what would you propose as a concrete alternative, and what would the result likely be?
At the end of the day, IMO it's like Agile & Scrum: you actually have to think hard about your context; you can't just go in and apply a boilerplate, one-size-fits-all solution. (I've got a chip on my shoulder these days, as I feel too many in IT don't really want to think hard about their context. Too much of: "I want the magic formula for my team to 'go fast!'". The mantra: just do the stand-ups, do the retro, do the code review, do the planning poker, add checkstyle (a code syntax linter), and it'll all go great! I've now learned to ask: but did the project ship in time? Were any of those items actually useful in this context? If so, tell me exactly why, and give details. I've also learned that if a team is far in the stone age, trying to adopt all of the best practices is likely to take longer than the remaining lifetime of that team and/or company. Which means they will do nothing further useful other than churn on process changes.)

Which brings me back to CRs: how many of them are actually useful? Some CRs are useful, for sure. But how can a team spend more time doing useful CRs and less time doing non-useful ones? What's more, every PR is a blocking activity, and if we are blocking development for a usually-not-useful activity, that is a problem. The alternative does not have to be throwing out CR completely; I already mentioned two: "ship/show/ask" and post-merge review.