Wednesday, September 20, 2017

How do you measure the value of a PR Review?


Thinking out loud: you employ a developer whose sole job is to review PRs submitted by other developers. How do you measure the value that developer is providing?


The formula


Start with the time saved by not having to (1) debug issues in production, plus (2) the time taken to fix the code, plus (3) the time taken to review and deploy the fixes, all multiplied by the cost of that time. (A rough code sketch of the whole calculation follows the last minus below.)

Plus...

The revenue generated by customers who would not have signed up if that bug had shipped.

Plus...

The revenue retained from existing customers who would have canceled their subscriptions if the bug had gotten through.

Plus...?

Minus...

The cost of the review-developer's time spent doing the review.

Minus...

The cost of the noise created by the review-developer trying to make the value look obvious by over-commenting on the PR. (A quick aside: if you are measured or incentivized on doing PR reviews, you will tend to comment more frequently to show value. Sometimes that is value, but sometimes it is costly noise.)

Minus...?
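
If it helps to see the formula as code, here's a minimal sketch. Every name and input below is hypothetical; the point is only the shape of the calculation.

    // Minimal sketch of the formula above; all inputs are hypothetical estimates.
    interface ReviewValueInputs {
      hoursSavedDebugging: number;    // (1) production debugging avoided
      hoursSavedFixing: number;       // (2) fixing the code
      hoursSavedRedeploying: number;  // (3) reviewing and deploying the fix
      hourlyCost: number;             // cost of an hour of that time
      newRevenueProtected: number;    // sign-ups that would have been lost to the bug
      retainedRevenue: number;        // subscriptions that would have been canceled
      reviewerCost: number;           // cost of the reviewer's time on this PR
      noiseCost: number;              // cost of over-commenting noise
    }

    function reviewValue(i: ReviewValueInputs): number {
      const timeSaved =
        (i.hoursSavedDebugging + i.hoursSavedFixing + i.hoursSavedRedeploying) *
        i.hourlyCost;
      return timeSaved + i.newRevenueProtected + i.retainedRevenue
        - i.reviewerCost - i.noiseCost;
    }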


Review Feedback and Measurement

What if the PR submitter added value markers to each comment that the reviewer made?

Given that the submitter has to read through all the comments anyway, it should take very little effort to tag each one with a marker. In Stash there's the "like" link, and in GitHub you have a collection of emoji reactions.

An automated script could then aggregate the tags assigned to each reviewer.
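
As a rough illustration against GitHub, something like the sketch below could do the aggregation. The endpoint and the reactions roll-up field are my assumptions about the REST API (and you would need the Stash equivalent for Stash); treat it as a starting point, not a finished tool.

    // Sums emoji reactions on a PR's review comments, grouped by comment author.
    // Assumes GET /repos/{owner}/{repo}/pulls/{pull}/comments returns each
    // comment with a user login and a reactions roll-up.
    interface ReviewComment {
      user: { login: string };
      reactions?: { total_count: number };
    }

    async function reactionTotalsByReviewer(
      owner: string,
      repo: string,
      pull: number,
      token: string
    ): Promise<Record<string, number>> {
      const res = await fetch(
        `https://api.github.com/repos/${owner}/${repo}/pulls/${pull}/comments`,
        { headers: { Authorization: `token ${token}` } }
      );
      const comments: ReviewComment[] = await res.json();
      const totals: Record<string, number> = {};
      for (const c of comments) {
        totals[c.user.login] =
          (totals[c.user.login] ?? 0) + (c.reactions?.total_count ?? 0);
      }
      return totals;
    }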

What is a valuable comment on a PR?


This brings up the question of what makes a comment valuable.

"I like that you used reduce() here instead of filter() and map()" - is that a valuable comment? I think that it is. Perhaps sometimes it needs more context. The value here is that your telling the PR submitter that you (1) read through it and understood it and (2) thought that a particular section was well written.

A comment like this has even more value if the submitter has not used an optimal pattern (that you like) in the past or in another section of the PR. I have seen cases where a submitter used a "perfect" pattern in one file and then something "messy" to solve the same type of problem in another module. Highlighting the good code has as much value as flagging the code that "needs improvement".

Thursday, September 7, 2017

The Architectural Decision is Wrong

Every architectural decision we make today will be wrong at some point in the future.


It will be wrong for one of two reasons:

  1. Information we didn't know at the time we made the decision.
  2. Technology that's changed or become available since then.



This also goes for the technology stack that you choose.

When you make your decision, list all of the reasons why you're taking that path. Do it in Jira or GitHub Issues (or whatever tracking system you use), in the first post or description of the ticket.

Copy/paste the two numbered points from the top of this post into the bottom of the description and stop wasting any more time on it. They are a reminder that the decision is going to be wrong one day.

Months later, when you can't remember why you made the decision, you can revisit the ticket and note what you could have done differently at the time to reach a better decision. Or perhaps it was the best decision you could have made with the information you had.