Code Reviews – From Bottleneck to Superpower

Title: Code Reviews – From Bottleneck to Superpower
Date: Wednesday, December 9, 11:00 AM ET/8:00 AM PT
Duration: 1 hr
Speaker: Michaela Greiler, Software Researcher and Consultant

Resources:
TechTalk Registration

Code review skills for emotionally intelligent developers (O’Reilly Video, Free for ACM Members)
Implementing Effective Code Reviews: How to Build and Maintain Clean Code (O’Reilly Book, Free for ACM Members)
Software Testing, Second Edition (O’Reilly Book, Free for ACM Members)
Software Quality Assurance: A Self-Teaching Introduction (Skillsoft Book, Free for ACM Members)
Role of Testing in Managing Clean Code (Skillsoft Course, Free for ACM Members)
Unit Testing in Java (ScienceDirect Book, Free for ACM Members)

I am wondering about the current state of research regarding path expressions and symbolic execution.

Please elaborate on this topic: a code reviewer who accepts only the minimal change versus a code reviewer who also accepts changes supplementary to the absolute minimal change. The rationale behind this question is the contradiction between the reviewer's easy task of reading only one changed line, or only a few changed characters in a line, and the harder task of checking a function that has been completely rewritten, including the strictly necessary modification, provided all existing unit tests and the new supplementary unit and integration tests succeed. I use each and every occasion to refactor the function I change. The error in that function was possible in a context that was not clear enough and not of high quality. A function is written once but read many times. I use each and every occasion to change a function to make it more readable, nicer, and of higher quality for the next programmer who is going to read it, but this contradicts the idea that only minimal changes are accepted by some code reviewers.
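To make the tension concrete, here is a minimal, hypothetical Python sketch (the function name and the bug are invented for illustration): version A is the absolute minimal fix that touches a single line, while version B bundles the same fix with an opportunistic readability refactor, which asks the reviewer to re-read the whole function.

    # Hypothetical example: the same bug fixed two ways.

    # Version A: the absolute minimal change (only the return line is touched).
    def average_order_value(orders):
        total = 0
        count = 0
        for order in orders:
            total += order["value"]
            count += 1
        return total / count if count else 0  # fix: empty input no longer raises ZeroDivisionError

    # Version B: the same fix plus an opportunistic readability refactor,
    # which forces the reviewer to re-read the whole function.
    def average_order_value_refactored(orders):
        values = [order["value"] for order in orders]
        if not values:  # same fix, now explicit
            return 0
        return sum(values) / len(values)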

What language are you going to use for the code review demos? Python?

Is there much literature (or anecdotal evidence) on how code reviews affect individuals learning to program?

This is a nice presentation.

My feedback:

Humans still review code without help from machines. It's nice to hear a talk about how to spend quality time on code. It's a humanizing discussion, and I like the fantastic list of recommendations. The two recommendations below are probably the parts of the list that are up to developers to take on themselves.

  1. ML Data. Most workloads include machine learning components. Asking about the data needed to test such a component comes up because it is a different question from automating the tests themselves (see the first sketch after this list).
  2. Compilation times. Go compiles quickly, so this is not a big deal there, but it still matters for an older language like C; for example, gcc tracks build times for its codebase over commits. Measurement can be automated (see the second sketch after this list), though it is also worth asking about in a review in certain continuous deployment environments.
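On the first point, here is a minimal sketch of what asking about the data can turn into during review, written as pytest-style checks; the fixture path, column names, and thresholds are assumptions made up for illustration, not part of any real project.

    # Hypothetical data checks a reviewer might request alongside the model code.
    # Assumes a CSV fixture at tests/fixtures/sample_orders.csv (invented path).
    import csv

    FIXTURE = "tests/fixtures/sample_orders.csv"

    def test_fixture_has_expected_schema():
        with open(FIXTURE, newline="") as f:
            rows = list(csv.DictReader(f))
        assert rows, "fixture must not be empty"
        expected_columns = {"order_id", "value", "label"}
        assert expected_columns.issubset(rows[0].keys())

    def test_labels_are_not_degenerate():
        with open(FIXTURE, newline="") as f:
            labels = {row["label"] for row in csv.DictReader(f)}
        # A single-class fixture would make the component's tests meaningless.
        assert len(labels) >= 2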
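On the second point, here is a minimal sketch of automating build-time measurement per commit, assuming a generic make-based build and a plain CSV log; the build command and file names are assumptions, not any specific project's tooling.

    # Measure wall-clock build time for the current checkout and append it to a CSV,
    # so the trend over commits can be reviewed alongside the code.
    import csv
    import subprocess
    import time

    def measure_build_time(build_cmd=("make", "-j4"), log_path="build_times.csv"):
        commit = subprocess.run(
            ["git", "rev-parse", "--short", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

        start = time.monotonic()
        subprocess.run(list(build_cmd), check=True)  # fails loudly if the build fails
        elapsed = time.monotonic() - start

        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([commit, f"{elapsed:.1f}"])
        return commit, elapsed

    if __name__ == "__main__":
        commit, seconds = measure_build_time()
        print(f"{commit}: {seconds:.1f}s")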

Anything else?