Using Cyclomatic Complexity effectively

Method- and class-level thresholds for Cyclomatic Complexity (CC) are sometimes used as a means of controlling code quality: for example, a method with CC greater than 10 or a class with CC greater than 30 is a candidate for refactoring. But this does not catch procedural partitioning, where a monolithic method is chopped into smaller units (or a class into smaller classes) purely to duck the threshold: no single method or class exceeds it, yet the total conditional logic in the codebase hasn't changed. Average CC per method or class doesn't help either, because of significant variance around the average (e.g. get/set methods on beans drag the average CC per method down).
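
To make the loophole concrete, here is a minimal Java sketch (all names invented for illustration): a validation method whose branches add up to CC 12 trips a per-method threshold of 10, but splitting it mechanically in two makes every method pass while the codebase carries exactly the same branching.

    class Validator {
        static class Request {
            String name, email, zip, phone;
            int age;
        }

        // Before: 11 decision points in one method, so CC = 12,
        // flagged by a per-method threshold of 10.
        static boolean isValid(Request r) {
            if (r.name == null) return false;
            if (r.name.isEmpty()) return false;
            if (r.age < 0) return false;
            if (r.age > 150) return false;
            if (r.email == null) return false;
            if (!r.email.contains("@")) return false;
            if (r.zip == null) return false;
            if (r.zip.length() != 5) return false;
            if (r.phone == null) return false;
            if (r.phone.length() < 7) return false;
            if (r.phone.length() > 15) return false;
            return true;
        }

        // After: the same branches split mechanically across helpers.
        // Every method is now comfortably under the threshold, yet the
        // total conditional logic in the codebase is unchanged.
        static boolean isValidSplit(Request r) {
            return identityOk(r) && contactOk(r);   // CC = 2
        }

        static boolean identityOk(Request r) {      // CC = 5
            if (r.name == null) return false;
            if (r.name.isEmpty()) return false;
            if (r.age < 0) return false;
            if (r.age > 150) return false;
            return true;
        }

        static boolean contactOk(Request r) {       // CC = 8
            if (r.email == null) return false;
            if (!r.email.contains("@")) return false;
            if (r.zip == null) return false;
            if (r.zip.length() != 5) return false;
            if (r.phone == null) return false;
            if (r.phone.length() < 7) return false;
            if (r.phone.length() > 15) return false;
            return true;
        }
    }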

A better measurement would be 'cyclomatic complexity per 100 lines of source code' (how about calling this CC100?). Now one would need to make real improvements to the code (e.g. removing duplication, introducing polymorphism) to improve the metric. Tools like JavaNCSS can help calculate this: it reports the total size of the codebase (NCSS, non-comment source statements) and the CCN of each method. Admittedly, CC100 can itself be skewed by duplication of low-CC code, so we should use it in conjunction with a duplication detector like PMD's copy-paste detector. A combination of near-zero duplication and low CC100 should give a good indication of this aspect of code quality.
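
The arithmetic itself is trivial; here is a sketch, assuming the per-method CCNs and the NCSS total have already been pulled out of JavaNCSS output (the parsing is elided, and the numbers are hypothetical):

    import java.util.List;

    class Cc100 {
        // CC100 = (sum of per-method CCN / total NCSS) * 100
        static double cc100(List<Integer> methodCcns, int totalNcss) {
            int totalCc = methodCcns.stream().mapToInt(Integer::intValue).sum();
            return 100.0 * totalCc / totalNcss;
        }

        public static void main(String[] args) {
            List<Integer> ccns = List.of(1, 1, 4, 7, 12, 3, 2); // hypothetical per-method CCNs
            int ncss = 150;                                     // hypothetical codebase size
            // (1+1+4+7+12+3+2) = 30; 30 / 150 * 100 = 20.0
            System.out.printf("CC100 = %.1f%n", cc100(ccns, ncss));
        }
    }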

All that now remains to be agreed upon is a reasonable threshold for CC100. Here is a heuristic: the typical CC threshold for a method is 10, and the typical NCSS threshold for a method is 30. So 100 lines of code would be roughly 3 methods, and therefore a reasonable threshold for CC100 would be 3 methods * CC of 10 per method = 30.

3 comments:

sriram said...

Oh, but we do have the choice of writing less conditional code. A lot of the time, blocks of conditional code turn out to be code smells crying out to be refactored into polymorphic code. They may represent abstractions/patterns waiting to be discovered and fleshed out.
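
For illustration (invented names, the textbook shapes example): a type code plus a conditional, versus the same branching dissolved into polymorphic dispatch.

    class Areas {
        enum Kind { CIRCLE, SQUARE }

        static class ShapeData {
            Kind kind;
            double radius, side;
        }

        // Before: a type code and a conditional that every new kind
        // of shape forces us to extend.
        static double area(ShapeData s) {
            switch (s.kind) {
                case CIRCLE: return Math.PI * s.radius * s.radius;
                case SQUARE: return s.side * s.side;
                default: throw new IllegalArgumentException("unknown kind: " + s.kind);
            }
        }
    }

    // After: the abstraction is fleshed out and the branch dissolves
    // into polymorphic dispatch; adding a shape means adding a class,
    // not editing a conditional.
    interface Shape { double area(); }

    class Circle implements Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }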

Jit Roy Chowdhury said...

I too am a bit confused. By putting code size in the denominator of your formula, (CC / lines of code) x 100, you seem to be encouraging people to increase program length as much as possible to get a lower CC100. This may lead to bloating of the codebase, or a proliferation of classes. Rather, IMHO, we should aim both to keep the average CC down and to keep the code concise. Possibly a figure like (Avg CC + LOC/100) would be a good measure. We can also think of attaching different weights, e.g. (Avg CC x 0.6 + (LOC/100) x 0.4).

sriram said...

Hi Jit
You have revived a 7-year-old post! Is your avg. CC per line, per method, per class, or per module? In general, normalizing across very different measures only makes sense if they are of the same order of magnitude.

Also, in my experience, developers are rarely crooked enough to purposely pad LOC (without introducing duplication) just to dress up a metric; it isn't something that happens by accident.
