Notes on “Slice-Based Cohesion Metrics and Software Intervention”

May 11, 2009

“Slice-Based Cohesion Metrics and Software Intervention”: Timothy M. Meyers, David Binkley

The authors applied CodeSurfer-based slicing to a large code base. According to the authors, slice-based code metrics successfully quantify code quality. Several metrics are provided:

  1. Tightness: Ratio of the number of statements common to every slice to module length
  2. MinCoverage: Ratio of the smallest slice's length to module length
  3. Coverage: Average ratio of slice length to module length
  4. MaxCoverage: Ratio of the largest slice's length to module length
  5. Overlap: Average fraction of each slice's statements that are common to all slices

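The five metrics are easy to compute once you have the slices. Here is a minimal sketch, assuming each slice is represented as a set of statement indices for one output variable (the representation and function name are my own, not from the paper):

```python
def cohesion_metrics(slices, module_length):
    """Slice-based cohesion metrics over a module's output-variable slices.

    `slices` is a list of sets of statement indices (one slice per output
    variable); `module_length` is the module's statement count.
    """
    common = set.intersection(*slices)  # statements present in every slice
    return {
        # Tightness: fraction of the module that is in every slice
        "tightness": len(common) / module_length,
        # Coverage: average slice length relative to module length
        "coverage": sum(len(s) for s in slices) / (len(slices) * module_length),
        # Min/MaxCoverage: smallest/largest slice relative to module length
        "min_coverage": min(len(s) for s in slices) / module_length,
        "max_coverage": max(len(s) for s in slices) / module_length,
        # Overlap: average fraction of each slice common to all slices
        "overlap": sum(len(common) / len(s) for s in slices) / len(slices),
    }

# Toy module of 10 statements with two output-variable slices:
m = cohesion_metrics([{0, 1, 2, 3, 4}, {0, 1, 5, 6}], 10)
```

In the toy example the two slices share statements {0, 1}, so Tightness is 0.2 and Overlap is (2/5 + 2/4)/2 = 0.45; high values on all five metrics would indicate a cohesive module whose slices largely coincide.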
Apparently this work was not possible in the past because no slicing tool properly handled all of C. It’s a shame that the only “good-enough” tool is costly; I’d like to get an academic license to play with it.

“It is believed that smaller intraprocedural slice size indicates a more modular design and thus higher code quality; a formal investigation of this conjecture is left as future work.” Lately I am in love with “future work” statements.

Of interest for my DB2 days: “The above data suggests that slice-based metrics have the potential to quantify the deterioration in cohesion that a program often undergoes as it ages.”

I’ve read through this paper twice and I don’t see where the authors empirically support their claim that the metrics correlate with quality. The argument seems to be that higher cohesion implies higher quality, and that, on an intuitive level, the metrics directly measure cohesion.
