Robust Statistical Methods for Empirical Software Engineering

From Empirical Software Engineering comes a paper on robust statistical methods for empirical software engineering research. The paper is free to read (link) for approximately 30 days from the date of this posting.


There have been many changes in statistical theory in the past 30 years, including increased evidence that non-robust methods may fail to detect important results. The statistical advice available to software engineering researchers needs to be updated to address these issues. This paper aims both to explain the new results in the area of robust analysis methods and to provide a large-scale worked example of the new methods.

We summarise the results of analyses of the Type 1 error rates and power of standard parametric and non-parametric statistical tests when applied to non-normal data sets. We identify parametric and non-parametric methods that are robust to non-normality. We present an analysis of a large-scale software engineering experiment to illustrate their use. We illustrate the use of kernel density plots, and parametric and non-parametric methods, using four different software engineering data sets. We explain why the methods are necessary and the rationale for selecting a specific analysis.

We suggest using kernel density plots rather than box plots to visualise data distributions. For parametric analysis, we recommend trimmed means, which can support reliable tests of the differences between the central location of two or more samples. When the distribution of the data differs among groups, or we have ordinal scale data, we recommend non-parametric methods such as Cliff's δ or a robust rank-based ANOVA-like method.
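To make the two headline recommendations concrete, here is a minimal sketch in Python of a 20% trimmed mean (via `scipy.stats.trim_mean`) and a hand-rolled Cliff's δ. The data are hypothetical skewed samples invented for illustration; this is not code from the paper, and the group names and parameters are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical skewed data, e.g. task completion times (in minutes)
# for two treatment groups in a software engineering experiment.
group_a = rng.lognormal(mean=1.0, sigma=0.6, size=40)
group_b = rng.lognormal(mean=1.3, sigma=0.6, size=40)

# 20% trimmed mean: discard the most extreme 20% of observations in
# each tail before averaging, reducing the influence of outliers and
# heavy tails that distort the ordinary mean.
tm_a = stats.trim_mean(group_a, proportiontocut=0.2)
tm_b = stats.trim_mean(group_b, proportiontocut=0.2)

# Cliff's delta: over all cross-group pairs, the proportion of pairs
# where b exceeds a minus the proportion where a exceeds b. It ranges
# from -1 to 1, with 0 indicating no tendency either way, and needs
# only ordinal-scale data.
greater = sum(x > y for x in group_b for y in group_a)
less = sum(x < y for x in group_b for y in group_a)
delta = (greater - less) / (len(group_a) * len(group_b))

print(f"trimmed means: {tm_a:.2f} vs {tm_b:.2f}, Cliff's delta: {delta:.2f}")
```

For a significance test on trimmed means, Yuen's test (a trimmed-mean analogue of Welch's t-test) is the standard companion; it is available in R's WRS2 package, which the robust-statistics literature commonly uses.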
