The case for using the repeatability coefficient when calculating test-retest reliability

PLoS One. 2013 Sep 9;8(9):e73990. doi: 10.1371/journal.pone.0073990. eCollection 2013.

Abstract

The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to prefer measurement error (ME) indices, such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD), over relative reliability coefficients, such as Pearson's r and the Intraclass Correlation Coefficient (ICC), when selecting tools to measure change and when inferring that an observed change is true. The authors present the statistical methods that make up the current approach to evaluating the test-retest reliability of assessment tools and outcome measures. Selected examples from a previous test-retest study are used to illustrate the added advantage that knowledge of a tool's ME brings to clinical decision making. The CR is expressed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that the tool can measure.
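To make the abstract's central quantity concrete, the following is a minimal sketch in Python of the standard Bland-Altman form of the CR, computed as 1.96 times the standard deviation of paired test-retest differences; the scores and function name below are illustrative assumptions, not data or code from the paper.

    import numpy as np

    def coefficient_of_repeatability(test, retest):
        """Coefficient of Repeatability (CR) for paired test-retest scores.

        CR = 1.96 * SD of the test-retest differences (Bland & Altman).
        Two measurements of a stable subject are expected to differ by
        less than the CR about 95% of the time, so an observed change
        larger than the CR can be interpreted as true change.
        """
        diffs = np.asarray(retest, dtype=float) - np.asarray(test, dtype=float)
        # ddof=1: sample standard deviation of the differences
        return 1.96 * diffs.std(ddof=1)

    # Hypothetical scores in the tool's own units (illustrative only)
    test   = [41, 38, 45, 50, 36, 44, 39, 47]
    retest = [43, 37, 46, 49, 38, 45, 40, 46]
    print(f"CR = {coefficient_of_repeatability(test, retest):.2f}")

Because the CR is in the tool's own units, a clinician can compare a patient's observed change score against it directly: changes smaller than the CR are indistinguishable from measurement error, while larger changes exceed the minimal detectable change threshold.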

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Assertiveness
  • Child
  • Empathy
  • Evidence-Based Medicine / standards*
  • Female
  • Humans
  • Male
  • Models, Theoretical
  • Reproducibility of Results*

Grants and funding

This project was funded through a doctoral scholarship awarded to the first author by the Centre for Research into Disability and Society and the School of Occupational Therapy and Social Work, Curtin University, Perth, Australia. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.