AgreeStat360 is an app that implements various methods for evaluating the extent of agreement among two or more raters. These methods are discussed in detail in the 4th edition of the book "Handbook of Inter-Rater Reliability" by Kilem L. Gwet. An e-book version of this book, in the form of a printable PDF file, can be obtained.
For two raters, organize your data either as a contingency table (for categorical ratings only) or as a two-column table of raw data. For three or more raters, your data can be in the form of columns of raw scores (for categorical or quantitative ratings) or, alternatively, in the form of a distribution of raters by response category (for categorical ratings only). Check the "Load a test dataset" checkbox to populate the data grid with test data, then click the Execute button.
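To illustrate the two layouts for two raters, the sketch below builds a contingency table from two-column raw data. The ratings and category labels are hypothetical; AgreeStat360 itself accepts the data through its grid, so this is only a minimal stand-alone illustration of the relationship between the two formats.

```python
from collections import Counter

# Hypothetical ratings from two raters on the same six subjects.
rater1 = ["yes", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "no", "yes", "yes"]

# Two-column raw-data layout: one (rater1, rater2) pair per subject.
raw_table = list(zip(rater1, rater2))

# Equivalent contingency-table layout: counts of each rating pair.
contingency = Counter(raw_table)

categories = sorted(set(rater1) | set(rater2))
for a in categories:
    row = [contingency[(a, b)] for b in categories]
    print(a, row)
```

Both layouts carry the same information, which is why either one can be supplied for a two-rater analysis.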
Your data can be captured in two ways: you can key the ratings directly into the data grid, or import them from a CSV text file or an MS Excel file. Whichever method you choose, you can highlight the portion of the grid you want to analyze and click the red action button. The selected data will be described below the associated red action button.
Kilem L. Gwet, PhD