AgreeStat360 is an application that implements various methods for evaluating the extent of agreement among two or more raters. These methods are discussed in detail in the two volumes that make up the 5th edition of the book "Handbook of Inter-Rater Reliability" by Kilem L. Gwet. Both volumes are available as printable PDF files and can be obtained here, among other books.
For two raters, organize your data either as a contingency table (for categorical ratings only) or as a two-column table of raw categorical or quantitative ratings. For three or more raters, your data can take the form of columns of raw scores (for categorical or quantitative ratings) or, alternatively, a distribution of raters by response category (for categorical ratings only). Check the "Load a test dataset" checkbox to populate the data grid with test data, then click the Execute button to see the results.
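To make the two-rater contingency-table layout concrete, here is a small illustrative sketch (not AgreeStat360 code) that computes Cohen's kappa, one of the agreement coefficients covered in the Handbook, from such a table; the category labels and counts are made up for the example:

```python
# Illustrative only: a 2-rater contingency table and Cohen's kappa
# computed from it. Rows are rater 1's categories, columns rater 2's.

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table of counts."""
    n = len(table)
    total = sum(sum(row) for row in table)
    # Observed agreement: proportion of subjects on the diagonal.
    p_o = sum(table[i][i] for i in range(n)) / total
    # Chance agreement: sum of products of the raters' marginal proportions.
    row_marg = [sum(row) / total for row in table]
    col_marg = [sum(table[i][j] for i in range(n)) / total for j in range(n)]
    p_e = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts: rater 1 (rows) vs. rater 2 (columns), categories A and B.
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # → 0.4
```

The same table, keyed into the data grid or imported from a file, is what the app analyzes when you click Execute.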
Your data can be entered in two ways: you can key the ratings directly into the data grid, or import them from a CSV text file or an MS Excel file. Whichever method you choose, highlight the portion of the grid you want to analyze and click the red action button. The selected data will be described below the associated red action button.
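As an illustration of the raw-ratings CSV layout for three or more raters, the sketch below parses a small made-up file with one row per subject and one column per rater (the column names and layout are assumptions for the example; the app's import dialog defines the exact format it expects):

```python
# Illustrative only: reading a hypothetical 3-rater CSV of raw
# categorical ratings, one subject per row, one column per rater.
import csv
import io

raw = """Subject,Rater1,Rater2,Rater3
1,A,A,B
2,B,B,B
3,A,B,A
"""

rows = list(csv.DictReader(io.StringIO(raw)))
# Count subjects on which all three raters assigned the same category.
agree_all = sum(1 for r in rows
                if r["Rater1"] == r["Rater2"] == r["Rater3"])
print(f"{agree_all}/{len(rows)} subjects rated identically by all raters")
```

Importing such a file and highlighting the three rater columns in the grid selects exactly this block of ratings for analysis.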
Kilem L. Gwet, PhD