Calibrate Forms and Evaluations
Regardless of how well a form (a collection of questions used to evaluate agent interactions) is crafted, evaluators may still interpret it differently. Calibration lets you validate the accuracy and usability of a form by testing it with several evaluators and seeing how closely they evaluate the same interaction.
To run a calibration flow, you select an interaction and distribute it, together with a form, to several evaluators. You can choose an interaction that is new or one that has already been evaluated. The evaluators receive the form on the Tasks page of their My Zone application and are not aware that they are participating in a calibration flow; they evaluate the interaction as though it were part of the regular evaluation flow.
You can view the results of the calibration flow on the Calibrations page and compare the scores across evaluators. If the deviation between scores is high, consider refining the form to increase its validity.
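The Calibrations page computes the score comparison for you, but the underlying idea can be sketched in a few lines. This is an illustrative example only, assuming each evaluator's result is a single percentage score; the sample scores are hypothetical.

```python
import statistics

# Hypothetical scores (percentages) that four evaluators gave the
# same interaction using the same form during a calibration flow.
scores = [82.0, 78.0, 91.0, 74.0]

mean_score = statistics.mean(scores)   # average score across evaluators
deviation = statistics.stdev(scores)   # spread between evaluators

print(f"mean={mean_score:.2f}, stdev={deviation:.2f}")
# A large standard deviation relative to the mean suggests evaluators
# interpret the form differently, so the form may need refinement.
```

Here the mean is 81.25 with a standard deviation of about 7.3 points, a fairly wide spread that would justify reviewing the form's questions for ambiguity.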

Here are two use cases for calibration:

Calibrate a new form: you distribute an interaction, together with a form that has never been used for evaluation, to several evaluators. You can run this calibration as a small pilot study to check the validity of the form before using it in a Quality Plan.

Calibrate an existing form: you distribute an interaction that has already been evaluated, together with the form that was used to evaluate it, to several evaluators.