ASSESSING AND IMPROVING INTER-RATER AND REFERENT-RATER AGREEMENT OF PILOT PERFORMANCE EVALUATION
Thesis, posted on 17.12.2018 by Allen Xie
The Federal Aviation Administration (FAA) has been promoting the Advanced Qualification Program (AQP) for pilot training and checking at Federal Aviation Regulations (FAR) Part 121 and Part 135 air carriers. In pilot performance evaluation, instructors and evaluators assign scores to a student based on specific grading standards. To ensure the best possible quality of training and the highest level of safety, it is vital that different instructors and evaluators grade students against the same standard. Inter-rater and referent-rater agreement are therefore paramount for calibrating performance evaluation across instructors and evaluators. This study was designed to test whether a focused workshop could increase the level of inter-rater and referent-rater agreement. A pre-test post-test control group experiment was conducted with a total of 29 Certified Flight Instructors (CFIs) at Purdue University. Participants were asked to watch several pre-scripted video flight scenarios recorded in an Embraer Phenom 100 flight training device (FTD) and assign grades to the student pilots in the videos. After a rater training workshop consisting of Behavior-Observation Training, Performance-Dimension Training, and Frame-of-Reference Training, participants in the treatment group achieved a significantly higher level of inter-rater and referent-rater agreement.
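To illustrate the kind of statistic involved, the sketch below computes Cohen's kappa, one common chance-corrected measure of agreement between two raters. This is only an illustrative example: the thesis may use a different agreement statistic, and the grades in the example are hypothetical, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected by chance from the marginal
    distributions of each rater's grades.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters graded independently.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 1-5 grades from two instructors on six maneuvers.
grades_a = [4, 3, 5, 2, 4, 3]
grades_b = [4, 3, 4, 2, 4, 3]
print(round(cohens_kappa(grades_a, grades_b), 3))  # → 0.76
```

Values near 1 indicate strong agreement beyond chance; a pre-test vs. post-test comparison of such a statistic is one way a rater-calibration workshop's effect could be quantified.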