This study addresses the limitations of traditional disaster medicine course assessments, including reliance on a single evaluation format, delayed feedback mechanisms, and gaps in competency mapping, by developing a diversified assessment system leveraging the Rain Classroom platform. The system incorporates six interconnected evaluation components across the learning cycle: pre-class preparation, pre-class tests, case discussions, skills assessment, post-class tests, and post-class feedback, collectively forming a three-dimensional “cognitive-skill-attitude” assessment framework. In the assessment design, the weighting of practical skill evaluation is elevated to 40% to prioritize the development of students’ disaster response competencies. Additionally, an innovative multi-subject evaluation model (“self–peer–teacher”) is implemented within disaster scenario simulations, utilizing standardized scoring rubrics. This methodology not only enables comprehensive performance evaluation but also fosters teamwork and reflective practice. Implementation outcomes demonstrated that the system effectively evaluates learning progress through multi-modal assessments, enhances disaster rescue knowledge and skill proficiency, and achieves the predefined pedagogical objectives.
Keywords: Disaster medicine, Rain Classroom, competency evaluation, scenario simulation
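
To make the weighting scheme concrete, the following is a minimal sketch of how the composite grade could be computed. Only the 40% weighting of the skills assessment is stated in the abstract; the weights of the other five components and the self–peer–teacher split are illustrative assumptions, not values taken from the study.

```python
# Hypothetical composite-score calculation for the six-component assessment.
# Only the 0.40 weight for skills_assessment is from the paper; all other
# weights and the self/peer/teacher split are assumed for illustration.

# Assumed weights for the six evaluation components (sum to 1.0).
COMPONENT_WEIGHTS = {
    "pre_class_preparation": 0.10,
    "pre_class_test": 0.10,
    "case_discussion": 0.15,
    "skills_assessment": 0.40,   # elevated weighting stated in the abstract
    "post_class_test": 0.15,
    "post_class_feedback": 0.10,
}

# Assumed split for the self-peer-teacher scoring of scenario simulations.
RATER_WEIGHTS = {"self": 0.2, "peer": 0.3, "teacher": 0.5}


def skills_score(rubric_scores: dict) -> float:
    """Combine self, peer, and teacher rubric scores (0-100) for the skills component."""
    return sum(RATER_WEIGHTS[rater] * score for rater, score in rubric_scores.items())


def composite_score(component_scores: dict) -> float:
    """Weight the six component scores (0-100) into a single course grade."""
    return sum(COMPONENT_WEIGHTS[name] * score for name, score in component_scores.items())


if __name__ == "__main__":
    skills = skills_score({"self": 85, "peer": 80, "teacher": 90})
    total = composite_score({
        "pre_class_preparation": 92,
        "pre_class_test": 78,
        "case_discussion": 85,
        "skills_assessment": skills,
        "post_class_test": 88,
        "post_class_feedback": 95,
    })
    print(f"skills component: {skills:.1f}, composite: {total:.1f}")
```

In this sketch, the teacher's rating is weighted most heavily within the skills component, reflecting the standardized rubric as the authoritative reference; the actual rater weights used in the study are not specified in the abstract.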