Critics and Critical Analysis: Lessons from 19,000 P-12 Students in Candidates' Classrooms

From Section:
Theories & Approaches
Countries:
USA
Published:
Feb. 01, 2015

Source: Teacher Education Quarterly, Winter 2015
(Reviewed by the Portal Team)

This study explored how systematically gathering P-12 student learning data over a five-year period contributed to teacher preparation program improvement.

Methods
This study was conducted in a teacher preparation program in northwest Oregon. Data were collected from teacher candidates' two student teaching experiences, one in the fall semester and one in the spring semester of their practicum. The participants were completing either a 10-month Master of Arts in Teaching (MAT) program or a 4-year undergraduate licensure program.
At the end of each teaching experience, candidates completed a preformatted Excel spreadsheet that recorded, for each student in their classroom, gender, ethnicity, identified learning needs, a pre-assessment score, and a summative assessment score.
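
To make the structure of these per-student records concrete, the sketch below shows how such a spreadsheet might be loaded and turned into simple learning gains. The column names, file name, and gain calculation are illustrative assumptions for this review, not the authors' actual instrument or analysis.

```python
# Illustrative sketch only: column names, the file name, and the gain formula are
# assumptions for demonstration, not taken from the study.
import csv

def load_class_records(path):
    """Read one candidate's per-student spreadsheet, exported to CSV."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def learning_gain(record):
    """Pre/post difference for one student (the study's own gain metric is not specified here)."""
    return float(record["summative_score"]) - float(record["pre_assessment_score"])

records = load_class_records("candidate_class.csv")  # hypothetical export of the Excel sheet
gains = [learning_gain(r) for r in records]
print(f"Mean gain across {len(records)} students: {sum(gains) / len(gains):.2f}")
```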

Conclusions
The findings reveal that candidates can demonstrate a positive impact on student learning that is generally equivalent for P-12 students of all ethnicities and learning needs.
The authors did not identify statistically significant differences in learning gains across these student groups. The data also indicated that participants could differentiate instruction and meet the needs of all learners.
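
As a rough illustration of how such a comparison might be checked, the sketch below groups per-student gains by ethnicity and runs a one-way ANOVA. The study does not name its statistical test, and the column and file names are assumed, so this is only one plausible way to reproduce the kind of comparison described.

```python
# Illustrative sketch: testing whether learning gains differ across ethnicity groups.
# The study does not specify its statistical test; a one-way ANOVA is just one common
# choice, and the column and file names below are assumptions.
import csv
from collections import defaultdict
from scipy import stats

def gains_by_group(path, key="ethnicity"):
    """Group pre/post gains by a demographic column."""
    groups = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gain = float(row["summative_score"]) - float(row["pre_assessment_score"])
            groups[row[key]].append(gain)
    return groups

groups = gains_by_group("candidate_class.csv")  # hypothetical file from the earlier sketch
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p >= .05 would be consistent with no significant difference
```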
The authors argue that the implementation and use of these assessments have had numerous positive impacts. The assessments have helped candidates learn to differentiate instruction in their classrooms and have provided them with substantive data to demonstrate their success in the classroom. Furthermore, the assessment data have been an important part of program redesign and a focus for faculty discussion of program impact.

Finally, the authors discuss the methods used in this study. They acknowledge that these measures constitute only one data source within an array of multiple assessment tools. Nevertheless, the data refute claims that candidate impact cannot be demonstrated in teacher preparation programs. The authors also note that gathering these data has required iterative examination of the processes involved in assessing candidates and a sustained focus on improving the quality of the assessments candidates design and use.


Updated: Feb. 25, 2018
Keywords:
Evaluation | Students’ needs | Preservice teachers | Program effectiveness