Is your APPR plan making a difference in classrooms?


On Board Online • March 10, 2014

By David Hamilton
Superintendent
Penn Yan Central School District

The state Board of Regents has made policy changes that may prompt your district to review its Annual Professional Performance Review (APPR) plans. The key question for school boards: Is your teacher APPR plan making a difference in classrooms?

Any plan that your administration has developed collaboratively with the local union and that has received approval from the State Education Department (SED) is an accomplishment. But in school districts we are always striving for excellence.

Ideally, our APPR plans should be invaluable assets. Each should be a well-developed system for observing, coaching and supporting educators. Use of that system should exert a measurable effect on students’ quality of education.

Feedback and reflection more important than paperwork
In Penn Yan, we looked at developing our teacher APPR plan as an opportunity to refine our processes for observing and providing data-driven feedback to teachers. These candid conversations would not have been possible without the willingness of our Board of Education, as well as our teacher and administrator associations, to engage in collaborative dialogue. That openness led to creative solutions that are now part of our revised APPR. For example, we calculated that our prior observation process required more than six hours of paperwork for approximately one hour of classroom observation and feedback. This 6:1 ratio of paperwork to observation and reflection did not align with our philosophy of frequent classroom visits and feedback.

In our new system the only paperwork generated by an observation is the data collected in the classroom and a brief three- or four-sentence summary of the follow-up conversation. Observations can be as short as 15 minutes, and as frequent as one or two times per month.

Data collected in the classroom can take the form of a running script, a diagram of teacher-student interactions, photos, artifacts and/or video. These data are given to the teacher as soon after the observation as possible, in some cases before the lead evaluator leaves the classroom, so the teacher can reflect on the data before their next regularly scheduled meeting with their lead evaluator.

A coaching meeting often begins with a question from the lead evaluator, such as, “What do you see when you look at these data?” The process continues through the coaching cycle, ending in a specific, bite-sized change or “look-for” for the next observation.

Less paperwork does not mean observations are less rigorous. Lead evaluators should be expected to use the intensive evidence-based observation skills they have learned. Reducing paperwork has maximized both time in classrooms and the depth of coaching conversations. These conversations are collegial, self-directed, and focused on the rubric language. A rapid cycle of observation, data collection and feedback allows teachers to implement small, bite-sized changes and receive real-time feedback on their impact in the classroom. Paul Bambrick-Santoyo’s book Leverage Leadership was a major resource for us as we developed our new systems, and I would highly recommend it to other districts.

Simplifying student assessments
It is no surprise to educators in New York State that the part of the APPR which has received the most attention is the use of student test scores for teacher and principal evaluation. The State Education Department’s APPR Field Guidance Memo devotes 37 pages to student assessment measures and only five pages to observations, rubrics, and goals.

To the degree that assessments indicate student learning, they can be a powerful tool for teaching and re-teaching. But we have also seen the negative effects of high-stakes testing on students and teachers in schools around the country. In Penn Yan, we believe that the most important assessments are those used in the classroom to guide instruction. So we eliminated all pre-assessments; there are other ways to measure growth against Student Learning Objectives (SLOs). For instance, your district can use historical data as the baseline for calculating growth. In Penn Yan each required assessment is both a summative measure for the current year and baseline data for the following year.

While using school-wide measures for the local 20 points is common across the state, a handful of districts including Penn Yan are using school-wide measures for growth scores as well. A large number of districts opted for course-specific assessments using an SLO target-setting process, but the APPR regulations allow state tests to be used as a school-wide growth measure provided the method for calculating the scores is different.

For example, music teachers in our district chose not to give a music assessment tied to their APPR. Instead, they receive 20 points from a school-wide achievement measure and 20 points from a school-wide growth measure, both based on the same state tests. The conversation with teachers focused on whether having some degree of control over their growth scores was important enough to warrant adding time-consuming assessments. Ultimately these teachers chose to have all 40 points of their APPR score driven by school-wide assessments.

A huge time-saving step Penn Yan has implemented is to calculate growth scores in a way that eliminates individual SLO documents and meetings. Our process allows us to calculate individual student growth from any two required summative assessments, such as eighth-grade English and ninth-grade Global 1. We then use those individual student growth scores to calculate teacher growth scores. The distribution of growth scores using this method was similar to the distribution of scores we received from the State Education Department, which provided more consistency and predictability in overall APPR scores. What’s more, this process frees teachers and principals from SLO meetings, allowing them to focus on classroom observation, instructional feedback, and an ongoing review of student data from the classroom.
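To make the idea concrete, here is a minimal sketch of the kind of calculation described above: individual student growth computed from two required summative assessments, then rolled up into a teacher-level score. This is an illustrative assumption, not Penn Yan's actual formula; the function names, the simple score difference, and the roster averaging are all hypothetical.

```python
# Hypothetical sketch of a growth calculation of the kind described above.
# Assumes each student has a prior-year summative score (serving as the
# baseline) and a current-year summative score. The real district formula
# is not published here; this only illustrates the general approach.

def student_growth(prior_score: float, current_score: float) -> float:
    """Growth as the change from last year's summative assessment."""
    return current_score - prior_score

def teacher_growth_score(roster: list[tuple[float, float]]) -> float:
    """Average the individual growth of every student on a teacher's roster."""
    gains = [student_growth(prior, current) for prior, current in roster]
    return sum(gains) / len(gains)

# Example roster: three students as (prior-year score, current-year score)
roster = [(70.0, 78.0), (85.0, 88.0), (60.0, 71.0)]
print(teacher_growth_score(roster))  # mean gain across the roster
```

Because every required assessment already serves as both a summative measure and the next year's baseline, a calculation like this needs no separate pre-assessment or per-teacher SLO document, which is the time savings the article describes.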

Building a culture of continuous improvement
One of the most difficult aspects of the APPR is the stigma associated with rating and labeling educators as “developing” or “ineffective.” And if virtually all teachers are rated “highly effective,” members of the public could raise issues of rigor and credibility.

But it’s not about ratings. It’s about growth and improvement. Our districts need a culture that values honesty in performance feedback, even when conversations are difficult, while also respecting and supporting educators. Creating and maintaining such a professional culture requires an APPR system focused on coaching, collaboration and continuous improvement.



