Turns out silence isn't always golden: How Learning Walks reveal what Walkthroughs miss

If you lead a school or a math department, how would you respond to this question:

“Do you give your math teachers high-quality feedback?”

If you’re like most instructional leaders, your answer would be yes. And you’d mean it.

But the results of a 2024 study published by the RAND Corporation offer a different perspective: 86% of administrators report confidence in their ability to provide high-quality math feedback to teachers, yet fewer than 24% of math teachers say they receive feedback that actually helps them improve their practice.

Clearly, something is getting lost in translation. In my experience, the breakdown often happens well before leaders step into the classroom.

The Form You Use Shapes the Data You Collect

Most school leaders receive formal training in conducting effective instructional walkthroughs. Walkthroughs are efficient, commonplace, and designed to let observers move through classrooms quickly as they scan for alignment with school or district expectations. Because these tools are built to measure compliance, the data they collect answers a single question: Is implementation happening?

Unfortunately, in a math classroom, that question alone will not improve instruction. The question that will improve math instruction is: Are students making sense of the mathematics? Answering it requires a different kind of observation form.

Let’s make this concrete.

Two Classrooms. Same School. Same Day.

A few years ago, I was working as a math coach in a large school district. One day, I partnered with a principal to observe two math classrooms at the same school, in the same grade, and covering the same content.

  • In the first room, students were working in groups. As I moved around, I could hear them sharing strategies, debating which representations were most accurate, and pointing out each other’s mistakes. The process was messy, the volume was loud, and the class was alive with student reasoning.

  • In the second room, it was mostly silent. Students were all using the same strategy to arrive at correct answers, but no one was talking. Every so often, a student would lean over and point to something on a neighbor’s paper. They’d nod yes or no, then go back to working alone. When the timer went off, every eye went to the front of the room where the teacher stood ready to lead the class discussion.

Later, I sat with the administrator to discuss what we had observed. The principal started the conversation like this:

“Wow! That was great. They’re both aligned to the pacing calendar. And the classroom management is solid. Looks like both rooms hit the mark!”

I was intrigued. How could two very different realities receive the same praise?

The principal wasn’t wrong; we simply weren’t looking for the same type of data. We had no shared understanding of what high-quality math instruction looks like beyond students being on task and teachers being on pace.

The Observation Form Determines the Question You Can Answer

To understand why two leaders can observe the same classrooms and walk away with completely different takeaways, we have to look at the form they are using to collect data. Consider how standard walkthrough forms frame the observation. Most ask questions about teacher behavior:

  • Does the teacher monitor student behavior?

  • Does the teacher pose questions?

  • Does the teacher encourage or diminish student voice?
Even sections labeled "Student Ownership" still evaluate students against a checklist of visible behaviors. The rating scale further solidifies this: most walkthrough forms use frequency measures such as "Not Yet," "Sometimes," and "Mostly." A scale like that is designed to generate a compliance report.

Now consider what happens when the observation is framed differently.

What if, before entering the classroom, the leader defined the specific focus of the visit, then captured what students did, said, and wrote as evidence of student understanding? This immediately shifts the observer's job from evaluating teacher behavior to interpreting student understanding. And that shift is the difference between collecting data and generating insight.

This is not a minor distinction. Most walkthrough forms are built for general instructional compliance and can be used in any classroom. A form built to surface mathematical reasoning cannot. It requires the observer to develop the skill of recognizing what mathematical sense-making looks and sounds like, and to use evidence of student thinking to determine whether it is happening.

That is what the principal in my story was missing. And honestly, so was I.  

I continued to listen and agreed with what we had both seen. For about ten minutes, we had a rich conversation about what was working in both classrooms. Then I pulled up a few assessment questions commonly used to gauge students' understanding of the content we had just observed. I said, "Now I'm wondering where student understanding is in relation to these questions."

The principal paused, then became just as curious as I was.

We decided to go back through our notes together and jot down two specific things: what we noticed that helped us answer the question about student understanding, and what we wished we had noticed.

That list became the foundation for how I observe math classrooms today. Not so I can tell leaders to eradicate the use of instructional walkthroughs, but so we can recognize their strengths and limitations.

Once the principal saw that, the conversation shifted from compliance to curiosity. And that shift is exactly what high-quality math observation is supposed to produce, not just in teachers, but in the leaders who support them.

Learning Walks vs. Instructional Walkthroughs

Walkthroughs are designed to ensure alignment with school or district expectations. The data is used to monitor implementation and report patterns to leadership. In practice, they can feel evaluative, even when that is not the intent.

Learning walks are designed to observe patterns in how students are engaging with content. The data drives collaborative reflection between the leader and the teacher. The goal is not judgment. The goal is to build shared understanding and shared responsibility for student learning. In a math classroom, that distinction is everything because what you are looking for is not teacher behavior, it is student reasoning. You are looking to answer these questions:

  • What are students saying?

  • What are they writing?

  • What questions are students asking?

These questions lay the groundwork for high-quality math feedback that improves teacher practice. A scan-and-score walkthrough form cannot answer them, so it produces feedback that generally does not resonate with math teachers.

This Is Not About Knowing More Math

One of the most common concerns I hear from administrators is this: “I’m not a math person. How am I supposed to give math-specific feedback?” Here is the reframe: you do not need to be an expert in math content; you need to use your expertise to analyze math pedagogy.

When you are trained to observe for student reasoning rather than teacher behavior, the evidence becomes far more accessible. You are listening to students explain their thinking. You are noticing whether students are talking to each other or waiting for the teacher. You are looking for the moments when a student is on the cusp of understanding, and asking whether the instructional environment is positioned to address it.

That does not require content expertise; it requires observational precision, and it is something any instructional leader can develop with the right partnership and framework.

One More Thing: Affirmation Is Not Optional

Even with the right observation tool, feedback can still fall flat if the conditions for receiving it have not been established. According to the National Center for Urban School Transformation, the nature of feedback conversations determines whether they contribute to a culture of ongoing learning or become a mechanism of criticism and disappointment.

I think of it this way: you would not throw a seed on the sidewalk and expect to see growth. Affirmations break the hard ground and prepare teachers to receive feedback from an asset-based mindset rather than a defensive one. The goal is not to avoid hard conversations, but to earn the right to have them without sacrificing relationships. When teachers experience observation as something done with them rather than to them, defensiveness drops, trust builds, and growth becomes possible.

So, Where Does This Leave Us?

If your current observation practice is generating data without generating change, the issue is likely not your effort or your intent. You may be using a compliance tool in a context that requires a growth framework.

The shift starts before you walk into any classroom. It starts with deciding: Am I going to check for compliance, or for math reasoning?

That one question changes everything that follows.

If you are not sure where your current feedback practice stands, the Math Feedback Reset Quiz is a good place to start.

It takes less than five minutes and gives you a clear picture of where your process is strong and where the gaps are. Or reply and tell me: which classroom in the story above looks more familiar to you? I’d genuinely like to know.
