Three shifts are critical to transforming our learning communities into mastery learning environments.
Too often, assessment is seen as a way to generate grades, and grades, in turn, as input that calculates GPA and produces report cards. Assessment is most often used as a means of measurement and, as a result, we tend to assess those things that are easiest to measure (we've all heard this before). The end result is poor reporting practices, because what is easiest to measure is rarely the most meaningful information to report.
Low-Level Measurement → Grades → Report Cards/Transcripts

Specific Feedback → Performance Growth
This will not serve anyone well in a mastery learning environment. Many of the transdisciplinary elements I have focused on in this series simply cannot be measured the way we might count the number of right and wrong answers on a test. They do not have a "hundred percent" starting point from which we can deduct incorrect responses. This will not be a new revelation for readers engaged in mastery-based learning. However, the shift in our approach to assessment is often under-appreciated, and it needs to be dealt with early and head-on.
Many education researchers and thought leaders, John Hattie and Dylan Wiliam spring to mind, stress the importance of supportive feedback. Feedback should be timely, specific to the task, and supportive of the student's movement toward mastery. It should focus on clearly articulated criteria and learning intentions that are central to the student's success as a learner, for both traditional disciplinary and transdisciplinary goals. It should also include the student and serve as a regular part of our support for learning. Unlike our traditional approach to assessment, feedback is not punitive or focused on achievement at a static point in time. Nor does it mark the end of a learning process. "As in sports, the purpose of feedback is not to correct the last pitch or tackle but to improve future games." (Wiliam & Scalise, 2021)
All of this may seem self-evident, but it is an area I have seen schools struggle with. It is easy to say, "We want to do less assessment and provide more feedback," but it is difficult for teachers to make the shift if they still believe that the primary purpose of assessment is to generate grades for reports and transcripts. We need to move on from this assumption.
One way to distinguish between formative and summative feedback is that the latter happens only when complex transfer tasks require the activation of both academic and transdisciplinary learning. Everything else is “formative” (I know . . . these distinctions are archaic) and provided as feedback to the student on their readiness, across the range of goals, to be successful at the complex transfer task that serves as a culminating demonstration of learning.
In other words, a math test is not summative anymore. It provides feedback to students regarding their level of readiness to use the specific math skills as part of a larger transfer task. We are still collecting evidence on students’ acquisition of the prerequisite elements for success, but we no longer mistake the test itself for “summative” assessment.
My friend and co-author, Jay McTighe, also uses a sports analogy to make the distinction effectively. When a basketball coach is working with her team, she helps players learn the elements of the game: dribbling, passing, strategy, the rules, the plays, and so on. She provides feedback to her players to help them improve in these parts of the game, but the real "evidence of learning" is how well they use all of these elements on game day. She won't give them a mark on these discrete elements, but will support improvement through feedback on their execution at each stage of skill development. If she did grade all of the pieces and feed those grades into a machine, the resulting "report" would not be a good indicator of success, nor would it replace an interpretation of each player's real, in-game performance. Good transfer tasks are "the game," and everything else is preparation to perform within that context. This shifts our focus toward an understanding that formative feedback is practice and summative feedback is the game.
This points to a larger shift away from the traditional role of teacher as deliverer of content and assessor of its retention and recall. These are now only a small part of our jobs. Of course, we still need to do some of this (especially when it comes to "teaching" transdisciplinary strategies and tools, as I will discuss in the final piece of this series). Nonetheless, our contemporary responsibility is to design the "games" that allow students to provide observable, tangible demonstrations of the learning we seek. We need to think of ourselves as "experience designers."
These experiences, when well-designed, provide opportunities for students to produce tangible evidence of the desired learning goals. Evidence from a test or essay prompt will not be sufficient. Designers work backwards from meaningful transfer goals for learning and connect them with real-world scenarios that require the goals (and more) to be activated for a larger purpose.
None of these shifts represent new ideas. They have been swirling around the arena for a long time. Still, they seem to confound schools and teachers because they represent a change in belief about our role as educators, and create anxiety regarding implementation. My hope is that, in addressing these beliefs, the methodologies I will explore in the next piece of this series will stick as actionable strategies.
Greg Curtis' MTC Insights series also includes:

Article 1: Transforming Assessment, Curriculum, and Beyond