You Probably Misunderstand Feedback For Learning

Misunderstanding Feedback For Learning

By Grant Wiggins

Editor’s Note: The title was written by me, not Grant. ; ^ )

Who could argue with the idea that formative assessment is a good thing? Both common sense and the research make clear that more feedback and opportunities to use it enhance performance and achievement (see Pollock, 2012; Hattie, 2008; and Marzano, Pickering & Pollock, 2001); I argued this point thoroughly 14 years ago (Wiggins 1998).

Yet even Hattie acknowledges that, although his research long ago clearly revealed that “feedback was among the most powerful influences on achievement” (Hattie, 2008, p. 173), he has “struggled to understand the concept ever since.” Some of the confusion, I think, stems from the fact that the view of feedback he proposes blurs important distinctions between feedback and advice. This blurring can be found in many current writings on feedback (see, for example, Harvard Business School Press, 2006). Oddly enough, in some writings on the subject (e.g., Clarke, 2001) there isn’t even an attempt to define the term.

So, let’s look more closely at just what feedback is and isn’t. Doing so will make it more likely that we use and invent far better formative assessment practices.

What Is Feedback?

Feedback is goal-related information about how we are doing or what we just did. We hit the tennis ball and see where it lands: in or out. We tell a joke and hear (as well as witness) audience reaction – they laugh or they don’t. We use the word processor, and the spell-checker underlines misspellings. We teach a lesson and we see that some students have their eyes riveted on us while others are nodding off. And did you notice that in all the examples no person offered verbal feedback? There was only a tangible result of actions taken.

All of these examples involve the “feeding back” of information while trying to reach a goal (hit the ball in, cause laughter, spell accurately, teach so that they are engaged and learning). Wikipedia defines it technically this way: “Feedback is information about actions returned to the source of the actions” – a definition that properly reflects the original science/systems meaning of the term (e.g. the squealing sound in loudspeakers), as well as the interpersonal meaning of the term.

Alas, though the word is thus properly used to describe information about what happens in performance in relation to a goal, it is unfortunately more often used to describe all kinds of comments made after the fact, including advice, praise, and evaluation – none of which is feedback, strictly speaking, and none of which is needed in a robust feedback system, as video games and learning to walk as a baby make clear.

Here are some other examples of feedback.

  • A friend says: “You know, when you put it that way and speak in that softer tone of voice, it makes me feel better.”
  • A baseball coach says: “Each time you swung and missed, you pulled your head out as you swung (and so didn’t really have your ‘eye on the ball’). The one you hit hard, you kept your head still and saw the ball.”
  • A teacher comments on a student’s short story that “the first few paragraphs kept my full attention: the scene painted was vivid and interesting. But then the dialogue was hard to follow as a reader: I was confused as to who was talking, and the sequence of actions was puzzling, so I became less engaged.”

Note that these examples, unlike the first group of examples, involve human givers of feedback. In the first group, the performers simply had to take note of the effect of their actions, mindful of the point of their actions. In the second group, however, all the examples involved deliberate and explicit giving of feedback by other people. (Whether it was right or wrong information is not the point; correct or not, it should be called feedback.) Comments about the effects of my actions were “fed back” to me to help me better grasp what happened in light of my goals. Implied is that the feedback giver saw something that I may have missed – information I can thus use to improve, should I so wish.

Regardless of the source of the feedback, the key point is that I received an account of what happened, not advice or evaluation. No one told me what to do differently or how “good” or “bad” my results were. You may think, perhaps, that the reader of my writing in the third comment was evaluating my work, but look at the words used again: she is simply playing back the effect of my writing on her: here’s where it was hard to follow as a reader. Strictly speaking, then, feedback is information about the effects of my goal-related actions, whether from people or from observable consequences.

What Isn’t Feedback – Even Though We Often Speak As If It Were?

Though feedback is best defined as goal-related effects, in everyday speech we use the word more loosely – unhelpfully so. Consider the following examples of so-called feedback and note how these examples do not reflect our definition – even if many people think of these comments as feedback:

  • “Good job!”
  • “You might want to use a lighter baseball bat”
  • “What was your reason for choosing that as the setting of your story?”
  • “I know you can try harder than that”
  • “That was a weak presentation”
  • “You need more examples in your report”
  • “This is a ‘C’ paper, alas.”
  • “You’ve done better in the past”
  • “I’m so pleased by your poster!”
  • “You should have some Essential Questions in your unit plan”

None of these is an example of feedback, if you apply the definition of feedback developed above.

There is no information about what happened. Rather, in all these examples there is a response to what happened: advice, evaluation, or questions.

Feedback vs. Advice

Consider these three statements taken from the list:

  • You need more examples in your report
  • You might want to use a lighter baseball bat
  • You should have some Essential Questions in your unit plan

This is not feedback. This is advice (sometimes called “feed forward” to make the contrast clearer). In response to a performance with apparently less than optimal effects, a teacher-coach gives guidance on how to improve the performance. In other words, advice or guidance comes after a result or effect (i.e., the feedback) has occurred, when the performer may need assistance in acting on the feedback to improve.

However, such advice only makes sense to the performer (and, in my experience, is only truly welcomed by the performer) on the heels of specific feedback. For these pieces of advice to work, I need to first know the feedback upon which the advice is based. Far too many coaches, teachers and parents, however, jump right to advice without ensuring first that the learner grasps the feedback. And over time – if you count it up – most of us give far more advice than descriptive feedback.

My immediate internal question as a performer, upon hearing advice first – without feedback preceding it – would be: Why are they suggesting this? Is this appropriate? Or is it just their pet thing? Advice out of the blue always seems at best a tangent and at worst unhelpful and annoying. “Why should I use ‘more’ examples? Is the point lots of examples or really good examples? Aren’t my examples OK? Or is this really the teacher’s indirect way of saying my examples aren’t good enough?…” That is often how the performer feels on the heels of lots of advice with little or no feedback preceding it.

What may be hard for habit-bound teachers, coaches, and parents to realize is that advice is often unhelpful unless I am ready for it. What does “ready for it” mean here? It means that 1) we share the same goal; 2) I am helped to perceive and/or acknowledge what I am not perceiving as I perform (the feedback); 3) I understand, and we agree about, the meaning of the feedback; and 4) I am psychologically ready to confront the gap between my intent and the results, and to take advice. No wonder mere advice is often ineffectual!

Recall the examples of pure feedback I gave at the outset of this essay.  In all of these instances the performer was not given any advice. Rather, any advice was self-given, based on the clarity and specificity of the feedback. Amazingly, learners can often give themselves any needed advice, even when they are relative novices, if they have seen models and been given good instruction. So, if your ratio of advice to feedback is too heavy on the advice, try asking the learner: “Given the feedback, do you have some ideas about how to improve?” If not, only then offer one piece of practical advice. This is how greater autonomy and confidence are built over the long haul.

Too much advice, rather than building autonomy and self-directed learning via feedback, easily undercuts it. Students then easily become insecure about their own judgment and more dependent upon the advice of others – and thus in a panic about what to do when different advice comes from different people or no advice is available at all. “Is this what you want? Is this right?” asked constantly is important feedback to you, the teacher, that the ratio of feedback to advice is not optimal.

Please don’t misunderstand. I am not saying don’t give advice. Good advice is certainly better than floundering around trying to find the best path that perhaps only an expert knows. I am saying that too much advice combined with too little feedback will not achieve the results we seek: high levels of performance and performer autonomy.

As a consequence, you will want to take the time to teach learners to seek, find, and learn from available feedback – teach them, in other words, to be less and less dependent upon you as advice-giver and feedback-giver, and more on themselves.  (That will also make them more ready to be coached when performance doesn’t improve.)

This advice about ratio is based on a well-known season-long study of the legendary basketball coach John Wooden [see Nater & Gallimore (2005)]. Wooden was famous for quickly and constantly calling attention to errors and suggesting how to correct them. Fully 10% of his entire output of words, in a season of closely watched and coded coaching, was labeled a “Wooden”: quickly saying, “Here’s the right way to do x, here’s what you did, so instead you want to do x.” Or, more generally: model-feedback-advice.

Finally, feedback is information. So how could a question be feedback? (“What was your reason for choosing that as the setting of your story?”) Many educators are surprisingly confused by this point. “But questioning is a good thing! We are probing the student to get at their thinking!” True. Just don’t conflate asking questions with giving useful feedback. Your questioning isn’t giving the performer actionable information; it’s delaying it! You may want me to think about what I am doing, but that won’t yield more feedback. I don’t want to keep writing badly, making people not laugh, or striking out, with only Socrates for a coach, asking me questions! I want my coach to tell me what just happened that I might not have noticed, and perhaps how it might be improved.

Feedback vs. Evaluation

Let’s look at another group of questionable feedback comments to see what they have in common and how they differ from pure feedback (even if we loosely call such comments “feedback”):

  • Good job!
  • That was a weak presentation
  • This is a ‘C’ paper, alas.
  • I’m so pleased by your poster!

These comments make a value judgment. They rate, evaluate, praise, or blame what was done. There is little or no feedback here, i.e., actionable information about what occurred. As performers, we only know that someone is either pleased or not, or that someone places a high or low value on what we did. Praise (and sometimes blame) may motivate us in the short term, but neither can make us better. Over time, in fact, both praise and blame have a corrosive effect (as Dweck [2007] shows in her research on the attitudes of achievers): such performers often become too extrinsically motivated. They then try primarily to please or avoid disapproval, and have great difficulty once they start to fail as the challenge increases. Now they have lost sight of not only the feedback but the goal. “I am so pleased with your poster!” – no, my aim was not to please you but to satisfy myself and the performance requirements. The goal was not to make the teacher, coach, or parent feel better; the goal was to achieve a worthy result and, in the long term, to build more autonomous, self-regulating, and self-improving performance.

Look back at the evaluative comments. How might those comments be re-cast as useful feedback if the value words slipped out? Tip: always add a mental colon after each statement of value, followed by what happened and what it means: “Good job: your use of words was more precise in this paper than in the last one, and I saw the scenes in my mind’s eye more clearly.” “Not so good on this essay: almost from the first sentence, I was confused as to your initial thesis X and the evidence Y you were providing for it. In the second paragraph you propose a somewhat different thesis Z, and in the third paragraph you don’t offer evidence, just beliefs A and B.” You will soon find that you can drop the evaluative language; it serves no useful function.

By extension, relying on grades as the key feedback is doomed to fail, if the aim is to cause significant self-regulated improvement in student achievement. Grades are so ubiquitous, so much a part of the landscape of schooling that we easily overlook their utter uselessness as feedback. Consider the following thought experiments to see why:

  • How would tennis players improve if all the coach did was shout out letter grades as they played, and the player never saw models or video of what they should be doing vs. what they are doing?
  • How would the public speaker become skilled and poised if there were never a real audience and if the judges merely filled out evaluation sheets and sent scores a week later to the speaker?
  • How would cross country runners improve if we graded them on their place of finish in each meet instead of reporting their times and how today’s times related to previous times?
  • How would piano players improve if they couldn’t hear themselves play and received only a single-letter summative grade for accuracy and speed from expert judges a week later?

Grades are here to stay, no doubt – but that doesn’t in any way condone a teacher relying on them as the main source of feedback.

Genuine vs. Phony Formative Assessment: Pacing

A grade on a test or paper, given once, is as unlikely to improve future student performance as being given only a grade, instead of split times, in response to your performance as a runner in the mile. Thus, what are called “pacing guides” and “interim assessments” in schools and districts often undercut the idea of feedback as we have considered it.

My daughter runs the mile in track. At the end of each lap in races and practice races, the coaches yell out splits (i.e., the times for each lap) and bits of feedback (“You’re not swinging your arms!”), sometimes followed by advice: “Priscilla, you’re on pace for 5:20. Pick it up: you need to take 3 seconds off this next lap!” Such calls are common at track meets (and in other timed sports, like swimming and cross country). The ability to improve one’s result depends upon the ability to adjust one’s pace in light of ongoing feedback against the concrete long-term goal.

However, to say that a coach is giving pace-related information means that my daughter and her teammates are getting feedback about how they are performing now against the final desired time. The desired result for my daughter this season is a 5:00 mile. She has already run 5:09. The coach is telling her, therefore, that at the pace she ran in the first lap she is unlikely even to meet her best time so far this season, never mind her long-term goal. Then, he gives her a brief piece of concrete advice – take 3 seconds off the next lap – in order to make achievement of the goal more likely.
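The arithmetic behind that split-time coaching is simple enough to sketch. The snippet below is an illustrative sketch only, not anything from the article or the coach’s actual method: it assumes a four-lap mile, a hypothetical 5:00 (300-second) target, and a naive projection that the remaining laps will match the pace run so far.

```python
# Illustrative sketch (not from the article): the arithmetic behind a
# coach's split-time feedback. Assumes a four-lap mile and a naive
# projection that remaining laps match the pace run so far.

def pace_feedback(splits_so_far, target_seconds, total_laps=4):
    """Project the finish time from the laps run so far and report the
    per-lap time that must be shaved to hit the target."""
    laps_run = len(splits_so_far)
    avg_split = sum(splits_so_far) / laps_run
    projected = avg_split * total_laps                    # feedback: where this pace leads
    laps_left = total_laps - laps_run
    gap = projected - target_seconds                      # projected distance from the goal
    per_lap_cut = gap / laps_left if laps_left else 0.0   # advice: what to change each remaining lap
    return projected, per_lap_cut

# An 80-second first lap projects to a 5:20 mile; against a hypothetical
# 5:00 (300 s) target, the runner must cut roughly 6-7 s per remaining lap.
projected, per_lap_cut = pace_feedback([80.0], target_seconds=300.0)
print(f"Projected: {int(projected // 60)}:{projected % 60:04.1f}, "
      f"cut {per_lap_cut:.1f} s per remaining lap")
```

The point of the sketch is only to separate the two moves: the projected time is the feedback; the per-lap cut is the advice derived from it.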

But this isn’t what most school district “pacing guides” and grades on “formative” tests tell you. They tell you the schedule of teaching, not what to look for in the emerging results! And they yield a grade against the most recent objectives, not useful feedback against the future performance standards. It’s as if, at the end of the first lap of the mile race, Priscilla’s coach simply yelled out “B+” based on the yards she had run so far.

Advice for how to change this sad situation should be clear from the foregoing analysis: score student work in the fall and winter against spring standards; use more pre- and post-assessments to measure progress; do item analysis to note the kinds of errors that need to be worked on for better future performance; and so on. What is also implied, but well beyond the scope of this paper, is the need to test for student performance in using content, not content knowledge per se, since transfer of learning is the ultimate goal of learning – and is actually what all harder test questions demand. [See Wiggins (2010)]

“But there’s no time!” The universal teacher lament that there is no time for such feedback and opportunities to use it should now seem somewhat questionable to you as a reader, if I have succeeded in making my case. As commonsensical as such a lament first appears, it betrays a great misunderstanding. “No time to give and use feedback” amounts to saying, “There is no time to cause learning since I have so much teaching to do.”

This confusion also overlooks the fact that most useful feedback is picked up by the performer directly from what happened in the situation at hand, while trying to accomplish a task; and there are numerous ways – through technology, peers, and other teachers – that students can get the feedback they need. This is also why self-assessment specifically and meta-cognition in general correlate so highly with academic success.[3]

Conclusion

What I have said may, I hope, be a helpful look at a key idea, but it is hardly new. William James, over one hundred years ago, wrote that effective education requires that we “receive sensible news of our behavior and its results.  We hear the words we have spoken, feel our own blow as we give it, or read in the bystander’s eyes the success or failure of our conduct.  Now this return wave…pertains to the completeness of the whole experience.”[4]

Alas, teaching too much is our Achilles heel – we can’t help ourselves as “teachers.” So, make an effort to try it: less teaching, more feedback. And its corollary: less advice, more tangible feedback. Less feedback that requires you to give it all, and more feedback designed into the performance itself (as in music, sports and video games). Finally, send me some feedback on this article: [email protected].

References

Bransford et al (2001) How People Learn. National Academy Press.

Clarke, Shirley (2001) Unlocking Formative Assessment: Practical Strategies for Enhancing Pupils’ Learning in the Primary Classroom. Hodder Murray.

Dweck, Carol (2007) Mindset: The New Psychology of Success, Ballantine.

Gilbert, Thomas (1978) Human Competence.  McGraw Hill.

Harvard Business School Press (2006) Giving Feedback.

Hattie, John (2008) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, Routledge.

James, William (1899/1958) Talks to Teachers. W W Norton.

Marzano, R ; Pickering, D & Pollock J (2001) Classroom Instruction That Works: Research-Based Strategies for Increasing Student Achievement, ASCD.

Mazur, Eric (1996) Peer Instruction: A User’s Manual. Benjamin Cummings.

Nater, Sven & Gallimore R (2005) You Haven’t Taught Until They Have Learned: John Wooden’s Teaching Principles and Practices. Fitness Info Tech.

Pollock, Jane (2012) Feedback: The Hinge That Joins Teaching and Learning, Corwin Press.

Wiggins, Grant (2010) “Time to Stop Bashing the Tests,” Educational Leadership, 67(6), March 2010.

Wiggins, Grant (1998) Educative Assessment, Jossey-Bass.


[1] To be published in the September 2012 issue of Educational Leadership. Please do not disseminate without permission.

[2] Human Competence, Thomas Gilbert (1978), p. 178

[3] See Bransford et al (2001), pp. xx.

[4] James, William (1899/1958), p. 41.

This article was originally published on Grant’s blog.