Why Assessments Don’t Really Measure Understanding

by Terry Heick

Ed. note: This article has been updated and republished from a previous post.

Assessing understanding might be the most complex thing an educator or academic institution has to do.

Unfortunately, professional development devotes comparatively little attention to designing quality assessments, and the training teachers do receive is rarely commensurate with this complexity.

The challenge of assessment is no less than figuring out what a learner knows and where they need to go next. In other words: what do they know, and how should I respond?

This in itself is an important shift from the days when curriculum was simply delivered regardless of the student’s content knowledge. Among the big ideas Richard and Rebecca DuFour brought into mainstream educational consciousness was a shift from teaching to learning, a subtle but critical movement.

But even with this shift away from curriculum, instruction, and teacher actions, and toward data, assessment, and learning, an uncomfortable confusion remains.

Planning for Learning

In a traditional (and perhaps utopian) academic structure, learning objectives are identified, prioritized, mapped, and intentionally sequenced. Pre-assessments are then given to provide data for revising the planned instruction.

Next, in a collaborative group (PLCs and their data teams being the current trendy format), teachers together disaggregate data, perform item analyses, identify trends and possibilities, and differentiate powerful and compelling instruction for each learner using research-based instructional strategies.
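
For readers unfamiliar with the term, “item analysis” is the number-crunching a data team does on each test question. As a rough illustration only (the data and function names below are hypothetical, and this is a minimal classical-test-theory sketch, not any specific PLC tool), it typically boils down to two statistics per item: difficulty, the proportion of students answering correctly, and discrimination, how well the item separates stronger students from weaker ones.

```python
# A minimal sketch of a classical item analysis, the kind of
# number-crunching a data team might run on common-assessment results.
# All data and names here are hypothetical illustrations.

def pearson(x, y):
    """Pearson correlation; with a 0/1 item vector this doubles as the
    point-biserial correlation often used as a discrimination index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def item_analysis(responses):
    """responses: one row per student; 1 = item correct, 0 = incorrect."""
    n_students = len(responses)
    totals = [sum(row) for row in responses]  # each student's total score
    stats = []
    for i in range(len(responses[0])):
        item = [row[i] for row in responses]
        difficulty = sum(item) / n_students       # proportion correct
        discrimination = pearson(item, totals)    # item vs. total score
        stats.append((i + 1, difficulty, discrimination))
    return stats

# A hypothetical five-student, four-item quiz:
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

for number, diff, disc in item_analysis(responses):
    print(f"Item {number}: difficulty={diff:.2f}, discrimination={disc:.2f}")
```

A very low difficulty value flags an item most students missed; a near-zero or negative discrimination flags an item that may be poorly written. Spotting those patterns before reteaching is the kind of trend-finding the paragraph above describes.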

Then student understanding is re-assessed and deficiencies are further remediated — rinse, repeat — until the learner demonstrates acceptable evidence of understanding. But even this Herculean effort, which incredibly still leaves gaps, is often not enough because of the nature of understanding itself.

Defining Understanding

In their seminal Understanding by Design series, Grant Wiggins and Jay McTighe discuss the elusiveness of the term “understanding” by referencing Bloom’s Taxonomy of Educational Objectives: Cognitive Domain, a book project finished in 1956 by Dr. Benjamin Bloom and colleagues.

As quoted by Wiggins and McTighe, Dr. Bloom explains:

” . . . some teachers believe their students should ‘really understand,’ others desire their students to ‘internalize knowledge,’ still others want their students to ‘grasp the core or essence.’ Do they all mean the same thing? Specifically, what does a student do who ‘really understands’ which he does not do when he does not understand? Through reference to the Taxonomy . . . teachers should be able to define such nebulous terms.”

Wiggins and McTighe go on to say that “two generations of curriculum writers have been warned to avoid the term ‘understand’ in their frameworks as a result of the cautions in the taxonomy.”1 Of course, the Understanding by Design (UbD) series is built on a handful of key notions, among them taking on the task of analyzing understanding and then planning for it through backward design. But pulling back to look at the big picture is a bit troubling.

There are so many moving parts in learning: assessment design, academic standards, underpinning learning targets for each standard, big ideas, essential questions, instructional strategies — and on and on and on in an endless, dizzying dance. Why so much ‘stuff’ for what should be a relatively simple relationship between learner and content? Because it’s so difficult to agree on what understanding is — what it looks like, what learners should be able to say or do to prove that they in fact understand.

Wiggins and McTighe go on in the UbD series to ask, “Mindful of our tendency to use the words understand and know interchangeably, what worthy conceptual distinctions should we safeguard in talking about the difference between knowledge and understanding?”2

Alternatives to Bloom’s Taxonomy

Wiggins and McTighe also helpfully provide what they call “6 Facets of Understanding,” a sort of alternative (or supplement) to Bloom’s Taxonomy. In this system, learners prove they “understand” if they can:

  1. Explain
  2. Interpret
  3. Apply
  4. Have perspective
  5. Empathize
  6. Have self-knowledge

Robert Marzano also offers up his take on understanding with his “New Taxonomy,” which uses three systems and the Knowledge Domain:

  1. Self-System
  2. Metacognitive System
  3. Cognitive System
  4. Knowledge Domain

The Cognitive System is the closest to a traditional taxonomy, with verbs that describe learner actions, such as recall, synthesis, and experimental inquiry.

At TeachThought, we’ve even created a learning taxonomy of our own that offers dozens of ways students can demonstrate that they understand.

3 Strategies To Clarify Understanding

Of course, there is no single solution to this tangle, but there are strategies educators can use to mitigate the confusion — and hopefully learn to leverage the veritable cottage industry of expertise that assessment has become.

1) The first is to be aware of the ambiguity of the term “understands,” and not to settle for merely paraphrasing it with overly simple words and phrases like “they get it” or “proficiency.” Honor the uncertainty by embracing the fact that not only is “understanding” borderline indescribable, but it is also impermanent.

And the standards? They’re dynamic as well. And vertical alignment? In spots, clumsy and incomplete.

This is reality.

2) Second, help learners and their families understand that it’s more than just politically correct to say that a student’s performance on a test does not equal their true “understanding”; it’s actually true. If communities only understood how imperfect assessment design can be — well, they might just run us all out of town on a rail for all these years of equating test scores with expertise.

3) But perhaps the most powerful thing that you can do to combat the slippery notion of understanding is to use numerous and diverse assessment forms. And then — and this part is important — honor the performance on each of those assessments with as much equity as possible.

A concept map drawn on an exit slip is no less evidence of understanding than an extended-response question on a state exam. In fact, I’ve always thought of planning not in terms of quizzes and tests, but in terms of a true climate of assessment, where “snapshots” of knowledge are taken so often that assessment becomes truly part of the learning process.

This degree of frequency and repetition can also reduce reliance on rote procedural knowledge, and it allows opportunities for post-assessment metacognitive reflection, such as the “So? So what? What now?” sequence.

If you are able to reflect all assessment results — formal and informal — in the most visible portion of the learning process, the letter grade itself, learners may finally begin to see for themselves that understanding is elusive, constantly changing, and as dynamic as their own imaginations.

1 Wiggins, Grant, and Jay McTighe. Understanding by Design, Expanded 2nd Edition (ISBN 9780131950849). Web. 07 May 2012.

2 In fact, in Stage 2 of the UbD design process, the task is to “determine what constitutes acceptable evidence of competency in the outcomes and results (assessment),” deftly avoiding the term “understanding” altogether.

This article was originally written by Terry Heick for Edutopia; adapted image attribution flickr user skokienorthshoresculpturepark.