Friday, September 5, 2014

Take That, Reverse It

I've always loved Gene Wilder's depiction of Willy Wonka, and quoting him for the title of this blog post about backward design is perfect. Based on the work of Grant Wiggins and Jay McTighe, and sliding in pieces from Marzano's Cognitive System, Marzano's Assessment Strategies, and the Michigan Assessment Consortium, we have developed an Assessment-Curriculum-Instruction Blueprint to assist teachers with their collaborative, intentional planning of units. It takes the familiar model of teaching first and drafting the assessment last, and reverses it.

The Blueprint itself is a six-hour professional learning session that will be offered later this fall for State Continuing Education Clock Hours, but this blog post will serve as a bird's-eye view of the process. As we move into Domains 2 and 4 of Marzano's Instructional Framework ("The Art and Science of Teaching"), the intentionality and collaborative work are key. Thus, assume all pieces are done in a collaborative setting. The first step is stating the purpose of the end-of-unit common assessment. Teachers ask themselves two questions: why are we creating the assessment, and what is the desired outcome? This is closely related to the second step: stating each Standard/Strand from the curriculum that will be the subject of the assessment. Depending on the content area, those standards may be from the Michigan HSCEs or GLCEs, the CCSS as adopted by Michigan, AP/College Board, unique FHPS curriculum, or other nationally recognized standards.

Those first two steps usually do not take too much time. Step three is where the rubber meets the road: looking at rigor. Using Marzano's Cognitive System, teachers take time to reflect upon what level of rigor is best suited for students to demonstrate proficiency in the identified Standards/Strands. Within the Blueprint, teachers are asked to attach one or more rigor types to each of the identified Standards/Strands. Marzano's Cognitive System has four levels:

1) Knowledge Retrieval – recalling information from permanent memory; students are merely calling up facts, sequences, or processes exactly as they have been stored.
2) Comprehension – identifying what is important to remember and placing it into appropriate categories; students use synthesis to identify the most important parts of concepts and delete any that are insignificant or extraneous.
3) Analysis – engaging students to use what they know to create new insights and invent ways of using what they have learned in new situations.
4) Knowledge Utilization – using the highest level of cognitive processes; examples include weighing options to determine the most appropriate course of action, experimental inquiry, and problem solving when an obstacle is encountered.

Building upon that work, teachers then move to step four and begin selecting or drafting assessment items. As the process unfolds, teachers reflect upon whether the types of items and number of items comprise a balanced assessment. In addition, teachers contemplate whether there is enough rigor and depth, given that the rigor levels possible for each assessment type are dependent on the content of the item itself:

1) Selected Response (Multiple Choice, True/False, Matching) -- the rigor is equivalent to Knowledge Retrieval, and samples students' mastery of knowledge elements.

2) Constructed Response (Diagram, Fill in Blank, Short Answer, Web, Concept Map, Flowchart, Graph, Table, Matrix, Illustration) -- the rigor is equivalent to Knowledge Retrieval or Comprehension, dependent on content, and samples students' mastery of knowledge elements and suggests understanding of relationships; brief descriptions of simple problem solutions provide a window, albeit shallow, into reasoning proficiency.

3) Extended Constructed Response (Essay, Research Report, Lab Report) -- the rigor is equivalent to Knowledge Retrieval, Comprehension, Analysis, or Knowledge Utilization, dependent on content, and taps students' understanding of relationships among elements of knowledge; longer descriptions of complex problem solutions may provide a deeper window into reasoning proficiency.

4) Performance (Presentation, Movement, Science Lab, Athletic Skill, Dramatization, Enactment, Project, Debate, Model, Exhibition, Performance Task, Portfolio) -- the rigor is equivalent to Knowledge Retrieval, Comprehension, Analysis, or Knowledge Utilization, dependent on content, and infers students' reasoning proficiency from direct observation; evaluates skills as they are being performed; assesses both proficiency in carrying out steps in product development and attributes of the product itself.

5) Observations/Conversations (Oral Questioning, Observation, Interview, Conference, Process Description, Checklist, Rating Scale, Journal Sharing, Thinking Aloud A Process, Student Self-Assessment, Peer Review) -- the rigor is equivalent to Knowledge Retrieval, Comprehension, Analysis, or Knowledge Utilization, dependent on content, and explores students' mastery selectively but in depth; infers reasoning proficiency more deeply by asking students to think aloud or through focused follow-up questions; assesses skill in oral communication directly; probes knowledge of procedures and attributes of quality but not product quality itself.

Again, steps three and four require substantial intentional planning and thought by teachers, but the work is being done in a collaborative setting. It also reveals whether there is a "stretch" in the assessment to challenge students, whether every item on the assessment is tied to an identified Standard/Strand, and whether each item's point value carries a weight that aligns with its rigor (for example, an Analysis-level extended response might be worth more points than a Knowledge Retrieval multiple-choice item). In addition, teachers collaboratively build a rubric to score the assessment.

Now that the assessment is built, teachers move to planning their instruction with the Instructional Framework. The first move is within DQ1 – Communicating Learning Goals and Feedback – all three elements. As teachers then move through DQs 2-9, reflections include how and when the content represented in the assessment will be taught to ensure learning for all students, what resources will be needed, which methods and strategies will engage all learners, and what common formative assessments might be employed along the way.

Once the assessment is administered and collaboratively scored, what to do with the data? The Blueprint suggests five beginning questions: 1) What did you notice in the data? 2) What surprised you in the data? 3) What feels good to see and is affirming? 4) Is there anything that raised questions for you? 5) What do you need or want to know more about? If teams request it, an instructional coach will be present to help process the data analysis.

The data generated also has to be shared and acted upon. How will results be shared with students and parents? In what form and for what purpose? Are there any other stakeholders who should be given the data? Then, how and when will the collaborative teachers address the needs of students who did not demonstrate proficiency on the essential learning embodied in the Standards/Strands? How and when will they address students who demonstrated advanced proficiency? These are collective group decisions to be made within the professional learning community context.

Finally, the last column on the Blueprint is for personal reflection. It asks teachers to contemplate what they might do differently the next time in creating the assessment, planning the instruction, and teaching the content to improve student learning. Not only does this help our students, but it aids our own professional growth.

Whew. It's a lot, we know. But we're here to work and learn right along with you. As indicated, a six-hour session will be offered later this fall to learn more about the Blueprint and get hands-on practice in a collaborative setting. If you have any comments in the interim, please post them below.

J. Walton
