Monday, September 22, 2014

“Clarity precedes competence.”

The title quote is from DuFour, DuFour, and Eaker (Solution Tree, 2014, p. 90), and nowhere may it ring more true than in the school improvement process.

Each school in our district has a School Success Team (SST), a formal group of teachers that meets regularly with the principal to help lead the school improvement process. In turn, those SST members meet regularly with smaller teacher teams. Throughout the process, the School Improvement Plan (SIP) is drafted, implemented, evaluated, and amended as needed. As we enter our fourth week of school, SSTs are working to bring their respective school improvement goals "to life." But what does all of that really mean, especially for the teacher who is not at that meeting? We need “clarity” before we can be “competent.”

In Michigan, every public school must draft and submit an SIP to the Michigan Department of Education. It encompasses requirements for both state and federal mandates. In addition, each district also submits a District Improvement Plan. The SIP is intended to be an ongoing, continuous cycle:

[Graphic: the continuous school improvement cycle, with student achievement at its center]
The SIP runs on a three-year cycle, but an updated version is submitted to the state each June for the following school year. The SIP is typically drafted by the SST and principal, but some schools also bring other teachers into the drafting process. Goals for improvement are articulated (e.g., all students will be proficient in writing), and a measurable objective is stated (e.g., 70% of the bottom 30% will demonstrate proficiency in writing in ELA by June 2015, as measured by pre-assessment and post-assessment). The measurable objective is informed by past performance on state benchmark assessments and by the proficiency bar all schools must meet by 2022. Then, one or more strategies are chosen to achieve the goal (e.g., use of clear learning goals and monitoring progress), as well as activities within each strategy (e.g., teachers will learn how to draft clear learning goals and how to effectively monitor them). Each strategy is research-based.
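As a back-of-the-envelope illustration, a measurable objective like the one above translates into concrete student counts. The enrollment figure here is invented for the example, not actual district data:

```python
# Hypothetical illustration of how a measurable objective becomes student counts.
# The enrollment number is assumed for the example, not district data.
enrollment = 500                           # assumed total students in the school
bottom_30_pct = round(enrollment * 0.30)   # size of the "bottom 30%" subgroup: 150 students
target = round(bottom_30_pct * 0.70)       # 70% of that subgroup: 105 students
print(f"{target} of {bottom_30_pct} subgroup students must demonstrate proficiency")
```

Framing the objective this way makes progress monitoring concrete: teams can count, student by student, how close they are to the target.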

The number of goals per school varies, but is generally determined by the SST, principal, and other teachers brought into the process. However, all Title I schools must have a goal in each of the four core content areas. In addition, some schools have chosen to have a goal in each grade level or department. Every school must post a copy of its SIP on either its own website or the district website.

Regardless of the number of goals, the structure is based on the premise of collective responsibility: “A shared belief that the primary responsibility of each member of the organization is to ensure high levels of learning for every child” (Buffum, Mattos, & Weber, 2012, p. 15). Within the smaller teacher teams, usually based on grade level or content area, the teachers work collaboratively and assume collective responsibility to ensure that every student learns the essential knowledge and skills for that grade or course. The Instructional Framework, now in its second year of implementation, is the vehicle through which so much of this work is done. Domains 2 and 4 (intentional planning and collaboration) help bring Domain 1 to life. That type of work can only be done by those smaller teacher teams (often formed as professional learning communities), for some very good reasons:

·         The teachers are highly trained and credentialed in their subject.
·         The teachers know the content best.
·         The teachers, through creation, administration, and scoring of common formative assessments, have the freshest assessment data.
·         The teachers know their students best.
(Buffum, Mattos, & Weber, 2012, p. 33)

In other words, even if there is not a goal explicitly written for every content area or grade level, there are strategies and activities within each goal that pertain to all teachers, and every teacher is implementing the Instructional Framework.

As the school year progresses, collaborative teacher teams and the SST will review data to monitor progress toward the school improvement goals. The data is best gathered through the use of common formative assessments (also part of the Instructional Framework), so that strategy can be changed if needed, and thus there is a chance to improve, for both students and teachers. In addition, a school may decide to amend its SIP within the school year to clarify goals, strategies, and/or activities to better serve students.


So, to hopefully bring some clarity to the process, school improvement is not a one-time event, or even a weekly meeting. It is an ongoing process that lives every day, in every classroom. It is also our commitment to measure success by results, not intentions. Go back and look at the center of the cycle graphic: student achievement. School improvement is about every student, and making decisions that are best for them, to help each one of them learn at high levels.

Friday, September 5, 2014

Take That, Reverse It

I've always loved Gene Wilder's depiction of Willy Wonka, and quoting him for the title of this blog post about backward design is perfect. Based on the work of Grant Wiggins and Jay McTighe, and drawing in pieces from Marzano's Cognitive System, Marzano's Assessment Strategies, and the Michigan Assessment Consortium, we have developed an Assessment-Curriculum-Instruction Blueprint to assist teachers with their collaborative, intentional planning of units. It takes the traditional model of teaching first and drafting the assessment last, and reverses it.

The Blueprint itself is a six-hour professional learning session that will be offered later this fall for State Continuing Education Clock Hours, but this blog post will serve as a bird's-eye view of the process. As we move into Domains 2 and 4 of Marzano’s Instructional Framework (“The Art and Science of Teaching”), the intentionality and collaborative work are key. Thus, assume all pieces are done in a collaborative setting. The first step is stating the purpose of the end-of-unit common assessment. Teachers ask themselves two questions: why are we creating the assessment, and what is the desired outcome? This is closely related to the second step, stating each Standard/Strand from the curriculum that will be the subject of the assessment. Depending on the content area, those standards may be from the Michigan HSCEs or GLCEs, the CCSS as adopted by Michigan, AP/College Board, unique FHPS curriculum, or other nationally recognized standards.

Those first two steps usually do not take too much time. Step three is where the rubber meets the road: looking at rigor. Using Marzano’s Cognitive System, teachers take time to reflect upon what level of rigor is best suited for students to demonstrate proficiency in the identified Standards/Strands. Within the Blueprint, teachers are asked to attach one or more rigor types to each of the identified Standards/Strands. Marzano’s Cognitive System has four levels:

1) Knowledge Retrieval – recalling information from permanent memory; students are merely calling up facts, sequences, or processes exactly as they have been stored.
2) Comprehension – identifying what is important to remember and placing it into appropriate categories; students use synthesis to identify the most important parts of concepts and delete any that are insignificant or extraneous.
3) Analysis – engaging students to use what they know to create new insights and invent ways of using what they have learned in new situations.
4) Knowledge Utilization – using the highest level of cognitive processes, examples include weighing options to determine the most appropriate course of action, experimental inquiry, and problem-solving when an obstacle is encountered.

Building upon that work, teachers then move to step four and begin selecting or drafting assessment items. As the process unfolds, teachers reflect upon whether the types of items and number of items comprise a balanced assessment. In addition, teachers contemplate whether there is enough rigor and depth, given that the rigor levels possible for each assessment type are dependent on the content of the item itself:

1) Selected Response (Multiple Choice, True/False, Matching) -- the rigor is equivalent to Knowledge Retrieval, and samples students’ mastery of knowledge elements.

2) Constructed Response (Diagram, Fill in Blank, Short Answer, Web, Concept Map, Flowchart, Graph, Table, Matrix, Illustration) -- the rigor is equivalent to Knowledge Retrieval or Comprehension, dependent on content, and samples students’ mastery of knowledge elements and suggests understanding of relationships; brief descriptions of simple problem solutions provide a window, albeit shallow, into reasoning proficiency.

3) Extended Constructed Response (Essay, Research Report, Lab Report) -- the rigor is equivalent to Knowledge Retrieval, Comprehension, Analysis, or Knowledge Utilization, dependent on content, and taps students’ understanding of relationships among elements of knowledge; longer descriptions of complex problem solutions may provide a deeper window into reasoning proficiency.

4) Performance (Presentation, Movement, Science Lab, Athletic Skill, Dramatization, Enactment, Project, Debate, Model, Exhibition, Performance Task, Portfolio) -- the rigor is equivalent to Knowledge Retrieval, Comprehension, Analysis, or Knowledge Utilization, dependent on content, and infers students’ reasoning proficiency from direct observation; evaluates skills as they are being performed; assesses both proficiency in carrying out steps in product development and attributes of the product itself.

5) Observations/Conversations (Oral Questioning, Observation, Interview, Conference, Process Description, Checklist, Rating Scale, Journal Sharing, Thinking Aloud A Process, Student Self-Assessment, Peer Review) -- the rigor is equivalent to Knowledge Retrieval, Comprehension, Analysis, or Knowledge Utilization, dependent on content, and explores students’ mastery selectively but in depth; infers reasoning proficiency more deeply by asking students to think aloud or through focused, follow-up questions; assesses skill in oral communication directly; probes knowledge of procedures and attributes of quality but not product quality itself.
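For teams who like a quick reference while drafting items, the mapping above can be treated as a simple lookup table. This is only an illustrative sketch of the rigor-by-item-type relationships described in this post; the names used here are for the example, not part of the Blueprint itself:

```python
# Sketch of the rigor-by-item-type mapping described above, usable as a quick
# sanity check while drafting a common assessment. Names are illustrative only.
ALLOWED_RIGOR = {
    "selected_response": {"Knowledge Retrieval"},
    "constructed_response": {"Knowledge Retrieval", "Comprehension"},
    "extended_constructed_response": {"Knowledge Retrieval", "Comprehension",
                                      "Analysis", "Knowledge Utilization"},
    "performance": {"Knowledge Retrieval", "Comprehension",
                    "Analysis", "Knowledge Utilization"},
    "observations_conversations": {"Knowledge Retrieval", "Comprehension",
                                   "Analysis", "Knowledge Utilization"},
}

def rigor_is_possible(item_type: str, intended_rigor: str) -> bool:
    """Return True if the intended rigor level is attainable for the item type."""
    return intended_rigor in ALLOWED_RIGOR.get(item_type, set())

print(rigor_is_possible("selected_response", "Analysis"))        # False
print(rigor_is_possible("performance", "Knowledge Utilization")) # True
```

The point of the table is the conversation it prompts: a multiple-choice item cannot, by itself, demonstrate Analysis, so a team targeting higher rigor must reach for constructed-response or performance items.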

Again, steps three and four require substantial intentional planning and thought by teachers, but they are done in a collaborative setting. The work also reveals whether there is a “stretch” in the assessment to challenge students, whether every item on the assessment is tied to an identified Standard/Strand, and whether the numerical value of each item carries a weight that aligns to its rigor. In addition, teachers collaboratively build a rubric to score the assessment.

Now that the assessment is built, teachers move to planning their instruction with the Instructional Framework. The first move is within DQ1 – Communicating Learning Goals and Feedback – all three elements. As teachers then move through DQs 2-9, reflections include how and when the content represented in the assessment will be taught to ensure learning for all students, determining resources that will be needed, deciding methods and strategies to engage all learners, and what common formative assessments might be employed along the way. 

Once the assessment is administered and collaboratively scored, what to do with the data? The Blueprint suggests five beginning questions: 1) what did you notice in the data? 2) what surprised you in the data? 3) what feels good to see and is affirming? 4) is there anything that raised questions for you? 5) what do you need or want to know more about? If teams request it, an instructional coach will be present to help process the data analysis.

The data generated also has to be shared and acted upon. How will results be shared with students and parents? In what form and for what purpose? Are there any other stakeholders that should be given the data? Then, how and when will the collaborative teachers address students who did not demonstrate proficiency on the essential learning embodied in the Standards/Strands? How and when will the collaborative teachers address students who demonstrated advanced proficiency? These are collective group decisions to be made within the professional learning community context.

Finally, there is a last column on the Blueprint that is for personal reflection. It asks teachers to contemplate what they might do differently the next time in creating the assessment, planning the instruction, and teaching the content to improve student learning. Not only does this help our students, but it also aids our own professional growth.

Whew. It's a lot, we know. But we're here to work and learn right along with you. As indicated, a six-hour session will be offered later this fall to learn more about the Blueprint and get hands-on practice in a collaborative setting. If you have any comments in the interim, please post them below.

J. Walton