  1.

  2.

  3.

  4. We are still interested in assessing the depth and breadth of the NGSS and other 3-D / Framework-based standards, but we (as a field) need to define what that will mean and what it will look like in this context, because it will be different from what we’ve typically expected.

  5.

  6. How will we know whether new assessments – ones that purport to assess Framework-based standards – actually do what they purport to do (for purposes of this conversation, specifically in terms of the complexity question)?

  7. Again, we certainly need to define what is “acceptable” and what is appropriate. But the goals of the NGSS and other 3-D standards are truly compromised if we neglect to hold assessments accountable for assessing students at an appropriate level of complexity, one that corresponds to the expectations of the standards.

  8. DOK is successfully used to interpret standards and add clarity to conversations about complexity in our education system. Based on work with hundreds of reviewer panels comprising thousands of educators, we have evidence (in the form of inter-rater reliability and other statistical measures) that the DOK language system can be used effectively and reliably to evaluate the complexity of educational materials.
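A common way to quantify the inter-rater reliability mentioned above is Cohen’s kappa, which measures how often two raters agree beyond what chance alone would produce. This is a minimal sketch; the DOK codings below are hypothetical, not data from any actual WebbAlign panel:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters coded independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical DOK levels (1-4) assigned by two panelists to 8 tasks.
a = [2, 2, 3, 1, 2, 4, 3, 2]
b = [2, 2, 3, 2, 2, 4, 3, 2]
print(round(cohens_kappa(a, b), 3))  # prints 0.795
```

Values near 1 indicate agreement well beyond chance; panel studies typically report kappa (or a related statistic) alongside raw percent agreement, since percent agreement alone overstates reliability when one DOK level dominates.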

  9. DOK is a tool that educators use to inform the interpretation and implementation of standards – to ensure that the complexity of engagement expected by the standards carries through to the other components of the system. It is used throughout the US as well as beyond our borders.

  10. Our program recently hosted a two-day meeting to gather stakeholders from different parts of the system (who are taking a variety of approaches to working with NGSS complexity) to calibrate our understanding and interpretation of complexity within the NGSS and other Framework-based 3-D standards.

  11. It’s important to acknowledge that there is not necessarily a shared understanding within the education world in general, nor within the science education world in particular, about what complexity means. With respect to the NGSS, we’ve heard a broad range of perspectives on and interpretations of complexity; this was evident from discussions in our NGSS / DOK stakeholder summit. This is a key part of what is so powerful about DOK (or any common language system): it allows stakeholders in different parts of the system to communicate.

  12. From the DOK perspective, content complexity is multidimensional: multiple factors contribute to the complexity of engagement required (by a learning expectation, a task, etc.). The ideas of “knowledge in use” and “sense-making,” often cited in materials related to 3-D science standards, relate strongly to some of these factors, e.g. suggesting an expectation that there is some degree of processing of concepts and skills in authentic contexts. The “context” factor is directly related to the selection of a phenomenon: different contexts afford different opportunities for engagement. Observing organelles in the context of a cause-effect relationship or some other fairly routine interaction may offer different opportunities for engagement than a novel problem that requires students to grapple with a variety of information and data to figure out how to even begin making sense of the problem. With respect to novelty, a main consideration for 3-D curriculum, instruction, and assessment is the phenomena used. If we use a phenomenon in the classroom that is already hugely familiar to students, then the opportunities for engagement we think we are offering might not actually be the opportunities we are providing. The same holds on assessments: whether we provide a very familiar or a novel context affects the complexity of the subsequent questions, because instead of working through a question during the assessment, students may simply be recalling the answer.

  13. We find that difficulty and complexity are often conflated. They are certainly related, but they are different. It is important to differentiate between them as we consider NGSS (and other 3-D) assessments.

  14. Clarification: this “DOK wheel” (AKA “the wheel of misfortune”) is NOT related to Norman Webb and/or the WebbAlign program. We strongly disagree with the message of this wheel; we emphasize that it is NOT possible to rely on a verb to interpret the complexity of a task. We noted that we sorted tasks that all used the same verb (e.g. “draw”) at all different levels of complexity. “Defining” a word is different from “defining” policy, etc. This wheel can be a good thesaurus, but it is not related to DOK.

  15.

  16. When working with educators and content developers, we find that expectations for [cognitive] complexity are not always clearly interpreted, even when using the language of Appendix F. For example, the two SEP statements on this slide are ones that we see applied at [what we would call] DOK 1, 2, 3, and even 4. This is an experience I’ve heard echoed multiple times over the last day and a half of this conference.

  17. This is something that was recognized by the writers of the Framework. In this quote, they are talking about the range of types of models. Note that they are talking about the complexity of the MODEL, while I’m talking about the complexity of ENGAGEMENT with a model – but a more complex model likely provides more opportunities for complex engagement. This is exactly how we use DOK: to add clarity to the work we’re doing, and to have a tool we can use to reflect on whether a task is doing what we intend it to do.

  18. We can’t rely on Appendix F to tell us about complexity, because its language can be interpreted in very different ways. Similarly, we can’t rely on Appendix F when evaluating the complexity of tasks, because the application of any practice depends on the context and specifics of the task.

  19. In practice, we have found that practitioners, and even folks who were involved with Framework-influenced standards, are able to differentiate between and among the expectations (for assessment) of the different PEs/standards. But if we accept the assertion that everything is, or should be, maximum complexity, then we need to reconcile it with another common assertion: that we want to assess the NGSS (and other Framework-based standards) at a “range of complexity.”

  20. Per state directive, the standards used for a recent alignment study were in this format. Georgia educator expertise contributed to the interpretation of the standards, but “the standards” were presented as the statements shown here. A panel of 4 internal educators (Georgia-based) and 4 external educators (from various states) went through a consensus discussion process to determine the DOK of each of 24 high school biology standards. The panelists’ consensus coding for the subset of standards shown on this slide is included above. For 5.b, for example, the panelists thought that this was an expectation for students to really demonstrate a conceptual understanding of the cycling of matter and the flow of energy – to make sense of the relationships and interactions in the system. They expected students to use typical models and typical representations of these processes. They did NOT think that it was an expectation to engage in abstract and non-routine thinking to develop a novel representation of these concepts. The panel decided that 16 of the 24 standards used in the alignment analysis were DOK 2, emphasizing engagement with questions and tasks that build conceptual understanding of science, including developing one’s own understanding of relationships and interactions between and among ideas and concepts. Six standards were considered to be DOK 3, requiring hypothetical and strategic thinking about non-routine problems, and two were considered to be DOK 4.
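The distribution reported above (16 at DOK 2, 6 at DOK 3, 2 at DOK 4) can be tallied quickly to confirm it accounts for all 24 standards; the list below simply encodes those reported counts, not the standard-by-standard codings themselves:

```python
from collections import Counter

# Reported consensus codings for the 24 GA high school biology standards.
codings = [2] * 16 + [3] * 6 + [4] * 2
dist = Counter(codings)

assert sum(dist.values()) == 24  # every standard accounted for
print(dict(sorted(dist.items())))  # prints {2: 16, 3: 6, 4: 2}
```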

  21. Although those closest to the standards agree that the PE statements, or any statement in isolation, should not be used or considered as consistent with the vision of the Framework, it is nevertheless happening in some states. As we know, there are all sorts of reasons for why we do things; there are practical and political reasons, for example. NOTE, per a WA correction post-session: although the OSPI RFP stated the above, the foundation boxes and other aspects of the standards were supposed to be considered for the alignment judgment, and NOT “the specific language in the PE.”

  22. Speaking of political reasons: we heard in our recent meetings that this statement, that PEs might be adopted in isolation, was something that “had to happen” but wasn’t the intent. In any case, there is lots of messaging that the statements of the PEs are intended to guide assessment (see the underlined statement here, e.g., and the next slide).
