2 Content Structures

This chapter describes the key assessment structures that support the Dynamic Learning Maps® (DLM®) Alternate Assessment System’s purpose and program goals.

The DLM System is based on large, fine-grained learning maps. These learning maps are highly connected representations of the pathways for how academic content is acquired, as reflected in research literature. The DLM maps consist of nodes that represent discrete knowledge, skills, and understandings in either English language arts (ELA) or mathematics, as well as important cognitive skills that support the acquisition of the learning targets associated with grade-level content standards. Connections between nodes represent the development of the knowledge, skills, and understandings. With approximately 1,900 nodes in the ELA map, 2,400 nodes in the mathematics map, and 150 foundational nodes that are associated with both subjects (foundational nodes represent basic skills that are required across subjects and are important precursors to developing competency in learning targets associated with grade-level academic standards), the DLM learning maps go beyond traditional learning progressions to include multiple pathways by which students with the most significant cognitive disabilities may acquire the academic content.

Seen in their entirety, the DLM learning maps are highly complex. A closer look at smaller sections of the DLM learning maps reveals how the discrete nodes are described and connected. Figure 2.1 provides an illustration of a small segment of the DLM ELA learning map. The learning maps are read from the top down, moving from the least to the most complex knowledge, skills, and understandings.

Figure 2.1: Sample Excerpt From the DLM English Language Arts Learning Map

A small section of the ELA learning map showing nodes and their connections.
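The node-and-connection structure described above can be sketched as a small directed graph. This is a minimal illustration only, not DLM code: the node identifiers and descriptions are hypothetical stand-ins, and the real maps contain thousands of nodes.

```python
# Minimal sketch of a learning map as a directed graph of nodes and
# connections. Node ids and descriptions are hypothetical, not DLM content.
from collections import defaultdict

class LearningMap:
    def __init__(self):
        self.nodes = {}                 # node id -> description
        self.edges = defaultdict(list)  # origin id -> destination ids

    def add_node(self, node_id, description):
        self.nodes[node_id] = description

    def connect(self, origin, destination):
        # An origin node precedes and is less complex than its destination.
        self.edges[origin].append(destination)

    def pathways(self, start, goal, path=None):
        # Enumerate every pathway from start to goal; unlike a linear
        # learning progression, a map may contain several.
        path = (path or []) + [start]
        if start == goal:
            return [path]
        return [p for nxt in self.edges[start]
                for p in self.pathways(nxt, goal, path)]

m = LearningMap()
m.add_node("A", "attends to object characteristics given language cues")
m.add_node("B", "matches a picture to the real object")
m.add_node("C", "identifies concrete details in an informational text")
m.add_node("D", "identifies details in a familiar text")
m.connect("A", "B")
m.connect("A", "D")
m.connect("B", "C")
m.connect("D", "C")
print(m.pathways("A", "C"))  # two distinct pathways reach the same target
```

The key property illustrated here is that a single target node can be reached through more than one pathway, which is what distinguishes the maps from a linear learning progression.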

Extensive, detailed work was necessary to establish and refine the DLM learning maps in light of the Common Core State Standards (CCSS, National Governors Association Center for Best Practices and Council of Chief State School Officers, 2010) and the needs of the student population. Guided by in-depth reviews of literature and research, as well as extensive input from experts and practitioners, the DLM learning maps are the conceptual and content basis for the DLM System.

This chapter begins with a description of the Essential Elements (EEs), which are the learning targets for the DLM assessments. We also describe the development process for the EEs. Then we describe the development process of the DLM learning maps and how they were linked to the EEs. The EEs and the learning maps were developed concurrently, but separately, and then integrated. We then explore how the learning maps are organized into claims and conceptual areas and describe the evaluation of the learning map structure. The chapter then describes the development of the DLM assessment blueprints, which define the EEs assessed in each grade and subject. Finally, the chapter concludes with a description of an external alignment study.

2.1 Essential Elements

EEs are specific statements of knowledge, skills, and understandings in a subject. The purpose of the EEs is to build a bridge from grade-level college and career readiness content standards to academic expectations, for both instruction and assessment, for students with the most significant cognitive disabilities. They are based on the grade-level general education content standards but are at reduced complexity, linking those standards to academic expectations that are at an appropriate level of challenge for students with the most significant cognitive disabilities. In other words, EEs are alternate content standards that correspond to the grade-level college and career readiness content standards used in general education assessments.

The progression of grade-level EEs across years of instruction reflects the changing priorities for instruction and learning as students move from grade to grade. The differences between EEs at different grade levels are subtler than what is typically seen in content standards for general education; the grade-to-grade differences in the EEs may consist of added knowledge, skills, and understandings that are not of obvious increasing rigor compared to the grade-to-grade differences found in the general education content standards. However, to the degree possible, the knowledge, skills, and understandings represented by the EEs increase in complexity across the grades, with clear links to the shifting emphases at each grade level in the general education content standards.

An example of three related EEs from the CCSS ELA “Key Ideas and Details” strand is shown in Table 2.1. The content shown is from elementary (Grade 3), middle (Grade 7), and high school (Grades 9–10). There is an increase in what students are asked to do as grade levels increase. Tested EEs for each grade are available on the DLM website for ELA and mathematics.

Table 2.1: Example of Increasing Complexity in Related Essential Elements Across Grades
Essential Element Grade Essential Element Description
ELA.EE.RI.3.2 Grade 3 Identify details in a text.
ELA.EE.RI.7.2 Grade 7 Determine two or more central ideas in a text.
ELA.EE.RI.9-10.2 Grades 9–10 Determine the central idea of the text and select details to support it.

The EEs specify grade-level learning targets, and the DLM learning maps clarify how students can reach those learning targets. For each EE, critical junctures on the path toward the EE’s learning target(s) are identified by one or more nodes in the DLM learning maps. Nodes are also identified past the EE’s learning target(s) to give students an opportunity to grow toward the learning targets of grade-level general education content standards.

These critical junctures of one or more related nodes are called linkage levels. The Target linkage level aligns to the EE’s grade-level learning target(s). There are three linkage levels below the Target (i.e., Initial Precursor, Distal Precursor, and Proximal Precursor) and one linkage level beyond the Target (i.e., Successor). Table 2.2 shows the increasing complexity from linkage level to linkage level for the same EEs shown in Table 2.1. This example provides an illustration of how complexity increases both across linkage levels and across grade levels. The linkage levels are the unit of assessment for the DLM System.

Table 2.2: Example of Increasing Complexity of Skills in Related Linkage Levels for Three Essential Elements Across Grades
Linkage level ELA.EE.RI.3.2 ELA.EE.RI.7.2 ELA.EE.RI.9-10.2
Initial Precursor The student can demonstrate an understanding that absent objects still exist despite not being visible by searching for objects that are hidden or not visible. When provided with a picture of an object, or other symbolic representation of that object, the student can correctly match the picture with the real object. The student can identify concrete details, such as individuals, events, or ideas, in a familiar informational text.
Distal Precursor When provided with language cues, the student can pay attention to the entire object, a characteristic of the object, or an action the object can perform. After hearing or reading a beginner-level informational text, the student can identify a concrete detail in the text. After reading or hearing an informational text, the student can identify the topic of the text and textual details that are related to the topic.
Proximal Precursor When provided with illustrations that are related and unrelated to a familiar text, the student can identify the illustrations that relate to aspects of the familiar text, such as people, places, things, and ideas. After hearing or reading an informational text, the student can identify the implicit main idea of the text and identify the relationships between concrete details. After reading or hearing an informational text, the student is able to summarize the information from the text.
Target After hearing or reading a beginner-level informational text, the student can identify a concrete detail in the text. After reading or hearing an informational text, the student can identify more than one main idea in the text. After reading or hearing an informational text, the student can identify the central idea of the text and the details that contribute to the understanding of the central idea.
Successor After hearing or reading an informational text, the student can identify explicit details that are key to the information in the text. After reading or hearing an informational text, the student can demonstrate an understanding of the summary of the text by identifying an accurate summary or expressing the main ideas of the text. The student can identify both the implicit and explicit meaning of an informational text by identifying specific details and citations within the text that support the meaning.
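Because the five linkage levels form an ordered scale from least to most complex, they can be sketched as an ordered enumeration. This is an illustrative model only; the integer values are arbitrary ranks used to express the ordering and are not part of the DLM System.

```python
# The five linkage levels in order of increasing complexity. The integer
# values are arbitrary ranks used only to express the ordering.
from enum import IntEnum

class LinkageLevel(IntEnum):
    INITIAL_PRECURSOR = 1
    DISTAL_PRECURSOR = 2
    PROXIMAL_PRECURSOR = 3
    TARGET = 4     # aligns to the EE's grade-level learning target(s)
    SUCCESSOR = 5  # extends beyond the Target

def precursor_levels():
    # The three linkage levels below the Target.
    return [lvl for lvl in LinkageLevel if lvl < LinkageLevel.TARGET]

print([lvl.name for lvl in precursor_levels()])
# ['INITIAL_PRECURSOR', 'DISTAL_PRECURSOR', 'PROXIMAL_PRECURSOR']
```

Modeling the levels as an ordered type makes the structure explicit: three levels precede the Target and exactly one extends beyond it.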

2.2 Development of the Essential Elements

As previously mentioned, the EEs and the learning maps were developed concurrently, but separately. The development of the EEs involved DLM project staff; Edvantia, Inc., a DLM subcontractor; the DLM Governance Board; and content experts and educators of students with significant cognitive disabilities who were recruited by the DLM Governance Board. Initial planning meetings were held in February 2011 to ensure that governance board members were in agreement with the process designed by Edvantia and the goals of the EEs. At the initial meeting and throughout the development process, stakeholders and DLM staff prioritized EEs that increased in complexity across grades and reflected high academic expectations aligned to college and career readiness standards for students with significant cognitive disabilities.

Stakeholder meetings were held via webinar in March 2011 to prepare materials for development meetings. A series of subject-specific webinars were conducted in April 2011 to train panelists before meeting face-to-face to draft the EEs in ELA and mathematics in April–May 2011.

Led by Edvantia, representatives from each of the DLM partner state education agencies (SEAs) at the time, along with the selected educators and content specialists, developed the original draft of the DLM EEs. The first meeting was held in Kansas City, Missouri, in April 2011 to draft the ELA EEs from kindergarten through twelfth grade. More than 70 participants attended, representing 12 of the 13 states that were working to develop the DLM assessments at that time. A similar meeting was held to draft the mathematics EEs in May 2011, with more than 70 participants representing all 13 member states.

Drafts of the EEs developed at the meetings were compiled and released to participants for review and feedback. Panelists and other stakeholders took part in webinars from July through October 2011 to review drafts. The last drafts were reviewed by SEA and content experts in November 2011. The finalized version was released for state approval in February 2012 and, when approved, was released online in March 2012.

Additional revisions to the EEs were made during the concurrent development of the learning maps, as described in section 2.3.5. Final documents are available publicly on the DLM website for ELA and mathematics.

2.3 Development of the Learning Maps

Learning maps are a type of cognitive model composed of multiple interconnected learning targets and other critical knowledge, skills, and understandings that support student learning. The development of the DLM assessment system’s learning maps began with a review of the existing literature on learning progressions, a widely accepted and related approach to assessing student progress over time (Confrey et al., 2014, 2017; Shepard, 2018). Learning progressions depict student learning of the critical concepts related to a topic, as reflected in grade-level content standards (Bechard et al., 2012), through multiple, sequenced building blocks that precede and support mastery (Hess, 2012; Popham, 2011). These building blocks increase in complexity over time and provide intermediary goals that students may work toward on the way to the learning targets (Bechard et al., 2012). Formative assessments often draw on learning progressions to help educators understand the gap between current student performance and a learning target. Despite these benefits, learning progressions typically represent only a single, linear pathway depicting how the average general education student learns the content.

Because of this limitation, learning progressions may fail to represent the learning of students with the most significant cognitive disabilities (Hess, 2012; Kearns et al., 2011). The single, linear pathway of a learning progression toward a grade-level content standard cannot adequately represent the complexity and diversity of learning among all of these students (e.g., acquiring writing skills with limited mobility or learning to read with hearing impairments). Students with the most significant cognitive disabilities have a range of sensory differences and may need to demonstrate their knowledge, skills, and understandings in different ways than those represented in learning progressions for students who do not use assistive technology.

DLM maps expanded on existing notions of learning progressions by accounting for the multiple pathways students may use to acquire the same learning targets. They also depict the hypothesized connections and interactions described in the research literature between conceptually related content standards within and across topics and grade levels. These changes formed the learning maps guiding the development of the DLM assessment. The resulting maps represent a web-like network of connected learning targets and the critical knowledge, skills, and understandings supporting them (Bechard et al., 2012). The supporting knowledge, skills, and understandings can depict intermediary learning targets advancing students toward acquiring individual content standards. To complement the progression of grade-level learning targets, the DLM learning maps also include the early cognitive skills developing between birth and school entry that form the basis for all subsequent content learning. Finally, the multiple pathways through the DLM learning maps provide accessibility for students with the most significant cognitive disabilities to the learning targets and the critical supporting knowledge, skills, and understandings (Erickson & Karvonen, 2014).

Project staff developed the DLM learning maps in ELA and mathematics, both of which begin with a common set of basic cognitive skills that provide a basis for academic skill development. The project staff consisted of individuals with expertise in cognitive psychology, literacy, and mathematics, as well as individuals with experience with students with significant cognitive disabilities, among other areas. To create these interconnected DLM learning maps, map developers followed a four-step process.

  1. Identification and representation of learning targets
  2. Identification and representation of critical supporting knowledge, skills, and understandings
  3. Development of connections between nodes and building accessible pathways
  4. Linking the learning maps to the EEs

In each step, the synthesized literature review informed development activities, which were followed by rounds of internal review to ensure the learning maps were consistent with the guidelines outlined within each step. These four steps are described in the subsequent sections.

2.3.1 Identification and Representation of Learning Targets

The first step in the development of the DLM learning maps was to identify learning targets. The DLM learning maps consist of two basic elements: nodes and connections. The nodes are essential, unique, observable, and testable knowledge, skills, and understandings. Nodes can either directly represent learning targets or may represent the critical knowledge, skills, and understandings supporting the acquisition of the learning targets. The second element, connections, forms the relationship between nodes.

Because the DLM assessments measure student achievement of the alternate grade-level expectations—the EEs, which are aligned to college and career readiness content standards—the CCSS served as a starting place for node development. Specifically, grade-level CCSS became individual nodes within the DLM learning maps. The CCSS were initially used in early map development. The EEs were later integrated into the DLM learning maps as an additional set of learning targets that largely preceded the CCSS learning targets.

During initial map development, when a content standard contained multiple knowledge, skills, and understandings unsuitable to be combined into a single node, these different knowledge, skills, and understandings were represented as distinct nodes in the DLM learning maps. These nodes representing the grade-level learning targets are called learning target nodes. Once the learning target nodes had been created, they were arranged in the DLM learning maps according to grade-level(s).

2.3.2 Identification and Representation of Critical Supporting Knowledge, Skills, and Understandings

After identifying the learning targets, the next step in the development of the DLM learning maps focused on identifying and representing the critical knowledge, skills, and understandings supporting the acquisition of the learning targets and filling the gaps between the learning target nodes in the DLM learning maps.

The nodes representing these critical supporting knowledge, skills, and understandings are called supporting nodes. The results from a systematic literature review provided the primary input for creating these supporting nodes. Given that the CCSS for kindergarten begin at a relatively complex cognitive and language level, the map developers employed a bottom-up approach in the literature search, looking initially for research concerning early cognitive development (e.g., attending to object characteristics due to language cues) and then building toward the more advanced grade-level learning targets (e.g., answering wh-questions about details in a narrative). Wherever possible, the map developers used empirical research to drive the development and sequence of the supporting nodes. Table 2.3 provides an example of the alignment between a research-based learning progression and the supporting nodes based on it.

Table 2.3: Example Alignment Between Learning Progression in Research Literature and Learning Map Nodes
Stages of spelling development Nodes in the learning map
Precommunicative spelling Spells words by including random letters
Semiphonetic spelling Spells words by partially using letters to represent individual sounds
Phonetic spelling Spells words by using letters to represent individual sounds
Transitional spelling Spells words by using letters based on common letter patterns found in print
Conventional spelling Spells words correctly using knowledge of letter-sound relationships and common spelling patterns
See Gentry (1982) for details.

This step also involved making hypotheses and conducting logical analyses about potential supporting nodes to fill in sections of the learning maps where the literature review provided no input. The map developers used common instructional practices, other curricular information, and expert judgment to generate ideas for the supporting knowledge, skills, and understandings that fill the gaps between learning target nodes. Regardless of their origin, each supporting node had a succinct name summarizing the knowledge, skill, or understanding; an extended description providing additional detail; and an observation describing a context in which students could demonstrate their learning of the node’s content.
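The three parts of a supporting node (a succinct name, an extended description, and an observation) can be sketched as a simple record. The class and field names, and the example content, are hypothetical illustrations, not actual DLM node text.

```python
# Sketch of the three parts of a supporting node. The class name, fields,
# and example content are hypothetical illustrations, not DLM node text.
from dataclasses import dataclass

@dataclass
class SupportingNode:
    name: str         # succinct summary of the knowledge, skill, or understanding
    description: str  # extended detail about the node's content
    observation: str  # a context in which a student could demonstrate learning

node = SupportingNode(
    name="Spells words phonetically",
    description="Uses letter-sound knowledge to spell untaught words.",
    observation="Given a dictated word, the student writes or selects "
                "letters that match each sound in the word.",
)
print(node.name)
```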

Despite the DLM System’s focus on students with the most significant cognitive disabilities, the empirical literature on the acquisition of academic skills by typical students primarily provided the basis for developing the learning maps in ELA and mathematics. Systematic literature reviews revealed a dearth of research related to academic skill development among students with the most significant cognitive disabilities. As a result, the map developers focused on first building a “super highway” to represent typical development with multiple pathways to learning targets. The map developers then adapted the learning maps by adding any additional pathways specific to students with significant cognitive disabilities as needed.

2.3.2.1 Critical Sources

Node development was based on a systematic literature review of articles, books, and book chapters summarizing the developmental research in a domain area. Book chapters and research syntheses broadly surveying the literature in a given domain were most useful to the map developers in developing the learning maps in ELA and mathematics. The grade-level content standards provided the parameters to guide the literature search. Node development began with the map developers identifying key terms within the content standards and locating relevant research handbooks or edited volumes. These broad literature reviews were of the greatest utility because they often synthesized research findings into a developmental learning trajectory of the academic skills pertinent to the domain (e.g., Nippold, 2007; Sarama & Clements, 2009). Additionally, the map developers identified individual studies that were considered seminal to a particular domain, which could be used when building supporting nodes for a specific section of the DLM learning maps. If a particular researcher’s empirical work was sought out, the map developers looked for articles summarizing a series of findings into a developmental sequence (often using “acquisition” as a search term). The map developers also identified articles reporting the findings of longitudinal and cross-sectional samples that provide insight into the developmental acquisition of academic skills. When these sources were unavailable or did not cover the entire learning map area, the map developers synthesized the findings from multiple empirical studies to generate appropriate supporting nodes.

2.3.2.2 Nodes Reflect the Products of Learning and Cognitive Advancement

The learning maps depict learning that begins between birth and school entry. The supporting nodes in this span reflect the learning and cognitive growth that occurs during this period by becoming increasingly complex. For example, early developing cognitive skills, such as seeking the attention of others, provide the basis for more complex, later-developing ones, such as using words to request, comment, and command.

The supporting knowledge, skills, and understandings that develop after school entry provide steps that help students meet grade-level learning targets. Nodes throughout the learning maps reflect increased cognitive skills (e.g., improving logical and analytical thought and increasing declarative, procedural, and metacognitive knowledge), resources (e.g., increasing working memory span and attention), and instruction (e.g., learning basic abstract symbols following exposure and explicit instruction). Figure 2.2 represents an example of the gradually increasing complexity of supporting nodes in the DLM learning maps. These shifts in the knowledge, skills, and understandings depicted in the supporting nodes complete the framework established by the grade-level learning targets.

Figure 2.2: Example Progressions of Supporting Nodes Toward the Target Node

A progression of node complexity: 1) Can identify feeling states in self, 2) Demonstrate receptive understanding of common feeling words, 3) Demonstrate receptive understanding of feeling words, and 4) Identify feeling words that describe personal emotional states.

Note. The target node for the Essential Element is the orange circle.

2.3.2.3 Foundational Nodes

Even in the early grades, learning targets associated with grade-level content standards require the application of basic cognitive skills. These basic cognitive skills are required across subjects and include such things as attention, self-regulation, language, and categorization. Foundational nodes represent these basic cognitive skills in ELA and mathematics. Some students with the most significant cognitive disabilities must be taught learning targets associated with foundational nodes to work toward learning targets associated with grade-level content standards (Kleinert et al., 2009).

2.3.2.4 Node Development Criteria

The nodes representing the critical knowledge, skills, and understandings supporting the acquisition of grade-level learning targets met specific criteria before inclusion. These criteria focused on the content and accessibility of the nodes and ensured that the learning maps consisted of only the nodes that were critical to representing student content learning. The criteria were that each node should be (1) essential, (2) unique, (3) appropriately sized, (4) accessible, and (5) observable and measurable. The first criterion for node development was whether the node was essential for students to advance toward one or more grade-level learning targets in the DLM learning maps. The results of the systematic literature review and map developer judgment guided the process of determining whether to include the node.

The second criterion of node development required that nodes not duplicate the knowledge, skills, and understandings depicted in the surrounding nodes. In other words, the map developers created nodes distinct from other nodes by extending the knowledge, skills, and understandings covered in the preceding node(s) and contributing to those associated with the succeeding node(s). If a new node was not unique, the map developers combined its content with an existing node in the DLM learning maps, as shown in Figure 2.3.

Figure 2.3: Combining Two Indistinct Nodes Into One Combined Node

Node 1 (Determine the meaning of words and phrases alluding to familiar narrative) and Node 2 (Determine the meaning of words and phrases alluding to unfamiliar narratives) into a single node (Determine the meaning of words and phrases alluding to other narratives).

The third criterion was that a node should represent only a single concept or skill. This ensured that a new node was of a similar grain size to the surrounding nodes in the maps. The map developers divided nodes that were too large into multiple, more reasonably sized nodes (see Figure 2.4) or combined nodes that were too small into a single node. The characteristics of students with the most significant cognitive disabilities guided the identification of the appropriate node size.

Figure 2.4: Dividing One Node Into Multiple, Reasonably Sized Nodes

One large node (Categorize, select, and sequence evidence to include in a text) is split into three nodes: 1) Categorize evidence into groups, 2) Select evidence to include in a text, and 3) Sequence evidence in order of presentation.

The fourth criterion of node development required the map developers to generate nodes accessible to students with the most significant cognitive disabilities. Accessible nodes reflect the principles of Universal Design for Learning (Center for Applied Special Technology, 2018) and are free of barriers for students with specific sensory, mobility, or communication disabilities. More specifically, they should account for variability among students, increasing the range of access to the node’s content. When provided with the necessary support, all students, regardless of their disability, should have the opportunity to demonstrate their learning. Inaccessible nodes require students to demonstrate their understanding of the node’s content through only one modality or format. The map developers made nodes more accessible by allowing multiple modalities or formats (see Table 2.4).

Table 2.4: Examples of the Accessible Node Criterion for Students With the Most Significant Cognitive Disabilities
Accessible nodes Nonaccessible nodes
Introduce the topic or book being written about and then state an opinion on it when writing an opinion piece Introduce the topic or book being written about and then state an opinion on it when writing an opinion piece using pencil and paper
Sort information related to a topic from print or digital sources into given categories Sort visual information related to a topic from print or digital sources into given categories

The final criterion for development was that the nodes were observable and measurable. If nodes are observable and measurable, students should have the opportunity to demonstrate their learning of the node’s content. These nodes require some form of expression that reflects the knowledge, skill, or understanding depicted in the node’s content, allowing educators and test developers to gauge student mastery. Nodes lack observability and measurability when they only occur within the student’s mind or require inferences from other behaviors.

In summary, the learning maps contain only nodes meeting the above requirements, and each node contains critical supporting knowledge, skills, and understandings for the acquisition of the learning targets.
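The five node development criteria can be summarized as a checklist that a candidate node must pass in full before entering the maps. This sketch is illustrative; the field names and the example node are assumptions, not part of any DLM specification.

```python
# Checklist for the five node development criteria. The field names and
# the example candidate node are assumptions made for this sketch.
NODE_CRITERIA = (
    "essential",
    "unique",
    "appropriately_sized",
    "accessible",
    "observable_and_measurable",
)

def meets_criteria(node):
    # A node enters the learning maps only if every criterion holds.
    failed = [c for c in NODE_CRITERIA if not node.get(c, False)]
    return len(failed) == 0, failed

candidate = {
    "essential": True,
    "unique": True,
    "appropriately_sized": True,
    "accessible": False,  # e.g., requires a single modality such as handwriting
    "observable_and_measurable": True,
}
ok, failed = meets_criteria(candidate)
print(ok, failed)  # False ['accessible']
```

Returning the list of failed criteria, rather than a bare yes or no, mirrors the review process described above, in which flagged nodes were revised (combined, divided, or made multimodal) rather than simply discarded.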

2.3.3 Development of Connections Between Nodes

After the learning target and supporting nodes were identified, they were arranged and connected according to their developmental acquisition, based on the empirical literature or on common instructional and curricular sequences. An individual connection represents the directionality of the relationship between two nodes—the origin node and the destination node. For inclusion in the DLM learning maps, a connection should be logical, appropriate, and accessible. The map developers built logical origin-to-destination connections that represent cognitive progressions, in which origin nodes are cognitively less complex than the destination nodes (see Figure 2.5A). Origin nodes precede and are hypothesized to develop before the destination nodes. An appropriate connection does not represent an overly large leap in complexity between the origin and destination nodes (see Figure 2.5B). The map developers built accessible origin-to-destination connections that do not contain any barriers limiting access to the origin and destination nodes’ content for students with the most significant cognitive disabilities (see Figure 2.5C).

Figure 2.5: Examples of Criteria for Making Connections Between Nodes

Three figures showing examples of the connection criteria described in the text.

Note. Panel A: Example and Nonexample of a logical connection between two nodes. Panel B: Example and Nonexample of an appropriate connection between two nodes. Panel C: Example and Nonexample of an accessible connection between nodes.
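The three connection criteria (logical, appropriate, accessible) can be expressed as simple checks on an origin-destination pair. The numeric complexity scale, the leap threshold, and the barrier sets here are assumptions made for the sketch, not DLM-defined quantities.

```python
# Illustrative checks for the three connection criteria. The complexity
# scale, leap threshold, and barrier sets are assumptions for this sketch.
MAX_COMPLEXITY_LEAP = 1  # assumed limit on the jump between two nodes

def connection_ok(origin, destination):
    # Logical: the origin develops before, and is less complex than,
    # the destination.
    logical = origin["complexity"] < destination["complexity"]
    # Appropriate: the step in complexity is not an overly large leap.
    appropriate = (destination["complexity"]
                   - origin["complexity"]) <= MAX_COMPLEXITY_LEAP
    # Accessible: neither node contains barriers for students with
    # sensory, mobility, or communication disabilities.
    accessible = not origin["barriers"] and not destination["barriers"]
    return logical and appropriate and accessible

a = {"complexity": 1, "barriers": set()}
b = {"complexity": 2, "barriers": set()}
c = {"complexity": 4, "barriers": set()}
print(connection_ok(a, b))  # True: a small, logical step
print(connection_ok(a, c))  # False: overly large leap in complexity
```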

As an example, small sections of the DLM learning map are provided in Figure 2.6 and Figure 2.7. This map section covers the nodes leading up to the spelling of untaught words phonetically. Figure 2.6 illustrates the structure of the DLM learning maps, including multiple pathways, while Figure 2.7 displays the nodes used for assessment. Both figures highlight a specific pathway to demonstrate the interconnected nature of the DLM learning maps. The pathway depicts how a student would progress from one node (i.e., can identify the first letter of their own name) to a later node (i.e., can spell words with inflectional endings). Items used in the DLM assessment system have been created to measure some of the nodes in this pathway, and these nodes have been color-coded to identify their location within the map section.

The colored nodes in both figures represent tested nodes, which are nodes that have been identified as making a significant contribution to the acquisition of the learning target by the map developers and content experts. These nodes typically precede or directly follow the learning target node (e.g., can spell words phonetically using letter-sound knowledge and common spelling patterns), but they are not the only nodes contributing to the acquisition of the learning target, nor do they prescribe the only route that can be taken toward acquiring it.

Figure 2.6: Section of the Learning Map Representing Basic Cognitive and Academic Skills

An image of the learning map showing multiple pathways between the tested nodes.

Note. The color-coded nodes in the map section represent a pathway of tested nodes, with the learning target for this Essential Element highlighted in purple.

Figure 2.7: Pathway of Nodes Covering Essential Element ELA.EE.L.6.2.b

A simplified version of the previous figure showing only the tested nodes.

Note. The color-coded nodes in the map section represent a pathway of tested nodes, with the learning target for this Essential Element highlighted in purple.

2.3.4 Accessibility of Nodes and Pathways

Creating appropriate learning targets alone does not ensure that all students with the most significant cognitive disabilities can access the content in the DLM learning maps. Some students have sensory, mobility, or communication disabilities that require different means of representation or of action and expression to provide evidence of mastery for some nodes in the DLM learning maps. A critical step in making the DLM learning maps accessible to all students involves verifying that nodes and pathways are appropriate for all students and their sensory, mobility, and communication needs. These pathways allow all students to achieve grade-level learning targets when provided with appropriate support and access to instruction and assessments based on the principles of Universal Design for Learning (Center for Applied Special Technology, 2018).

The map developers, in partnership with the Center for Literacy and Disability Studies (CLDS) at the University of North Carolina, enhanced the accessibility of the DLM learning maps for students with specific disabilities. The CLDS team reviewed each node and considered whether the node was accessible to individuals with learning differences across four primary areas: vision, hearing, mobility, and communication. Nodes flagged during this process were those most likely inaccessible to students with specific sensory, mobility, or communication disabilities even when potential accommodations were considered. For example, many early writing nodes involve skills such as scribbling before students can produce letters and numerals. For students with mobility differences, the writing acquisition process instead includes learning to use assistive technology to select letters and numbers. In this example, an accommodation allowing students to select scribbles would be inappropriate. As a result, the CLDS team flagged the early writing nodes related to scribbling as inaccessible because the cognitive process of learning to write involves some fundamental differences for students who use assistive technology to communicate. Clusters of these flagged nodes represented sections within the DLM learning maps that posed challenges for students with specific sensory, mobility, or communication disabilities.

For example, in Figure 2.8, students with mobility impairments would demonstrate their writing knowledge and skills in different ways than students who do not use assistive technology. In this example, the learning target may be ELA-1394, “Can produce the first letter in their own name.” The green pathway describes how students who do not use assistive technology might acquire the knowledge and skills for this target (e.g., drawing scribbles, diagonal lines, circles). Conversely, students with mobility impairments may instead acquire the skills to use assistive technology, such as using the technology to select letters first at random and then in a pattern, as shown in the orange pathway. These nodes represent the cognitive steps involved with learning to use writing methods different from those used by students without mobility impairments.

Figure 2.8: Hypothesized Pathways in Writing for Students With and Without Mobility Disabilities

A section of the learning map showing an accessible pathway for acquiring writing skills for students with mobility impairments.

Note. Green shading indicates the writing development of students without mobility impairments. Orange shading indicates an accessible path for students with mobility impairments using assistive technology.

2.3.5 Linking the Learning Maps to the Essential Elements

Because the primary goal of DLM assessments is to measure what students with the most significant cognitive disabilities know and can do, EEs were created to reflect more accurately the knowledge, skills, and understandings that are appropriately challenging grade-level learning targets for these students. Within each subject, the EEs were derived from the CCSS to represent academic skill development sequences similar to those of the CCSS.

The EEs were first written in 2012, based on the CCSS and independent of the learning map development process (see section 2.2 for a description of the EE development). At the same time that the EEs were being developed, map developers were actively engaged in building the maps in ELA and mathematics. Because the development of the EEs and the learning maps happened simultaneously, alignment between the two was not possible until the fall of 2012. Evaluating this alignment involved reconciling the content of the EEs with the content represented in the nodes and connections of the DLM learning maps in ELA and mathematics.

Teams of content experts worked together to revise the initial 2012 version of the EEs and the DLM maps to ensure appropriate alignment of these two elements of the assessment system. Alignment involved horizontal alignment of the EEs with the CCSS and vertical alignment of the EEs with meaningful progressions of skills represented by nodes in the DLM maps. The process of aligning the maps and the EEs began by identifying nodes in the maps that represented the EEs in mathematics and ELA. This process revealed areas in the maps where additional nodes were needed to account for incremental increases in expectations across related EEs from one grade to the next. Areas were also identified in which an EE was out of place developmentally with other EEs in the same or adjacent grades according to research that was incorporated into the maps. For example, adjustments were made when an EE related to a higher-grade map node appeared earlier on the map than an EE related to a lower-grade map node (e.g., a fifth-grade skill preceded a third-grade skill). Finally, the alignment process revealed EEs that were actually written as instructional tasks rather than learning outcomes. These EEs were revised to represent knowledge and skills rather than instructional tasks. These revisions were compiled and reviewed by the governance board in early 2013, with an approved final version of the EEs published in May 2013. Final documents are available publicly on the DLM website for ELA and mathematics.

This process of aligning the EEs and learning maps also resulted in significant revisions to the DLM learning maps to ensure that the nodes and connections represented a solid framework from which assessments could be developed. Depending on the complexity of the EE, one or more nodes in the DLM learning maps were aligned to the EE as learning targets. If no existing node(s) corresponded to the content of the EE, new nodes were created and placed in the DLM learning maps at appropriate locations according to their content. New nodes were placed in the DLM learning maps by analyzing the existing map structure to identify precursor and successor nodes to the new node. Once identified, the map developers proposed placements of new nodes and connections based on literature reviews and expert judgment.

2.4 Organizing the Learning Maps: Claims and Conceptual Areas

Large sections of the DLM learning maps are too complex to depict in a manageable map view or describe on a node-by-node basis. Instead, the larger sections are described by the claims and conceptual areas they represent. This organizational structure was designed to articulate where the content standards are located and their relationships to important cognitive concepts. Organization of the academic content in the DLM assessment system is illustrated conceptually in three layers (claims, conceptual areas, and EEs), as shown in Figure 2.9.

Figure 2.9: Layers of Content in the DLM Alternate Assessment System

A figure showing that sections of the learning maps can be grouped into claims, which are subdivided into conceptual areas. Within each conceptual area are Essential Elements.

Modern test development approaches, such as evidence-centered design (Mislevy et al., 2003; Mislevy & Riconscente, 2005), are founded on the idea that test design should start with specific claims about what students know and can do and the evidence needed to support such claims. While evidence-centered design is multifaceted, it starts with a set of claims regarding the major knowledge, skills, and understandings in the domains of interest (i.e., ELA and mathematics), as well as how they are acquired.

The DLM System divides both ELA and mathematics content into four broad claims, which are subdivided into nine conceptual areas for each subject. Claims are overt statements of what students with the most significant cognitive disabilities are expected to learn and be able to demonstrate when mastering the knowledge, skills, and understandings within a broad section of the DLM learning maps. As broad statements about expected student content learning, claims focus the scope of the assessment. Each claim includes two or three conceptual areas.

Conceptual areas are clusters of related concepts made up of multiple conceptually associated EEs and nodes that support, represent, and extend beyond the EE’s learning target(s). Conceptual areas further define the knowledge, skills, and understandings required to meet the broader claims and serve as models of how students may acquire and organize these knowledge, skills, and understandings in a subject. The DLM claims and conceptual areas apply to all grades in the system.

The four claims and nine conceptual areas for each subject are shown in Table 2.5. Following Table 2.5, Figure 2.10 provides an example of a conceptual area in the learning map.

Table 2.5: DLM Claims and Conceptual Areas
Claim Conceptual area
ELA.C1: Students can comprehend text in increasingly complex ways. ELA.C1.1: Determine Critical Elements of Text
ELA.C1.2: Construct Understandings of Text
ELA.C1.3: Integrate Ideas and Information From Text
ELA.C2: Students can produce writing for a range of purposes and audiences. ELA.C2.1: Use Writing to Communicate
ELA.C2.2: Integrate Ideas and Information in Writing
ELA.C3: Students can communicate for a range of purposes and audiences. ELA.C3.1: Use Language to Communicate With Others
ELA.C3.2: Clarify and Contribute in Discussion
ELA.C4: Students can engage in research/inquiry to investigate topics and present information. ELA.C4.1: Use Sources and Information
ELA.C4.2: Collaborate and Present Ideas
M.C1: Number Sense: Students demonstrate increasingly complex understandings of number sense. M.C1.1: Understand Number Structures (Counting, Place Value, Fractions)
M.C1.2: Compare, Compose, and Decompose Numbers and Sets
M.C1.3: Calculate Accurately and Efficiently Using Simple Arithmetic Operations
M.C2: Geometry: Students demonstrate increasingly complex spatial reasoning and understanding of geometric principles. M.C2.1: Understand and Use Geometric Properties of Two- and Three-Dimensional Shapes
M.C2.2: Solve Problems Involving Area, Perimeter, and Volume
M.C3: Measurement Data and Analysis: Students demonstrate increasingly complex understanding of measurement, data, and analytic procedures. M.C3.1: Understand and Use Measurement Principles and Units of Measure
M.C3.2: Represent and Interpret Data Displays
M.C4: Algebraic and Functional Reasoning: Students solve increasingly complex mathematical problems, making productive use of algebra and functions. M.C4.1: Use Operations and Models to Solve Problems
M.C4.2: Understand Patterns and Functional Thinking

Figure 2.10: Section of the English Language Arts Learning Map for Conceptual Area ELA.C1.2

A large section of the ELA learning map describing an entire conceptual area.

Note. Red circles indicate nodes aligned to Essential Elements.

Within each claim and conceptual area, smaller regions of the DLM learning maps are displayed in mini-maps for single EEs, as shown in Figure 2.11. Mini-maps identify which nodes are assessed at each linkage level for an EE, as well as untested nodes between the assessed nodes to support educators’ instructional decisions.

Figure 2.11: Example Mini-Map for an English Language Arts Essential Element

A small portion of the ELA learning map showing nodes, their connections, and which nodes are tested in linkage levels.

Note. Blue boxes indicate nodes organized into linkage levels. IP = Initial Precursor, DP = Distal Precursor, PP = Proximal Precursor, T = Target, S = Successor, UN = Untested node.

In summary, the DLM learning maps represent the paths students with the most significant cognitive disabilities may take to acquire the knowledge, skills, and understandings within a subject, claim, or conceptual area. EEs (the DLM learning targets) within a particular claim or conceptual area link to one another. Linkage levels are small collections of nodes that represent critical junctures on the path toward and beyond the learning target. The Target linkage level reflects the grade-level expectation aligned directly to the EE. The DLM claims and conceptual areas provide a framework for organizing nodes on the DLM learning maps and, accordingly, the EEs.

2.5 System Structure

Table 2.6 shows examples of the relationship between DLM system elements in ELA and mathematics. Assessment system elements are listed from broadest to most specific (i.e., from claim to learning map node).

Table 2.6: Assessment System Elements With Examples for English Language Arts and Mathematics
Element Description
English language arts
Claim ELA.C1 Student can comprehend text in increasingly complex ways.
Conceptual area ELA.C1.1 Determine Critical Elements of Text
Essential Element ELA.EE.RL.3.1 Answer who and what questions to demonstrate understanding of details in a text.
Target linkage level The student can answer who and what questions about details in a story.
DLM learning map node Can answer who and what questions about details in a narrative.
Mathematics
Claim M.C1 Number Sense: Students demonstrate increasingly complex understanding of number sense.
Conceptual area M.C1.3 Calculate Accurately and Efficiently Using Simple Arithmetic Operations
Essential Element M.EE.6.NS.2 Apply the concept of fair share and equal shares to divide.
Target linkage level Demonstrate understanding of division by splitting a set into an equal number of subsets and communicating the quotient as the number of equal subsets (e.g., a set consisting of 15 objects has three subsets, each containing 5 objects).
DLM learning map node Demonstrate the concept of division.

The overall structure of the DLM System has four key relationships between system elements (see Figure 2.12; the numbers below correspond to those shown in the figure):

  1. College and career readiness content standards and EEs for each grade level
  2. An EE and its target-level node(s)
  3. An EE and its associated linkage levels
  4. DLM learning map nodes within a linkage level and assessment items

Figure 2.12: Relationships in the DLM Alternate Assessment System

A flowchart showing that EEs are derived from the college and career readiness content standards. The EEs are associated with five linkage levels and the target-level nodes. The linkage levels are measured by items organized into testlets.

2.6 Evaluation of the Learning Map Structure

The first evaluation of the DLM learning maps, conducted once they were developed, consisted of educator and expert review. Subsequent empirical analyses of the structure of the DLM learning maps have also been conducted. Each of these evaluations is described in turn.

2.6.1 Educator and Expert Review

By 2014, the DLM learning maps had undergone three major external reviews by educators and experts. The first two review panels (K–5 and 6–12) leveraged the content expertise of general educators in ELA and mathematics, identified by state education agency (SEA) personnel from members of the DLM Governance Board, to examine the nodes and connections in the DLM learning maps by grade level. For each node, the external content reviewers considered

  1. the appropriateness of cognitive complexity,
  2. the relationship to the CCSS, and
  3. the properties of the node (e.g., grain size and redundancy).

The external content reviewers then reviewed individual origin-to-destination connections for appropriateness (e.g., is the connection from skill A to skill B logical?). If the external content reviewers identified a node or connection that they disagreed with, found illogical, or believed contained a gap, they stated the reason for their decision and attempted to provide supporting evidence. The external content reviewers then offered potential solutions for improving the problematic node or connection, such as adding a new or existing node between the two nodes in a connection. During these reviews of the DLM learning maps, the external content reviewers focused only on the typical progression of the average student in acquiring the grade-level learning targets. Following the K–5 and 6–12 external content reviews, the map developers revised the DLM learning maps to incorporate the reviewers’ feedback.

Additionally, as described in section 2.3.4 of this chapter, collaborators from CLDS at the University of North Carolina identified multiple sections in the DLM learning maps in which students with specific sensory, mobility, or communication disabilities might have difficulty demonstrating the node’s content. SEA personnel from the participating states identified accessibility experts across various disabilities to participate in an external accessibility review. These special educators and related service providers reviewed specific sections of the DLM learning maps to ensure that their content and structure were accessible to all students with the most significant cognitive disabilities, regardless of any sensory, mobility, or communication disabilities. The external accessibility reviewers also evaluated areas flagged by CLDS and recommended pathways to make them more accessible.

In some cases, implementing the principles of Universal Design for Learning (e.g., flexibility and equitability of use; Center for Applied Special Technology, 2018) could make a node’s content accessible by altering the manner of its assessment (i.e., allowing for multiple ways to demonstrate skills). When possible, the resulting nodes did not depend on information available through only one sense. In other cases, some students need to acquire unique cognitive skills to achieve a learning target (see the writing example in section 2.3.4). The external accessibility reviewers then identified additional pathways to support multiple means of instruction and assessment that were accessible to all students. In summary, the external accessibility reviewers proposed node and connection revisions to increase the accessibility of the DLM learning maps’ content and structure when considering assistive technology for students with specific sensory, mobility, or communication disabilities.

2.6.2 Empirical Analyses of the Learning Maps

Although items on the DLM assessments are written to measure specific nodes (for a description of item writing practices, see Chapter 3 of this manual), the linkage level is the unit of analysis for the diagnostic classification model (DCM) used to score the assessment and the unit of reporting for student score reports (see Chapter 5 and Chapter 7 of this manual for descriptions of the psychometric model used to score DLM assessments and of score reporting, respectively). Because of this test design, students typically complete 3–5 items for each assessed linkage level but may complete as few as 0 or 1 items for each node (i.e., not all nodes that comprise a linkage level are measured on every testlet). Therefore, a direct evaluation of the learning map structure is not possible with the available data, as the DCM cannot be identified at the node level (Fang et al., 2019; Xu, 2019; Xu & Zhang, 2016).

However, it is possible to evaluate the learning map structure by using the linkage levels as a proxy for the underlying maps. As described above, the linkage levels represent clusters of nodes associated with each EE’s learning target. The five linkage levels are hypothesized to follow a linear hierarchy (i.e., a more advanced linkage level cannot be mastered unless the student has also mastered the prerequisite levels). Thus, we can begin to evaluate the learning map structure by evaluating the linear ordering of the linkage levels.
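The linear hierarchy can be expressed concretely: of all possible mastery profiles over the five linkage levels, only the non-increasing ones are permissible. The sketch below is a hypothetical illustration, not DLM scoring code; it enumerates the permissible profiles.

```python
from itertools import product

# Linkage levels in order: Initial Precursor, Distal Precursor,
# Proximal Precursor, Target, Successor.
LEVELS = ["IP", "DP", "PP", "T", "S"]

def conforms(profile):
    """True if a 0/1 mastery profile (ordered IP..S) obeys the linear
    hierarchy: a level can be mastered only if all lower levels are."""
    return all(profile[i] >= profile[i + 1] for i in range(len(profile) - 1))

all_profiles = list(product([0, 1], repeat=len(LEVELS)))
allowed = [p for p in all_profiles if conforms(p)]

# 2^5 = 32 unconstrained profiles, but only 6 conform to the hierarchy
# (no mastery, IP only, IP+DP, IP+DP+PP, IP+DP+PP+T, all five).
print(len(all_profiles), len(allowed))  # 32 6
```

This is the sense in which a hierarchical model constrains the 32 unconstrained mastery profiles to only 6 permissible ones.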

Consistent with the scoring model for DLM assessments, the linkage level ordering is evaluated using DCMs. The framework for evaluating a learning map structure using DCMs is detailed by W. J. Thompson & Nash (2022). The framework consists of three methods that each evaluate a proposed structuring of attributes (i.e., linkage levels):

  1. Patterns of Mastery Profiles. The first method is the most direct test of an attribute structure. In this method, two DCMs are estimated: the log-linear cognitive diagnostic model (LCDM, Henson et al., 2009) and a hierarchical DCM (HDCM, Templin & Bradshaw, 2014). The LCDM allows students to have any profile of attribute (i.e., linkage level) mastery. In the HDCM, the model is constrained to allow only profiles of mastery that conform with the hierarchical structure (i.e., the five ordered DLM linkage levels). The two models can then be evaluated for model fit and compared directly using relative fit indices such as the widely applicable information criterion (WAIC, Watanabe, 2010) or Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO, Vehtari et al., 2017, 2022). EEs are flagged if the HDCM fits significantly worse than the LCDM, which indicates that the subset of mastery profiles included in the HDCM is insufficient for representing the range of mastery profiles demonstrated by students.

  2. Patterns of Attribute Mastery. In this method, each attribute is estimated and scored separately as a 1-attribute LCDM (i.e., master or nonmaster of each individual linkage level). The patterns of mastery are then compared to the patterns expected given the hypothesized structure (i.e., a student should not be able to master the Target level without also mastering the Proximal Precursor level). Any EE for which more than 25% of students exhibit an unexpected pattern of attribute mastery is flagged for possible violations of the hierarchical linkage level ordering.

  3. Patterns of Attribute Difficulty. In the final method, students are placed into cohorts based on their complexity band (see Chapter 4 for a description of how complexity bands are determined). We then calculate the percentage of items answered correctly (p-value) by each cohort for each linkage level. If the hierarchical structure is correct, the p-values for any one cohort should decrease as the linkage level increases (i.e., the items should get harder as the linkage level and underlying nodes become more complex). A standard error is calculated for each p-value, and a 95% confidence interval is created based on the standard error. An EE is flagged if the p-value for a higher linkage level is higher (i.e., the items are easier) than the p-value for a lower linkage level and the confidence intervals do not overlap; that is, the p-value is in the opposite direction of our expectation, beyond what might be expected from noise in the data.
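The difficulty-reversal check in the third method can be sketched as follows, using the normal-approximation standard error for a proportion. The counts are hypothetical, and the flagging rule is a simplified reading of the criterion described above, not the operational implementation.

```python
import math

def p_value_ci(correct, total, z=1.96):
    """Proportion correct (p-value) with a 95% CI from the
    normal-approximation standard error, sqrt(p(1-p)/n)."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p, p - z * se, p + z * se

def flag_reversal(lower, higher):
    """Flag if the higher linkage level is easier (larger p-value) than
    the lower level AND their confidence intervals do not overlap."""
    p_lo, _, lo_upper = lower
    p_hi, hi_lower, _ = higher
    return p_hi > p_lo and hi_lower > lo_upper

# Hypothetical counts: 60/100 correct at the lower linkage level,
# 80/100 correct at the higher level -> easier, CIs separated -> flagged.
lower = p_value_ci(60, 100)
higher = p_value_ci(80, 100)
print(flag_reversal(lower, higher))  # True
```

With smaller cohorts the standard errors widen, the intervals overlap more readily, and fewer reversals exceed what noise alone could explain.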

As with any analysis evaluating the statistical relationship between variables, the methods outlined in the W. J. Thompson & Nash (2022) framework require data on each variable from the same participant. For the purposes of evaluating linkage level ordering, this means students must be assessed on multiple linkage levels for the same EE. Under the current assessment administration design, there are limited opportunities for collecting these cross-linkage-level data. They can be collected only from students in Instructionally Embedded model states for whom educators choose to administer the same EE at multiple linkage levels, from students in Year-End model states who participate in the optional instructionally embedded assessment window, or through targeted field testing. For example, in 2017–2018 and 2018–2019, rather than administering new content during the spring field-test window, operational forms were administered at a linkage level adjacent to the level tested during the operational assessment, for the express purpose of collecting additional cross-linkage-level data to support an evaluation of the linkage level ordering. Although the targeted field test was limited to a single EE in each grade, the combined data from all sources have allowed for preliminary empirical evaluations of linkage level ordering.

Initial findings supporting the linkage level order are described in full by W. J. Thompson & Nash (2019). The patterns of mastery profiles method yielded few results because of the limited availability of cross-linkage-level data. In the patterns of attribute mastery method, over 80% of EEs received no flag, indicating that the patterns of attribute mastery were consistent with the hierarchical ordering of linkage levels. Similarly, the patterns of attribute difficulty method also showed over 80% of EEs supporting the hierarchical structure of linkage levels. For a complete description of results, see W. J. Thompson & Nash (2019). Notably, most flags from the second and third methods were due to potential reversals among the higher linkage levels. This is expected, as the skills measured by the higher linkage levels are conceptually closer than those measured by the lower linkage levels. For example, in Figure 2.7, there are gaps of untested nodes between the earlier linkage levels, whereas the later tested nodes are consecutive, making a reversal more likely to be observed. Although the initial findings show positive support for the linkage level ordering, additional research is planned to refine the methods and flagging criteria for identifying potential violations of the hierarchical structure. As the methods are further refined, flagged EEs will undergo qualitative reviews by the test development and learning maps teams to inform potential revisions to the map structure. Updated analyses will be included in future technical manual updates as additional cross-linkage-level analyses and reviews are conducted.

2.7 Development of Assessment Blueprints

The DLM assessment blueprints specify the EEs, organized by conceptual area, on which students are assessed in each grade and subject.

Blueprint development began with a proposed plan in October 2013. The test development teams in each subject area developed blueprint options following several guiding principles. State representatives and subject matter experts then reviewed multiple iterations of blueprints, as did the senior DLM staff and psychometricians, through September 2014. The governance board adopted finalized blueprints for the spring 2015 administration.

The assessment blueprints were subsequently revised in 2019–2020, with additional feedback from the governance board and the DLM Technical Advisory Committee (TAC). In the following sections, we describe the initial development of the assessment blueprints, followed by the revision process.

2.7.1 Original Blueprint Development Process

The DLM Governance Board identified three overarching needs when developing the original blueprints: the blueprints should have broad coverage of academic content, emphasize connections across grades, and limit administrative burden. The learning maps described above were used to prioritize EEs for inclusion in the blueprint in each subject. EEs were evaluated by determining the position of their aligned nodes within the maps. EEs selected for inclusion in the blueprint had the potential to maximize student progress in academic skills across grades. The general principles that guided the use of the DLM maps to develop the blueprints were to:

  • prioritize interrelated content to allow for opportunities to learn ELA and mathematics skills and conceptual understandings within and across grades,
  • use knowledge of academic content and instructional methods to prioritize content considered important by stakeholders,
  • maximize the breadth of content coverage of EEs within each grade and subject,
  • balance a need for representativeness across grades with the need to prioritize a narrower range of interconnected content to allow students the opportunity to demonstrate growth within and across grade levels, and
  • select an appropriate number of EEs in a grade to prevent excessive time for administration of an assessment to students with significant cognitive disabilities.

In both subjects, some EEs were not included on the blueprint. Reasons for excluding EEs from the blueprint included:

  • the EE would be very difficult to assess in a standardized, computer-based assessment,
  • the EE content relied on specific sensory information (e.g., ELA.EE.RL.3.7, “Use information gained from visual elements and words in the text to answer explicit who and what questions,” was excluded because it would likely pose a barrier to access for students with visual impairments; a different EE in the same grade describing a similar construct, ELA.EE.RL.3.1, “Answer who and what questions to demonstrate understanding of details in a text,” was included on the blueprint instead because it did not require specific attention to visual elements), and
  • the EE content was more aligned to instructional goals (e.g., demonstrating understanding of text while engaged in group reading of stories) than to an assessment.

These principles were applied when making decisions about the EEs that were included in the blueprint. For instance, in ELA, the decision was made, in consultation with the governance board, to only assess Claim 1 (reading) and Claim 2 (writing). It is important to recognize that these principles were not implemented as rigid rules, but as guidelines for prioritization of the content of the EEs within and across the grades.

Test development teams for ELA and mathematics produced initial blueprint drafts by conducting a substantive review of each EE in conjunction with the location of the EE within the learning maps. The processes for ELA and mathematics differed slightly given structural differences in the way the EEs were grouped thematically (strands in ELA and clusters in mathematics, groupings used in the CCSS and maintained in the EEs), but both adhered to these basic steps:

  1. Review the content of the EE and its relationship to the associated grade-level content standard.
  2. Review the location of the node(s) associated with the Target content of the EE in the maps.
  3. Review the location of the node(s) associated with the Proximal, Distal, and Initial Precursors for each EE.
  4. Review the location of the node(s) associated with the Successor for each EE.
  5. Examine the relative location in the maps of all linkage levels associated with the EE to the location of related EEs in the preceding grade.
  6. Examine the relative location in the maps of the contents of the EE to the location of related EEs in the following grade.
  7. Using the map locations, prioritize EEs that were most interconnected with EEs in the same grade level.
  8. Using the map locations, prioritize EEs that were most interconnected with EEs at the preceding and following grade levels.

Initial drafts of test blueprints were reviewed by the DLM Governance Board and TAC members in early 2014.

2.7.1.1 Original English Language Arts Blueprint

In the original ELA blueprint, after seeking input and consent from the governance board, content in the areas of Claim 1 (reading) and Claim 2 (writing) was prioritized for inclusion. In addition to a variety of reading testlets at each grade level, all students complete structured writing assessments in which test administrators engage students in a writing activity that addresses between one and six EEs in Claim 2. The EEs selected for the blueprint have:

  • a broad range of potential application in novel contexts,
  • the most connections to content at subsequent grade levels, and
  • content that is relevant to a conceptual pathway in ELA that has applications in multiple domains or contexts.

2.7.1.2 Original Mathematics Blueprint

As with the original ELA blueprint, the range of mathematics EEs available for assessment on the original blueprint was deliberately broad. In each grade, the original blueprint addressed all four claims and each conceptual area relevant to the grade. All but a few EEs were included in the blueprint; the only exclusions were EEs that are very difficult to represent in a computer-based assessment environment. In addition to implementing these general guidelines, the mathematics blueprint reflected additional attempts to streamline the assessment across the grades to:

  • avoid unnecessary redundancy in what is tested from year to year,
  • highlight concepts and skills that equip students with knowledge and skills for future mathematical learning during and beyond school, and
  • acknowledge mathematical learning trajectories that connect the EEs over the course of several grades.

2.7.2 Blueprint Revision

Discussions began in the summer of 2016 with the DLM Governance Board about the need to review and revise the ELA and mathematics blueprints. At the December 2016 governance meeting, the governance board agreed that a reduction in the number of EEs on the ELA and mathematics Year-End blueprints was necessary to support more fine-grained reporting in student score reports (i.e., providing the Learning Profile in individual student score reports; see Chapter 7 of this manual for a complete description of score reports). As part of this change, the governance board also agreed to change the test specifications to increase the number of items that assess a single EE, as existing testlets typically measured each EE with one or two items. Additionally, because testing times were lower than originally expected, the governance board agreed to increase the number of items administered per grade and subject. Thus, the governance board agreed to reduce the total number of EEs on the blueprint so that the number of items measuring each retained EE could be increased without overburdening educators and test administrators, allowing the Learning Profile to report fine-grained information. The revised blueprints were adopted for use beginning with the 2019–2020 academic year.

2.7.2.1 Blueprint Revision Process

The revision process began with a content review of the original blueprints and a comparison to the DLM Instructionally Embedded model blueprints, which required fewer EEs per conceptual area and claim.

Three principles guided the selection of EEs for the revised blueprints, based on the criteria used for developing the original blueprints in 2014. The goal(s) listed with each principle below describe outcomes that satisfy these guidelines.

Principle 1: The blueprint should have broad coverage of academic content as described by the EEs. The goals related to this principle are to provide appropriate breadth of content coverage of EEs within each grade and subject, select EEs for the blueprint that represent useful and valuable content for students, keep proportional coverage of claims and/or conceptual areas close to or identical to the original blueprint, and select a number of EEs in each grade that are more consistent with the number of EEs required in the Instructionally Embedded model blueprint.

Principle 2: The blueprint should emphasize connections in skills and understandings from grade to grade. The goals related to this principle are to select EEs in each grade that are conceptually related to EEs in later grades and use the learning map structure to inform grade-to-grade decisions to provide a connected, continuous delivery of content across all grades.

Principle 3: The revised blueprint should allow for a testlet design where each EE is assessed by 3–5 items, for a total of 35–45 items in each subject in each grade. The goal related to this principle is to reduce the number of EEs and increase the number of items per EE, supporting the delivery of fine-grained mastery information without exceeding the maximum allowable assessment length, as specified by the DLM Governance Board.
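Principle 3's item-count constraint can be illustrated with a short sketch. This is illustrative only and not part of the DLM specification; it simply enumerates the EE counts whose testlet totals can fall within the 35–45 item window when each EE contributes one 3–5 item testlet.

```python
# Illustrative sketch (not part of the DLM specification): which EE
# counts are compatible with the 35-45 item window in Principle 3,
# given that each EE is assessed by one testlet of 3-5 items?
ITEMS_PER_EE = range(3, 6)   # 3-5 items per EE
MIN_ITEMS, MAX_ITEMS = 35, 45

def feasible_ee_counts(min_items=MIN_ITEMS, max_items=MAX_ITEMS):
    """Return EE counts whose possible total item counts overlap the window."""
    counts = []
    for n_ees in range(1, 20):
        lo = n_ees * min(ITEMS_PER_EE)   # all testlets at 3 items
        hi = n_ees * max(ITEMS_PER_EE)   # all testlets at 5 items
        if lo <= max_items and hi >= min_items:
            counts.append(n_ees)
    return counts

print(feasible_ee_counts())  # EE counts compatible with the window
```

Under these assumptions, 7–15 EEs per grade and subject can satisfy the window, which is consistent with the 7–8 EE targets discussed later in this section once maximum test length is also considered.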

2.7.2.2 Overview of Blueprint Revision

This section provides a content overview of the revisions to the blueprints in ELA and mathematics for grades 3–8 and high school. The revised blueprints prioritize a set of EEs using a set of rationale categories to provide appropriate breadth and depth of content coverage in each discipline. The rationale categories used in the development of this version of the blueprints are:

Category 1: Include EEs that introduce or extend critical academic skills to form particular learning pathways in a topic/subject across grades. The EEs under Category 1 introduce an important academic skill, represent a crucial turning point in a topic/subject, combine multiple critical academic skills, or expand on critical academic skills acquired in a previous grade.

Category 2: Include EEs that maintain representative conceptual area or content coverage. The EEs under Category 2 maintain complete coverage of all conceptual areas; provide equivalent coverage, across grades, of similar academic skills in each conceptual area; are the initial or last EE on a topic/subject across grades; address unique skills; and have few critical linkage level skills shared with other EEs.

Category 3: Exclude EEs that have a high degree of similarity with another EE that will remain on the revised blueprints, within or across grades. The EEs under Category 3 provide preferential coverage of the same academic skills in only one conceptual area and do not significantly expand on academic skills acquired in one or more of the surrounding grades.

Category 4: Exclude EEs whose removal allows additional coverage of, or more learning opportunities for, critical academic skills.

Table 2.7 lists the number of EEs approved for the original and revised ELA blueprints, and Table 2.8 lists the number of EEs approved for the original and revised mathematics blueprints.

Table 2.7: Number of Essential Elements Approved for the Original and Revised English Language Arts Blueprint, by Conceptual Area
Original blueprint
Revised blueprint
Grade C1.1 C1.2 C1.3 C2.1 C2.2 Total C1.1 C1.2 C1.3 C2.1 C2.2 Total
  3 7   5 2 2 0 16 4 3 1 2 0 10
  4 7   6 1 3 0 17 3 4 1 3 0 11
  5 3   8 4 2 0 17 1 5 2 2 0 10
  6 1 10 3 2 0 16 1 6 1 3 0 11
  7 1   8 4 5 0 18 1 4 3 5 0 13
  8 0   9 3 5 0 17 0 6 2 5 0 13
  9 0   9 3 3 2 17 0 5 3 4 2 14
10 0   9 3 3 2 17 0 5 3 4 2 14
11 0   8 4 4 2 18 0 5 3 4 2 14
Table 2.8: Number of Essential Elements Approved for the Original and Revised Mathematics Blueprints, by Conceptual Area
Original blueprint
Revised blueprint
Grade C1.1 C1.2 C1.3 C2.1 C2.2 C3.1 C3.2 C4.1 C4.2 Total C1.1 C1.2 C1.3 C2.1 C2.2 C3.1 C3.2 C4.1 C4.2 Total
  3 3 1 0 1 2 1 2 1 11 2 1 0 1 1 1 1 1 8
  4 2 2 1 3 1 3 1 2 1 16 1 0 1 1 1 3 0 0 1 8
  5 2 3 2 2 1 3 1 1 15 1 2 1 2 1 0 1 0 8
  6 1 2 2 2 1 3 11 0 2 0 2 1 2 7
  7 2 1 3 3 1 2 1 1 14 1 0 3 1 1 0 1 0 7
  8 1 1 2 4 1 1 1 3 14 1 0 1 1 1 1 1 2 8
  9 3 2 1 0 0 2 0   8 3 2 0 0 0 2 0 7
10 1 1 0 1 2 2 2   9 1 1 0 1 2 1 2 8
11 2 1 0 0 1 0 5   9 2 1 0 0 1 0 2 6

EEs were included in or excluded from the revised blueprints based on the four categories listed above. In both subjects, test development teams determined which EEs to exclude from the revised blueprint while maintaining the three guiding principles. Some EEs were excluded because they had a high degree of node overlap, or similar skills, with other EEs included in the blueprint. Others were excluded in favor of EEs representing skills that received less coverage in the blueprint, or to allow additional coverage of the critical foundational nodes associated with other EEs included in the blueprint. Finally, some EEs were excluded because they did not significantly extend the skills represented in the standards in the surrounding grades.

Table 2.9 shows how many EEs from the original blueprints in ELA and mathematics were included in or excluded from the revised blueprints, by category. The higher number of exclusions for the mathematics blueprints is due to differences in how writing EEs are assessed in ELA. In each grade and subject, the maximum test length was approximately 40 items. The DLM assessments are administered in testlets, and, with the exception of writing, each testlet measures one EE with 3–5 items (see Chapter 3 of this manual for a complete description of testlets and assessment content). Thus, in general, the blueprints contain 7–8 EEs, which allows for up to 40 items. As previously noted, writing is an exception: all writing EEs in a grade are assessed on a single testlet (see Chapter 3 of this manual for a complete description of writing testlets). It is therefore possible to include more EEs on the ELA blueprint while staying under the desired test length of 40 items, as can be seen in Table 2.7, where the number of EEs is consistently higher than the 7–8 EEs included on each mathematics blueprint (Table 2.8).

Table 2.9: Number of Essential Elements (EEs) Included or Excluded on the Revised Blueprint, by Categories
Category English language arts Mathematics
1 (Included EEs of critical skills) 40 11
2 (Included EEs to maintain coverage) 54 56
3 (Excluded similar EEs) 35 20
4 (Excluded EEs that allow for additional coverage)   6 20
Note. Categories 1 and 2 for English language arts (ELA) do not add up to the sum of the ELA EEs in Table 2.7 because Grades 9 and 10 share the same EEs and two EEs were added that were not on the original blueprint.

2.7.2.3 Breadth and Depth of Node Coverage

As described in section 2.1 and section 2.4 of this chapter, EEs can be represented as mini-maps, where each linkage level comprises one or more nodes that represent a critical juncture of knowledge or skills on the pathway toward the grade-level learning target(s). This section describes how the revisions to the assessment blueprints maintain proportional coverage of nodes from the DLM maps compared to the original blueprints. The revision provided consistent breadth and depth of node coverage for the grade-level EEs within each subject.

Table 2.10 and Table 2.11 present the node coverage for the revised blueprints compared to the original blueprints. Test development team revisions retained coverage of between 57% and 79% of nodes in ELA and between 50% and 99% of nodes in mathematics outright. Of the nodes that were not retained, between 8 and 21 per grade in ELA and between 0 and 55 per grade in mathematics were covered in other grades. The revised blueprints reduced overall node coverage by only 2–8 nodes per grade in ELA and 0–17 nodes per grade in mathematics.

Table 2.10: Node Overlap Between Original and Revised English Language Arts Blueprints and Coverage of Nodes Not Included in the Revised Grade-Level Blueprint
Grade Nodes in original blueprint (n) Nodes in revised blueprint (n) Nodes retained (%) Uncovered nodes in grade level (n) Nodes covered in other grades (n) Uncovered nodes (n) Uncovered nodes (%)
3 54 38 70.4 16 10 6 11.1
4 53 36 67.9 17   9 8 15.1
5 61 35 57.4 26 21 5   8.2
6 64 40 62.5 25 17 8 12.5
7 61 40 65.6 21 17 4   6.6
8 53 40 75.5 13 11 2   3.8
9–10 62 42 67.7 20 16 4   6.5
11–12 53 42 79.2 11   8 3   5.7
Note. English language arts Essential Elements for high school are organized into two grade bands (9–10 and 11–12) instead of individual grades.
Table 2.11: Node Overlap Between Original and Revised Mathematics Blueprints and Coverage of Nodes Not Included in the Revised Grade-Level Blueprint
Grade Nodes in original blueprint (n) Nodes in revised blueprint (n) Nodes retained (%) Uncovered nodes in grade level (n) Nodes covered in other grades (n) Uncovered nodes (n) Uncovered nodes (%)
  3   99 83 83.8 16 16   0   0.0
  4 144 91 63.2 53 47   6   4.2
  5 143 71 49.7 72 55 17 11.9
  6   98 66 67.3 32 29   3   3.1
  7 133 76 57.1 57 50   7   5.3
  8 115 71 61.7 44 27 17 14.8
  9   92 85 92.4   7   5   2   2.2
10   72 71 98.6   1   0   1   1.4
11   76 61 80.3 15   4 11 14.5
Note. Because the mathematics Essential Elements for high school are not divided into grades, the high school Essential Elements are organized into three grade-level integrated mathematics courses: Math 9, Math 10, and Math 11.
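The column relationships in Table 2.10 and Table 2.11 can be checked with a short script; a minimal sketch using the grade 3 ELA row of Table 2.10 as an example (values taken directly from the table):

```python
# Check the internal consistency of one node-coverage row
# (grade 3 ELA, Table 2.10). Column names follow the table headers.
original = 54          # nodes in original blueprint
revised = 38           # nodes in revised blueprint
covered_elsewhere = 10 # nodes covered in other grades
uncovered = 6          # nodes no longer covered anywhere

retained_pct = round(100 * revised / original, 1)
assert retained_pct == 70.4

# Nodes dropped from the grade-level blueprint split into those
# covered in other grades and those left uncovered.
assert original - revised == covered_elsewhere + uncovered  # 16 = 10 + 6

uncovered_pct = round(100 * uncovered / original, 1)
assert uncovered_pct == 11.1
```

The same three relationships (retained percentage, the split of dropped nodes, and the uncovered percentage) hold for every row of both tables.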

In summary, the revised blueprints provide a connected, continuous delivery of content across all grades. Furthermore, the revisions to the assessment blueprints allowed for an increase in the number of items measuring each EE while simultaneously collecting the finer-grained student mastery information necessary to create informative and useful student reports. The complete blueprints for ELA and mathematics are available on the DLM website and are also included in Appendix B.1 of this manual.

2.8 Alignment

ACERI Partners, LLC, conducted an external alignment study of the DLM operational assessment system (Flowers & Wakeman, 2016b) to investigate the relationships between the content structures in the DLM System. A modification of the Links for Academic Learning alignment methodology (Flowers et al., 2009) was used to evaluate the coherence of the DLM System. The alignment study focused on the following relationships (as illustrated by the corresponding numbers in Figure 2.12):

  1. College and career readiness content standards and EEs
  2. An EE and its target-level node(s)
  3. The vertical articulation of the linkage levels associated with an EE
  4. DLM learning map nodes within a linkage level and assessment items

For a complete description of the original alignment study, see Flowers & Wakeman (2016a). Following the revision to the assessment blueprint (see section 2.7.2 above), DLM staff reanalyzed the data collected from the original study to reflect the changes to the blueprint EEs and the items administered.

This section describes the results of the first three relationships, reflecting the updated alignment study based on the revised assessment blueprints. For a discussion of the alignment of assessment items, see Chapter 3 of this manual.

In ELA, a total of 79 testlets (25% of the pool), randomly sampled for the original alignment report, and 304 items (20%) were examined for alignment. In mathematics, 70 testlets (29%) and 192 items (21%) were evaluated.

Content and performance centrality were the primary measures of alignment. Content centrality is a measure of the degree of fidelity between the item/element and the content of the academic grade-level target. Specifically, it measures the degree of fidelity between the college and career readiness standard and the EE or between the EE and the target-level node(s). Panelists rated each pair as having no link, a far link, or a near link.

Performance centrality represents the degree to which the operational assessment item and the corresponding academic grade-level content target contain the same performance expectation. Specifically, performance centrality measures the degree to which the performance expectation matches between the college and career readiness standard and the EE or between the EE and the target-level node(s). Panelists rated the degree of performance centrality between each pair as none, some, or all.

If panelists identified a relationship that did not meet criteria for alignment (e.g., no link for content centrality), they provided additional feedback. When evaluating items, panelists used the DLM cognitive process dimension taxonomy to identify the category for the highest cognitive dimension required of the student when responding to the item.

The following sections provide a brief summary of findings from the external alignment study. Full results are provided in the updated technical report (Flowers & Wakeman, 2020).

2.8.1 Alignment of College and Career Readiness Standards and Essential Elements

All EEs identified in the assessment blueprints were included in these analyses. Table 2.12 and Table 2.13 display the results of content centrality and performance centrality ratings, respectively. For content centrality, EEs were defined as “Met” if they were given a “Far” or “Near” rating. For performance centrality, EEs were defined as “Met” if they were given a “Some” or “All” rating. The level of acceptable content centrality for alternate assessments is 80% (Flowers et al., 2009). No recommended acceptable level of performance centrality is provided for alternate assessments. In ELA, 82% of EEs were rated as maintaining fidelity to the content and the grade-level college and career readiness standards, and 91% of EEs were rated as having the same performance expectation. Similarly, in mathematics, 80% of EEs were rated as maintaining fidelity, and 74% were rated as having the same performance expectation. This is an acceptable level of alignment given the rigor of grade-level standards and the need to provide access for all students with the most significant cognitive disabilities. EEs that were rated “No” for maintaining fidelity or “None” for having the same performance expectation were forwarded to the test development teams for further review.
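The “Met” columns can be reproduced directly from the rating counts. A minimal sketch using the content centrality counts reported in Table 2.12:

```python
# Reproduce the "Met" column of Table 2.12 (content centrality) from
# the rating counts. An EE is "Met" if rated Far or Near.
rows = {
    "English language arts": {"total": 96, "no": 17, "far": 69, "near": 10},
    "Mathematics": {"total": 100, "no": 20, "far": 73, "near": 7},
}

for subject, r in rows.items():
    # Rating counts must account for every EE in the subject.
    assert r["no"] + r["far"] + r["near"] == r["total"]
    met = r["far"] + r["near"]
    pct = round(100 * met / r["total"])
    print(subject, met, pct)  # e.g., ELA: 79 EEs met, 82%
```

The same count-to-percentage logic applies to the performance centrality ratings in Table 2.13, with “Some” and “All” counting as “Met.”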

Table 2.12: Content Centrality of College and Career Readiness Standards to Essential Elements
Subject Total N No (%) Far (%) Near (%) Met (%)
English language arts   96 17 (18) 69 (72) 10 (10) 79 (82)
Mathematics 100 20 (20) 73 (73)   7   (7) 80 (80)
Note. Gray shading indicates acceptable level of alignment.
Table 2.13: Performance Centrality of College and Career Readiness Standards to Essential Elements
Subject Total N None (%) Some (%) All (%) Met (%)
English language arts   96   9   (9) 75 (78) 12 (13) 87 (91)
Mathematics 100 26 (26) 57 (57) 17 (17) 74 (74)
Note. Gray shading indicates acceptable level of alignment.

2.8.2 Alignment of Essential Element and Target-Level Nodes

Table 2.14 and Table 2.15 display the content and performance centrality of the alignment of EEs to target-level node(s), which are the node(s) that reflect the grade-level expectation in the EE. The number of EEs in Table 2.14 and Table 2.15 differs from the number in Table 2.12 and Table 2.13 because some EEs correspond to more than one target-level node. All EEs were rated as aligned to the target-level nodes, with most EEs rated as near the target-level node. Similar results were found for performance centrality: all EEs were rated as meeting some or all of the performance expectations found in the target-level node. These findings suggest a strong alignment between EEs and target-level nodes.

Table 2.14: Content Centrality of Essential Elements to Target-Level Nodes
Subject Total N No (%) Far (%) Near (%) Met (%)
English language arts   96 0 (0)   4   (4)   92 (96)   96 (100)
Mathematics 145 0 (0) 28 (19) 117 (81) 145 (100)
Note. Gray shading indicates acceptable level of alignment.
Table 2.15: Performance Centrality of Essential Elements to Target-Level Nodes
Subject Total N None (%) Some (%) All (%) Met (%)
English language arts   96 0 (0) 12 (13)   84 (88)   96 (100)
Mathematics 145 0 (0) 32 (22) 113 (78) 145 (100)
Note. Gray shading indicates acceptable level of alignment.

2.8.3 Vertical Articulation of Linkage Levels for Each Essential Element

Panelists evaluated linkage levels to determine whether they reflected a progression of knowledge, skills, and understandings. Table 2.16 shows the results of the vertical articulation of the linkage levels for the EEs at each grade level for ELA and mathematics. For ELA, panelists reviewed 95 linkage levels, and 76 (80%) were rated as showing a clear progression from precursor to successor nodes. The low rating for seventh grade was due to panelists reporting that the Initial Precursor was not clearly part of the progression in the ordered nodes. For mathematics, 66 linkage levels were reviewed, and 64 (97%) were rated as demonstrating a clear progression in the ordered nodes. EEs rated as not reflecting a clear progression were forwarded to the test development teams for further review.

Table 2.16: Vertical Articulation of Linkage Levels for Essential Elements
Grade Total N Clear progression (%)
English language arts
3 10   9   (90)
4 11   9   (82)
5 10   8   (80)
6 11   9   (82)
7 13   7   (54)
8 13 10   (77)
9–10 14 12   (86)
11–12 13 12   (92)
Total 95 76   (80)
Mathematics
3   8   8 (100)
4   8   7   (88)
5   8   8 (100)
6   7   6   (86)
7   6   6 (100)
8   8   8 (100)
9   7   7 (100)
10   8   8 (100)
11   6   6 (100)
Total 66 64   (97)

2.9 Learning Maps for the Operational Assessment

Table 2.17 includes the overall statistics describing the DLM learning maps as implemented for the operational assessment. This version of the set of ELA, mathematics, and foundational DLM learning maps is the basis for the operational assessments. Foundational nodes support both ELA and mathematics maps.

Table 2.17: Number of Nodes and Connections in the Learning Maps
Node category Number of nodes Number of connections
Foundational    150      277
English language arts 1,919   5,045
Mathematics 2,399   5,200
Total 4,468 10,522

2.10 Conclusion

The DLM assessments are built on learning maps that describe pathways for students to acquire knowledge, skills, and understandings. Different areas of the map are associated with conceptual areas and EEs, which are the learning targets for the DLM assessments. To ensure all students have access to grade-level academic content, each EE is available at five linkage levels, which are small collections of nodes that represent critical junctures on the path toward and beyond the learning target. Both the learning maps and EEs were developed by subject-matter experts synthesizing the research literature with stakeholder feedback. Procedural evidence and preliminary empirical evidence both support the structure of the DLM learning maps. Finally, the external alignment study provides evidence of the connections between DLM System components and the college and career readiness standards, via EEs, learning map nodes, and linkage levels.