4 Assessment Delivery

Chapter 4 presents the processes and procedures used to deliver the Dynamic Learning Maps® (DLM®) Alternate Assessment System in 2021–2022. As described in earlier chapters, the DLM System uses adaptive computer-delivered alternate assessments that provide the opportunity for students with the most significant cognitive disabilities to show what they know and can do in English language arts (ELA) and mathematics. DLM assessments are administered in small groups of items called testlets. The DLM assessment system incorporates accessibility by design and is guided by the core beliefs that all students should have access to challenging, grade-level content and that educators adhere to the highest levels of integrity in providing instruction and administering assessments based on this challenging content.

This chapter begins with an overview of the general features of assessment administration, including the Kite® Suite used to assign and deliver assessments, testlet formats (computer-delivered and educator-administered), and accessibility features. Next, we describe the key features of the Year-End assessment model. We explain how a student’s First Contact survey is used to assign the first testlet in each subject and the adaptive routing algorithm that is used to assign subsequent testlets. We also describe administration resources and materials available to test administrators and district users, followed by test administrator responsibilities and procedures and test security. We then provide evidence from the DLM System, including administration time, device usage, linkage level selection, evaluation of blueprint coverage, and accessibility support selections. We also present evidence from assessment administration monitoring, including test administration observations, formative monitoring, and data forensics reports. Finally, we present evidence from test administrators, including user experience with the DLM System, students’ opportunity to learn, ratings of items on the First Contact survey, and educator cognitive labs.

4.1 Overview of General Administration Features

Based on students’ support needs, DLM assessments are designed to be administered in a one-on-one, student/test administrator format. Most test administrators are the special education educators of the students, as they are best equipped to provide the most conducive conditions to elicit valid and reliable results. Assessment administration processes and procedures also reflect the priorities of fairness and validity through a broad array of accessibility tools and features that are designed to provide access to assessment content and materials as well as limit construct-irrelevant variance.

This section describes the key, overarching features of DLM assessment administration, including the online testing platform, the Kite Suite, the two assessment delivery modes, and accessibility features.

4.1.1 The Kite Suite

The DLM alternate assessments are managed and delivered using the Kite Suite, which was designed and developed to meet the needs of the next generation of large-scale assessments for students with significant cognitive disabilities. Educators and students use two applications: Kite Educator Portal and Kite Student Portal. The Kite Suite was developed with IMS Global Question and Test Interoperability item structures and Accessible Portable Item Protocol tagging on assessment content to support students’ Personal Needs and Preferences (PNP) Profiles (see the Accessibility section below) and World Wide Web Consortium Web Content Accessibility Guidelines. Supported devices and operating systems for Kite Student Portal and supported browsers for Kite Educator Portal are published on the DLM website and in the Technology Specifications Manual (Dynamic Learning Maps Consortium, 2022b) linked on each state’s DLM webpage.

4.1.1.1 Kite Educator Portal

Kite Educator Portal is the administrative application where district staff and educators manage student data, assign optional instructionally embedded assessments, access resources needed for each assigned testlet, and retrieve reports.

  • Assessment administrators, who are usually educators, use Kite Educator Portal to manage all student data. They are responsible for checking class rosters of the students who are assigned to take DLM testlets and for completing the PNP and First Contact surveys for each student (see the Accessibility and Linkage Level sections below for more information on the PNP and the First Contact survey, respectively).
  • Essential Elements (EEs) are administered in a pre-determined, fixed sequence. The linkage level for the first testlet is determined by responses to the First Contact survey, and subsequent testlets are determined by an adaptive routing algorithm. After the EE and linkage level are assigned, the test administrator retrieves information to support instruction on the associated nodes. See section 4.2 on key administration features of the Year-End model for more information on testlet assignment.
  • After each testlet is assigned to a student, the system delivers a Testlet Information Page (TIP) through Kite Educator Portal. The TIP, which is unique to the assigned testlet, is a PDF that contains any instructions necessary to prepare for testlet administration. See section 4.3.1.2.1 of this chapter for more information.
  • During optional instructionally embedded assessments, the Instruction and Assessment Planner displays information about student mastery for assessed EEs and linkage levels. Educators can also download or print reports on demand, including the student’s history of instructional plans created in the Instruction and Assessment Planner as well as a report that shows the EEs and linkage levels for which the student has completed a testlet or a testlet assignment is pending.

4.1.1.2 Kite Student Portal

Kite Student Portal is the platform that allows students to log in and complete assigned testlets. Practice activities and released testlets are also available to students and test administrators through Kite Student Portal (see Chapter 3 of this manual for more information). Kite Student Portal prevents students from accessing unauthorized content or software while taking assessments. Kite Student Portal is supported on devices running Windows or macOS (OSX), on Chromebooks, and on iPads.

Kite Student Portal provides students with a simple, web-based interface with student-friendly and intuitive graphics. The student interface used to administer the DLM assessments was designed specifically for students with the most significant cognitive disabilities. It maximizes space available to display content, decreases space devoted to tool-activation buttons (e.g., read aloud), and minimizes the cognitive load related to test navigation and response entry. An example of a screen used in an ELA testlet is shown in Figure 4.1. The blue BACK and green NEXT buttons are used to navigate between screens. The octagonal EXIT DOES NOT SAVE button allows the user to exit the testlet without recording any responses. The READ button plays an audio file of synthetic speech for the content on screen. Synthetic read aloud is the only accessibility feature with a tool directly enabled through each screen in the testlet. Further information regarding accessibility is provided in section 4.1.3 of this chapter.

Figure 4.1: An Example Screen From the Student Interface in Kite Student Portal

Kite Student Portal showing an introductory screen for an item with a back, next, and exit button.

4.1.1.3 Local Caching Server

During DLM assessment administration, schools with unreliable network connections have the option to use the Local Caching Server (LCS). The LCS is a specially configured machine that resides on the local network and communicates between the testing machines at the testing location and the main testing servers for the DLM System. The LCS stores testing data from Kite Student Portal in an internal database; if the upstream network connection becomes unreliable or variable during testing, students can still continue testing, and their responses are transmitted to the Kite servers as bandwidth allows. The LCS submits and receives data to and from the DLM servers while the students are taking tests. The LCS must be connected to the internet between testlets to deliver the next testlet correctly.
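
The LCS’s store-and-forward behavior can be summarized with a brief sketch. This is an illustrative sketch only, not the actual LCS implementation; the ResponseCache class, its methods, and the upstream interface are assumed names used to show the idea of buffering responses locally and forwarding them when the connection allows.

```python
import queue
import time

class ResponseCache:
    """Illustrative store-and-forward buffer, loosely modeled on the LCS concept."""

    def __init__(self, upstream):
        self.upstream = upstream       # assumed object with .is_reachable() and .send(response)
        self.pending = queue.Queue()   # locally stored responses awaiting upload

    def submit(self, response):
        # Store the response locally first so nothing is lost if the network drops.
        self.pending.put(response)

    def flush(self, retry_delay=5.0):
        # Forward buffered responses whenever the upstream connection is reachable.
        while not self.pending.empty():
            if not self.upstream.is_reachable():
                time.sleep(retry_delay)    # wait for bandwidth to return, then retry
                continue
            self.upstream.send(self.pending.get())
```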

4.1.2 Assessment Delivery Modes

The DLM System includes testlets designed to be delivered via computer directly to the student and testlets designed for the test administrator to administer outside the system and record responses in the system. The majority of testlets were developed for the computer-delivered mode because evidence suggested the majority of students with the most significant cognitive disabilities are able to interact directly with the computer or are able to access the content of the assessment on the computer with navigation assistance from a test administrator (Nash et al., 2016). Educator-administered testlets include all testlets at the Initial Precursor linkage level, some higher linkage level mathematics testlets requiring manipulatives, some alternate forms for students who are blind or who have visual impairments, and all writing testlets. A brief overview of the two types of testlets is included in the following sections. See Chapter 3 of this manual for a complete description of DLM testlets.

4.1.2.1 Computer-Delivered Assessments

Most DLM alternate assessments are delivered directly to students by computer through the Kite Suite. Computer-delivered assessments were designed so students can interact independently with the computer, using special assistive technology devices such as alternate keyboards, touch screens, or switches as necessary.

The computer-delivered testlets include a variety of item types: single-select multiple choice with three text or image response options, multiple-select multiple choice with text or image response options, matching items from two lists, sorting objects into categories, and highlighting selected text.

4.1.2.2 Educator-Administered Assessments

Some testlets were designed to be administered directly by the test administrator outside the Kite Suite. The Kite Suite delivers the testlet, but the test administrator is responsible for setting up the assessment, delivering it to the student, and recording student responses in Kite.

There are three general categories of educator-administered testlets.

  1. Testlets with content designed for students who are developing symbolic understanding or who may not yet demonstrate symbolic understanding (Initial Precursor and some Distal Precursor).
  2. Some mathematics testlets at higher linkage levels for which representing the content online would make the task too abstract and introduce unnecessary complexity to the item. Manipulatives are often used in this case, especially for students with blindness or visual impairment.
  3. All writing assessments.

All three types of educator-administered testlets have some common features, which are described in Chapter 3 of this manual.

4.1.3 Accessibility

The DLM System was designed to be optimally accessible to diverse learners through accessible content (see Chapter 3 of this manual) as well as through initialization and routing driven by the First Contact survey and prior performance (see section 4.2 of this chapter for details). The interface in the Kite Suite was also designed to be easy to use to support accessibility. Consistent with the DLM learning map and item and test development practices described in earlier chapters (see Chapter 2 and Chapter 3, respectively), principles of universal design for assessment were applied to administration procedures and platforms. Decisions were largely guided by universal design for assessment principles of flexibility of use and equitability of use through multiple means of engagement, multiple means of representation, and multiple means of action and expression.

In addition to these considerations, a variety of accessibility supports are made available in the DLM assessment system. The Accessibility Manual (Dynamic Learning Maps Consortium, 2021a) outlines a six-step process for test administrators and Individualized Education Program (IEP) teams to use in making decisions about accessibility supports. This process begins with confirming the student meets the DLM participation guidelines and continues with the selection, administration, and evaluation of the effectiveness of accessibility supports. Test administrators select supports for each student in the PNP. The PNP can be completed any time before beginning testing. It can also be changed during testing as a student’s needs change. Once updated, the changes appear the next time the student is logged in to the Kite Suite. All test administrators are trained in the use and management of these features. See Chapter 9 for a complete description of test administrator training.

4.1.3.1 Overview of Accessibility Supports

Accessibility supports considered appropriate to use during administration of computer-delivered and educator-administered testlets are listed in the Accessibility Manual (Dynamic Learning Maps Consortium, 2021a). A brief description of the supports is provided here (see the Accessibility Manual for a full description of each support and its appropriate use). Supports are grouped into three categories: those provided through the PNP, those requiring additional tools or materials, and those provided outside the system. Additional techniques that are traditionally thought of as accommodations are considered allowable practices in the DLM assessment system. These are described in a separate section below.

4.1.3.1.1 Category 1: Supports Provided Within the DLM System via the PNP

Online supports include magnification, invert color choice, color contrast, and overlay color. Educators can test these options in advance to make sure they are compatible and provide the best access for students. Test administrators can adjust the PNP-driven accessibility supports during the assessment, and the selected options are then available the next time the student logs in to Kite Student Portal. A sketch of how these settings might be represented appears after the list below.

  • Magnification. Magnification allows educators to choose the amount of screen magnification during testing.
  • Invert color choice. In invert color choice, the background is black and the font is white.
  • Color contrast. The color contrast allows educators to choose from several background and lettering color schemes.
  • Overlay color. The overlay color is the background color of the test.
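
As an illustration of how these display supports might be captured as a per-student configuration, the sketch below represents them in a simple settings record. The field names, types, and defaults are assumptions for illustration and are not the actual PNP schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplaySettings:
    """Hypothetical per-student display settings (not the actual PNP schema)."""
    magnification: float = 1.0            # e.g., 2.0 doubles the on-screen size
    invert_colors: bool = False           # white font on a black background
    color_contrast: Optional[str] = None  # a named background/lettering scheme
    overlay_color: Optional[str] = None   # background color applied to the test

# Example: a student who uses 2x magnification with a colored overlay.
student_settings = DisplaySettings(magnification=2.0, overlay_color="blue")
```
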
4.1.3.1.2 Category 2: Supports Requiring Additional Tools or Materials

These supports include braille, switch system preferences, iPad administration, and use of special equipment and materials. These supports are all recorded in the PNP even though the single-switch system is the only option actually activated by the PNP.

  • Uncontracted braille. Uncontracted braille testlets are available during the testing window for grades 3–5 at the Target and Successor levels and for grades 6 through high school at the Proximal Precursor, Target, and Successor levels. The standard delivery method is to deliver braille-ready files electronically to the school or district for local embossing as each testlet is assigned. The Kite Suite also delivers the identical general testlet form. After the student takes the testlet in its embossed form, the test administrator transfers the student’s answers into Kite Student Portal.
  • Single-switch system. Single-switch scanning is activated using a switch set up to emulate the Enter key on the keyboard. Scan speed, cycles, and initial delay may be configured (see the sketch following this list).
  • Two-switch system. Two-switch scanning does not require any activation in the PNP. Kite Student Portal automatically supports two-switch step scanning.
  • Administration via iPad. Students may take the assessment via iPad.
  • Adaptive equipment used by student. Test administrators may use any familiar adaptive equipment needed for the student.
  • Individualized manipulatives. Individualized manipulatives are suggested for use with students rather than requiring educators to have a standard materials kit. Recommended materials and rules governing materials selection or substitution are described in the TIP (see section 4.3.1.2.1 of this chapter for more information on TIPs). Having a familiar concrete representation ensures that students are not disadvantaged by objects that are unfamiliar or that present a barrier to accessing the content.
  • BVI forms. Alternate forms for students who are blind or have visual impairments (BVI) but do not read braille were developed for certain EEs and linkage levels. BVI testlets are educator-administered, requiring the test administrator to engage in an activity outside the system and enter responses into Kite Student Portal. The general procedures for administering these forms are the same as with other educator-administered testlets. Additional instructions include the use of several other supports (e.g., human read aloud, test administrator response entry, individualized manipulatives) as needed. When onscreen materials are being read aloud, test administrators are instructed to (1) present objects to the student to represent images shown on the screen, and (2) change the object language in the testlet to match the objects being used. Objects are used instead of tactile graphics, which are too abstract for the majority of students with the most significant cognitive disabilities who are also blind. However, test administrators have the option to use tactile graphics if their student can use them fluently.
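
The switch-scanning options above (scan speed, number of cycles, and initial delay) can be illustrated with a short sketch of an auto-scanning loop. This is a simplified illustration of how single-switch auto-scanning generally works, not Kite Student Portal code; the function name, parameters, and switch interface are assumptions.

```python
import time

def auto_scan(options, scan_speed=2.0, cycles=3, initial_delay=1.0, is_switch_pressed=lambda: False):
    """Illustrative single-switch auto-scan.

    Each option is highlighted in turn; pressing the switch (which emulates the
    Enter key) selects the currently highlighted option. scan_speed is how long,
    in seconds, each option stays highlighted; cycles is how many passes are
    made; initial_delay is the pause before scanning begins.
    """
    time.sleep(initial_delay)
    for _ in range(cycles):
        for option in options:
            print(f"Highlighting: {option}")   # in practice, a visual highlight
            time.sleep(scan_speed)
            if is_switch_pressed():
                return option                  # switch press selects this option
    return None                                # no selection after all cycles
```
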
4.1.3.1.3 Category 3: Supports Provided Outside the DLM System

These supports require actions by the test administrator, such as reading the test, signing or translating, and assisting the student with entering responses.

  • Human read aloud. The test administrator may read the assessment to the student. Test administrators are trained to follow guidance to ensure fidelity in the delivery of the assessment. This guidance includes using a typical tone and rate of speech, as well as avoiding emphasizing the correct response or important information that would lead the student to the correct response. Test administrators are trained to avoid facial expressions and body language that may cue the correct response and to use exactly the words on screen, with limited exceptions to this guideline, such as the use of shared reading strategies on the first read in ELA testlets. Finally, guidance includes ensuring that answer choices are always read in the same order as presented on the screen, with comprehensive examples of all item types. For example, when answer choices are in a triangle order, they are read in the order of top center, bottom left, and bottom right. In most cases, test administrators are allowed to describe graphics or images for students who need them described. Typically, this additional support is provided to students who are blind or have visual impairments. Alternate text for graphics and images in each testlet is included in the TIP as an attachment after the main TIP information. Test administrators who need to read alternate text have the Kite Suite open and the TIP in front of them while testing so they can accurately read the alternate text provided on the TIP with the corresponding screen. Human read aloud is allowed in either subject. The reading EEs included in the blueprints focus on comprehension of narratives and informational texts, not decoding. The read aloud support is available to any student who can benefit from decoding support in order to demonstrate the comprehension skills in the tested EEs.
  • Sign interpretation of text. If the student requires sign language to understand the text, items, or instructions, the test administrator is allowed to use the words and images on the screen as a guide while signing for the student using American Sign Language, Signed Exact English, or any individualized signs familiar to the student. The test administrator is also allowed to spell unfamiliar words when the student does not know a sign for that word and accept responses in the student’s sign language system. Sign is not provided via human or avatar video because of the unique sign systems used by students with the most significant cognitive disabilities who are also deaf/hard of hearing.
  • Language translation of text. The DLM assessment system does not provide translated forms of testlets because of the unique cognitive and communication challenges for students taking DLM alternate assessments and because students who are English learners speak such a wide variety of languages; providing translated forms appropriate for all DLM-eligible students to cover the entire blueprint would be nearly impossible. Instead, test administrators are supplied with instructions regarding supports they can provide based on (1) each student’s unique combination of language-related and disability-related needs, and (2) the specific construct measured by a particular testlet. For students who are English learners or who respond best to a language other than English, test administrators are allowed to translate the text for the student. The TIP includes information about exceptions to the general rule of allowable translation. For example, when an item assesses knowledge of vocabulary, the TIP includes a note that the test administrator may not define terms for the student on that testlet. Unless exceptions are noted, test administrators are allowed to translate the text for the student, simplify test instructions, translate words on demand, provide synonyms or definitions, and accept responses in either English or the student’s native language.
  • Test administrator enters responses for student. During computer-delivered assessments, if students are unable to physically select their answer choices themselves due to a gap between their accessibility needs/supports and the Kite Suite, they are allowed to indicate their selected responses to the test administrator through their typical communication modes (e.g., eye gaze, verbal). The test administrator then enters the response. The Test Administration Manual provides guidance on the appropriate use of this support to avoid prompting or misadministration. For example, the test administrator is instructed not to change tone, inflection, or body language to cue the desired response or to repeat certain response options after an answer is provided. The test administrator is also instructed to ensure the student continues to interact with the content on the screen.
  • Partner-assisted scanning. Partner-assisted scanning is a commonly used strategy for students who do not have access to or familiarity with an augmentative and alternative communication device or other communication system. These students do not have verbal expressive communication and are limited to response modes that allow them to indicate selections using responses such as eye gaze. In partner-assisted scanning, the communication partner (the test administrator in this case) “scans” or lists the choices that are available to the student, presenting them in a visual, auditory, tactual, or combined format. For test items, the test administrator might read the stem of an item to the student and then read the answer choices aloud in order. In this example, the student could use a variety of response modes to indicate a response. Test administrators may repeat the presentation of choices until the student indicates a response.

4.1.3.2 Additional Allowable Practices

The Kite Student Portal user interface was specially designed for students with the most significant cognitive disabilities. Testlets delivered directly to students via computer were designed to facilitate students’ independent interaction with the computer, using special devices such as alternate keyboards, touch screens, or switches as necessary. However, because computerized testing was new to many students using the DLM alternate assessment, the DLM Governance Board recognized that students would need various levels of support to interact with the computer. Test administrators are provided with general principles for allowable practices when the supports built into the system do not fully support a student’s independent interaction with the system.

To help make decisions about additional supports for computer-delivered testlets, test administrators receive training to follow two general principles. First, students are expected to respond to the content of the assessment independently. No matter which additional supports IEP teams and test administrators select, all should be chosen with the primary goal of student independence at the forefront. Even if more supports are needed to provide physical access to the computer-based system, students should be able to interact with the assessment content and use their normal response modes to indicate a selection for each item. Second, test administrators are to ensure that students are familiar with the chosen supports. Ideally, any supports used during assessment are also used consistently during routine instruction. Students who have never received a support prior to the testing day are unlikely to know how to make the best use of the support.

In order to select the most appropriate supports during testing, test administrators are encouraged to use their best professional judgment and to be flexible while administering the assessment. Test administrators are allowed to use additional supports beyond PNP options. The supports detailed below in Table 4.1 are allowed in all computer-delivered and educator-administered testlets unless exceptions are noted in the TIP.

Table 4.1: Additional Allowable Practices
Practice | Explanation
Breaks as needed | Students can take breaks during or between testlets. Test administrators are encouraged to use their best judgment about the use of breaks. The goal should be to complete a testlet in a single session, but breaks are allowed if the student is fatigued, disengaged, or having behavioral problems that can interfere with the assessment. Kite Student Portal allows for up to 90 minutes of inactivity without timing out so that test administrators and students can pause for breaks during testlet administration. In cases in which administration begins but a short break is not sufficient for the student, the EXIT DOES NOT SAVE button can be used to exit the testlet (see Figure 4.1). The test administrator and student can then return to it and start over at another time.
Individualized student response mode | The nodes assessed in the educator-administered testlets do not limit responses to certain types of expressive communication; therefore, all response modes are allowed. Test administrators can represent answer choices outside the system to maximize the student’s ability to respond. For example, for students who use eye gaze to communicate, test administrators can represent the answer choices in an alternate format or layout to ensure the student can indicate a clear response.
Use of special equipment for positioning | For students who need special equipment to access the test material, such as a slant board for positioning or Velcro objects on a communication board, test administrators are encouraged to use the equipment to maximize the student’s ability to provide a clear response.
Navigation across screens | For students who have limited experience with, motor skills for, and/or devices for interacting directly with the computer, the test administrator can assist students to navigate across screens or enter their responses.
Use of interactive whiteboard | If the student has a severe visual impairment and needs larger presentation of content than the highest magnification setting provides, the test administrator can use an interactive whiteboard or projector, or a magnification device that works with the computer screen, to enlarge the assessment to the needed size.
Represent the answer options in an alternate format | Representing the answer options in an alternate format is allowed as long as the representation does not favor one answer choice over another. For instance, if the test administrator is presenting the answer choices to a student on a communication board or using objects to represent the answer choices, the correct answer choice cannot always be closest to the student or in the same position each time.
Use of graphic organizers | If the student is accustomed to using specific graphic organizers, manipulatives, or other tools during instruction, the use of those tools is allowable during the DLM alternate assessment.
Use of blank paper | If the student requires blank, lined, or unlined paper, this can be provided. Once there is any writing on the paper, it becomes a secure testing document and must be disposed of by shredding at the conclusion of the testing session.
Generic definitions | If the student does not understand the meaning of a word used in the assessment, the test administrator can define the term generically and allow the student to apply that definition to the problem or question in which the term is used. Exceptions to this general rule are noted in the TIP for specific testlets.
Human read aloud, sign interpretation, or language translation | Allowed using speech, sign, or language translation unless prohibited for a specific testlet.

Although there are many supports and practices allowable for computer-delivered and educator-administered testlets, there are also practices that test administrators are trained to avoid, including the following:

  • Repeating the item activity again after a student has responded or in any other way prompting the student to choose a different answer
  • Using physical prompts or hand-over-hand guidance to the correct answer
  • Removing answer choices or giving hints to the student
  • Rearranging objects to prompt the correct answer—for example, putting the correct answer closer to the student

Test administrators are encouraged to direct any questions about whether a support is allowable to the DLM Service Desk or to their state education agency.

4.2 Key Features of the Year-End Assessment Model

As briefly described in Chapter 1, the DLM assessment system has two available models. This manual describes the Year-End assessment model. Consistent with the DLM Theory of Action described in Chapter 1, the DLM assessment administration features reflect multidimensional, non-linear, and diverse ways that students learn and demonstrate their learning. Test administration procedures therefore use multiple sources of information to assign testlets, including student characteristics, prior performance, and educator judgment.

In the Year-End model, the DLM System is designed to assess student learning at the end of the year. All testlets are administered in the spring assessment window; however, optional instructionally embedded testlets are available throughout the fall and winter. The instructionally embedded assessments, if administered, do not contribute to summative scoring. This assessment model yields summative results based only on testlets completed during the spring assessment window.

With the exception of writing testlets, each testlet contains items for one EE and one linkage level. In reading and mathematics, items in a testlet are aligned to nodes at one of five linkage levels for a single EE. Writing testlets cover multiple EEs and are delivered at one of two levels: emergent (which corresponds with Initial Precursor and Distal Precursor linkage levels) or conventional (which corresponds with Proximal Precursor, Target, and Successor linkage levels).

This section describes the features of the Year-End assessment model, including initialization, adaptive routing, and test administration windows.

4.2.1 Testlet Assignment

This section describes how testlets are assigned to students during the spring assessment window. Educators complete the First Contact survey, which is used to assign the linkage level of the first testlets in each subject. The linkage level for subsequent testlets is determined by an adaptive routing algorithm.

4.2.1.1 First Contact Survey

The First Contact survey is a survey of learner characteristics that covers a variety of areas, including communication, academic skills, attention, and sensory and motor characteristics. A completed First Contact survey is required for each student prior to the assignment of testlets.

The items on the First Contact survey are categorized into the following sections:

  • Special Education
  • Sensory Capabilities
  • Motor Capabilities and Health
  • Computer Instruction
  • Communication (Expressive and Receptive)
  • Language
  • Academics

Four sections of the First Contact survey are used to assign students to complexity bands in reading, mathematics, and writing: Expressive Communication, Reading Skills, Mathematics Skills, and Writing Skills. For expressive communication, reading, and mathematics, there are four complexity bands (from lowest to highest): Foundational, Band 1, Band 2, and Band 3. In writing, there are two complexity bands (from lowest to highest): Emergent and Conventional. First Contact survey items used for determining complexity bands are included in Appendix C.1. Based on the educator’s responses, the student’s assigned complexity band is automatically calculated and stored in the system.

  • For the ELA reading testlets, Kite Suite uses the responses from the Expressive Communication and Reading Skills questions to assign a student to one of four complexity bands.
  • For the mathematics testlets, Kite Suite uses the responses from the Expressive Communication and Math Skills questions to assign a student to one of four complexity bands.
  • For writing testlets, Kite Suite uses the responses from the Writing Skills question to assign a student to one of two complexity bands.

For reading and mathematics, if a different complexity band is indicated between the two sets of questions (Expressive Communication and the subject area questions), the system selects the lower band. The goal is to present a testlet that is approximately matched to a student’s knowledge, skills, and understandings. That is, within reason, the system should recommend a testlet that is neither too easy nor too difficult and that provides a positive experience for the student entering the assessment. The correspondence among common student characteristics indicated on the First Contact survey, the corresponding First Contact complexity bands, and the recommended linkage levels are shown in Table 4.2. For a description of linkage levels, see Chapter 2 of this manual.

Table 4.2: Correspondence Among Student Characteristics Recorded on First Contact Survey, Complexity Bands, and Linkage Levels
Common First Contact survey responses about the student | First Contact complexity band | Linkage level
Does not use speech, sign, or augmentative and alternative communication; does not read any words when presented in print (reading); or does not sort objects (math) | Foundational | Initial Precursor
Uses one word, sign, or symbol to communicate; recognizes symbols (reading); or sorts symbols (math) | Band 1 | Distal Precursor
Uses two words, signs, or symbols to communicate; reads at the primer to second-grade level (reading); or adds/subtracts up to 80% of the time (math) | Band 2 | Proximal Precursor
Regularly combines three or more spoken words to communicate for a variety of purposes; reads print at the third-grade level or above (reading); or regularly adds/subtracts and forms groups of objects (math) | Band 3 | Target

The writing First Contact item is used to assign the two types of writing testlets: emergent and conventional. Students whose educators indicated they write by scribbling, copying or using word banks, or writing words corresponding to some sounds are assigned an emergent-level testlet. Students whose educators indicated they write words or simple phrases, sentences or complete ideas, or paragraph-length text without copying, using spelling, are assigned the conventional writing testlet.
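
The band-assignment rules above can be summarized in a short sketch. The band names and the “use the lower band” rule for reading and mathematics come from this section; the function interface and the way educator responses are reduced to a band are simplified assumptions for illustration.

```python
# Reading/mathematics complexity bands, from lowest to highest.
BAND_ORDER = ["Foundational", "Band 1", "Band 2", "Band 3"]

def assign_subject_band(expressive_communication_band, subject_skills_band):
    """If the two question sets indicate different bands, use the lower one."""
    lower_index = min(BAND_ORDER.index(expressive_communication_band),
                      BAND_ORDER.index(subject_skills_band))
    return BAND_ORDER[lower_index]

def assign_writing_level(writes_conventionally):
    """Writing has only two bands: Emergent or Conventional."""
    return "Conventional" if writes_conventionally else "Emergent"

# Example: Band 2 on Expressive Communication but Band 1 on Reading Skills
# results in a Band 1 assignment for reading.
print(assign_subject_band("Band 2", "Band 1"))  # -> Band 1
```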

4.2.1.2 Initialization and Adaptive Routing

Each student is assigned as few as six to as many as nine testlets per subject during the spring assessment window. The number of testlets is determined by the assessment blueprint. For a description of the assessment blueprints, see Chapter 2 of this manual. In mathematics, each testlet measures a single EE, so the number of testlets a student is assigned is equal to the number of EEs on the blueprint for the student’s grade. The same is true for ELA, except that all writing EEs are measured on a single writing testlet. Thus, the number of testlets a student is assigned in ELA is equal to the number of non-writing EEs on the blueprint, plus one additional writing testlet. The system determines the linkage level for each testlet. The assignment is adaptive between testlets. Each spring testlet is packaged and delivered separately, and the test administrator determines when to schedule each testlet within the larger window.

The linkage level of the first testlet assigned to a student is based on First Contact survey responses. The correspondence between the First Contact complexity bands and first assigned linkage levels is shown in Table 4.3. The level of the writing testlet (i.e., emergent or conventional) is also assigned based on the First Contact survey, using the writing complexity band.

Table 4.3: Correspondence of Complexity Bands and Linkage Level
First Contact complexity band | Linkage level
Foundational | Initial Precursor
Band 1 | Distal Precursor
Band 2 | Proximal Precursor
Band 3 | Target

The second and subsequent testlets are assigned based on the student’s previous performance. That is, the linkage level associated with the next testlet a student receives is based on the student’s performance on the previously administered testlet. The goal is to maximize the match of student knowledge, skills, and understandings to the appropriate linkage level content. Specific explanations of this process are as follows:

  • The system adapts up one linkage level if students responded correctly to 80% or more of the items measuring the previously tested EE. If testlets were already at the highest level (i.e., Successor), they would remain there.
  • The system adapts down one linkage level if students responded correctly to less than 35% of the items measuring the previously tested EE. If testlets were already at the lowest level (i.e., Initial Precursor), they would remain there.
  • Testlets remain at the same linkage level if students responded correctly to between 35% and 80% of the items measuring the previously tested EE.

Threshold values for routing were selected with the number of items included in a testlet (typically three to five items) in mind. In a testlet that contains three items measuring the EE, if a student responds incorrectly to all items or correctly answers only one item (proportion correct less than .35), the linkage level of the testlet is likely too challenging. To provide a better match for the student’s knowledge, skills, and understandings, the student would be routed to a lower linkage level. A single correct answer could be attributed to either a correct guess or true knowledge that did not translate to the other items measuring the EE. Similarly, if a student responds correctly to at least four items on a testlet with five items (proportion correct greater than .80) measuring the EE, the linkage level of the testlet is likely too easy. The student would be routed to a higher linkage level to allow the student the opportunity to demonstrate more advanced knowledge or skill. However, if the student responds to two of the three items correctly or three of five items correctly (proportion correct between .35 and .80), it cannot be assumed the student has completely mastered the knowledge, skills, or understanding being assessed at that linkage level. Therefore, the student is neither routed up nor down for the subsequent testlet.
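
A minimal sketch of this routing rule is shown below. The linkage level order and the 35% and 80% thresholds come directly from this section; the function name and interface are assumptions for illustration.

```python
LINKAGE_LEVELS = ["Initial Precursor", "Distal Precursor", "Proximal Precursor",
                  "Target", "Successor"]

def next_linkage_level(current_level, items_correct, items_total):
    """Route the next testlet based on performance on the previously tested EE."""
    proportion_correct = items_correct / items_total
    index = LINKAGE_LEVELS.index(current_level)
    if proportion_correct >= 0.80:
        index = min(index + 1, len(LINKAGE_LEVELS) - 1)  # adapt up, capped at Successor
    elif proportion_correct < 0.35:
        index = max(index - 1, 0)                        # adapt down, floored at Initial Precursor
    return LINKAGE_LEVELS[index]                         # otherwise stay at the same level

# Example matching Figure 4.2: all items correct at Distal Precursor routes up.
print(next_linkage_level("Distal Precursor", 3, 3))  # -> Proximal Precursor
```

In this sketch, a single correct response on a three-item testlet gives a proportion of 0.33, which falls below the 0.35 threshold and routes the student down one linkage level, consistent with the reasoning above.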

Figure 4.2 provides an example of testlet adaptations for a student who completed five testlets. In the example, on the first assigned testlet at the Distal Precursor level, the student answered all of the items correctly, so the next testlet was assigned at the Proximal Precursor level. The next two testlets adapted up and down a level, respectively, whereas the fifth testlet remained at the same linkage level as the previous testlet.

Figure 4.2: Linkage Level Adaptations for a Student Who Completed Five Testlets

A flowchart showing a student taking their first testlet at the Distal Precursor level, then routing to the Proximal Precursor, Target, Proximal Precursor, and Proximal Precursor on testlets 2 through 5, respectively.

4.2.2 Assessment Administration Windows

Assessments are administered in the spring assessment window for operational reporting. Optional assessments are available during the instructionally embedded assessment window for educators to administer for formative information.

4.2.2.1 Instructionally Embedded Assessment Window

During the instructionally embedded assessment window, testlets are optionally available for test administrators to assign to their students. When choosing to administer the optional testlets during the instructionally embedded assessment window, educators decide which EEs and linkage levels to assess for each student. The assessment delivery system recommends a linkage level for each EE based on the educator’s responses to the student’s First Contact survey, but educators can choose a different linkage level based on their own professional judgment. In 2021–2022, the instructionally embedded assessment window occurred between September 13, 2021, and February 23, 2022. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the instructionally embedded assessment window ranged from 4 to 23 weeks.

4.2.2.2 Spring Assessment Window

During the spring assessment window, students are assessed on all of the EEs on the assessment blueprint in ELA and mathematics. The linkage level for each EE is determined by the system. In 2021–2022, the spring assessment window occurred between March 14, 2022, and June 10, 2022. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the spring assessment window ranged from 3 to 13 weeks.

4.3 Resources and Materials

Test administrators, school staff, district staff, and IEP teams are provided with multiple resources to support the assessment administration process.

Resources are provided on the DLM website and in the Kite Suite. Some states provide additional materials on their own customized landing page (i.e., dynamiclearningmaps.org/{statename}) of the DLM website and on their own department of education website. Test administrators are made aware of their state-specific webpage through training, manuals, webinars, and replies from Service Desk inquiries. The About DLM tab of the website includes information about topics related to the DLM System as a whole and may be of interest to a variety of audiences. To provide updates and reminders to all participating states, the DLM website also features a Test Updates section of the homepage. This is a newsfeed-style area that addresses timely topics such as assessment deadlines, resource updates, and system status. Additionally, the Test Updates page offers educators the option to subscribe to an electronic mailing list to automatically receive the same message via email without visiting the website. The DLM website also provides resources that cover assessment administration training information; student and roster data management; test delivery protocols and setup; and accessibility features, protocols, and documentation.

This section provides an overview of resources and materials available for test administrators and district-level staff.

4.3.1 Test Administrator Resources

While some resources for test administrators are available in the Kite Suite, the majority of DLM resources are available on the DLM website.

4.3.1.1 Test Administrator Resources Provided on the DLM Website

The DLM website provides specific resources designed for test administrators. These resources are available to all states (Table 4.4) to promote consistent assessment administration practices.

Table 4.4: DLM Resources for Test Administrators and States
Resource | Description
About Testlet Information Pages | Provides guidance for test administrators on the types and uses of information in the Testlet Information Pages provided for each testlet.
Accessibility Manual (PDF) | Provides guidance to state leaders, districts, educators, and Individualized Education Program (IEP) teams on the selection and use of accessibility supports available in the DLM System.
Guide to DLM Required Test Administrator Training (PDF) | Helps users access DLM Required Test Administrator Training in Moodle.
Guide to Practice Activities and Released Testlets (PDF) | Supports the test administrator in accessing practice activities and released testlets in Kite Student Portal.
Instructional Resources on the DLM Website | Provides links to additional resources for test administrators, including lists of EEs, a list of materials commonly needed for testlets, professional development modules supporting EEs, guidance on using mini-maps to plan instruction, accessing and using familiar texts, and released testlets and sample Testlet Information Pages.
Test Administration Manual (PDF) | Supports the test administrator in preparing themselves and students for testing.
Test Updates Page (webpage) | Breaking news on assessment administration activities. Users can sign up to receive alerts when new resources become available.
Training Video Transcripts (PDF) | Links to transcripts (narrator notes) for the DLM Required Test Administrator Training modules.

In addition, there are several helplet videos available on the DLM website to support assessment administration:

  • Accessibility in DLM Assessments
  • Completing the First Contact Survey and PNP Profile
  • DLM Instructionally Embedded Assessments
  • DLM Writing Testlets
  • Getting Started in Educator Portal
  • Monitoring the Assessment Using Extracts
  • More About Initial Precursor Items
  • Overview of DLM ELA Testlets
  • Overview of DLM Mathematics Testlets
  • Test Tickets and TIPs in the Spring Window
  • Using Kite Student Portal
  • Using the DLM Instruction and Assessment Planner
  • Verifying Rosters for Teachers
  • Verifying Student Data for Teachers

4.3.1.2 Test Administrator Resources Provided in Kite Suite

The resources for test administrators that are provided in the Kite Suite include the TIPs as well as the practice activities and released testlets.

4.3.1.2.1 Testlet Information Pages

TIPs provide test administrators with information specific to each testlet. Test administrators receive a TIP in Educator Portal for each testlet after it is assigned to a student, and they are instructed to review the TIP before beginning the student’s assessment.

Each TIP states whether a testlet is computer-delivered or educator-administered and indicates the number of items on the testlet. The TIP also provides information for each testlet regarding the materials needed, including substitute materials allowed.

The TIP also provides information on the exceptions to allowable supports. While a test administrator typically uses all appropriate PNP features and other flexibility tools described in the Allowable Practices section of the Test Administration Manual, the TIP indicates when it is not appropriate to use a support on a specific testlet. This may include limits on the use of definitions, translation, read aloud, calculators (for mathematics testlets), or other supports.

If there are further unique instructions for a given testlet, they are provided in the TIP. For test administrators who deliver human read aloud that includes descriptions of graphics, alternate text descriptions of images are provided.

TIPs for ELA testlets also provide the name of the text used in the testlet, identify the text as informational or literature, and label the text as familiar or unfamiliar. They also include the name of the grade-level text that the DLM text is associated with and note if assessment administration time is expected to be longer than usual because the linkage level requires a comparison between two texts. TIPs for mathematics testlets also provide information on specific mathematics terminology.

Testlets that require special setup before assessment administration begins, such as mathematics testlets designed for students with blindness or visual impairments, have additional instructions.

4.3.1.2.2 Practice Activities and Released Testlets

Practice activities and released testlets are available to support test administrators and students as they prepare for testing.

  • The educator practice activity is designed to teach test administrators how to deliver educator-administered testlets, while the student practice activity is designed to teach students about the testlets and item features in the Kite Suite.
  • The released testlets are similar to operational DLM testlets in content and format and are designed to be used for practice.

For more information on practice activities and released testlets, see Chapter 3 of this manual.

4.3.2 District-Level Staff Resources

Resources are available for three district-level supporting roles: Assessment Coordinator, Data Manager, and Technology Personnel. The Assessment Coordinator oversees the assessment process, which includes managing staff roles and responsibilities, developing and implementing a comprehensive training plan, developing a schedule for test implementation, monitoring and supporting test preparations and administration, and developing a plan to facilitate communication with parents or guardians and staff. The Data Manager manages educator, student, and roster data. Technology Personnel verify that network and testing devices are prepared for assessment administration.

Resources for each of these roles are made available on the state’s customized DLM webpage. Each role has its own manual. A prerecorded training addressing each role and an FAQ compiled from Q&A sessions are also provided. Each role is also guided to supporting resources for other roles where responsibilities overlap. For example, Data Managers are guided to the Test Administration Manual to support data-related activities that are assigned to the test administrator and to troubleshoot data issues experienced by the test administrator. Technology Personnel are also guided to the Kite and Educator Portal webpage for information and documents connected to Kite Student Portal, Local Caching Server use, supported browsers, and bandwidth requirements. Assessment Coordinators are also guided to resources developed for the Data Manager, Technology Personnel, and test administrators for specific information and supplemental knowledge of the responsibilities of each of those roles. Some of those resources include the Guide to DLM Required Test Administrator Training, the Test Administration Manual, the Test Updates webpage, and electronic mailing lists.

Descriptions of training for district-level roles are provided in Chapter 9 of this manual.

4.4 Test Administrator Responsibilities and Procedures

The Test Administration Manual (Dynamic Learning Maps Consortium, 2021b) describes procedures for test administrators, which are organized into four sets of tasks for different parts of the school year: (1) before assessments, (2) during the instructionally embedded assessment window, (3) during the spring assessment window, and (4) while preparing for the next year.

4.4.1 Before Beginning Assessments

Test administrators are directed to perform multiple steps to prepare for student testing, including confirming student eligibility to participate in the DLM alternate assessment and sharing information about the assessment with parents to prepare them for their child’s testing experience. Test administrators are also directed to review the Test Administration Manual and become familiar with available resources, including state webpages, practice activities and released testlets, and procedures for preparing to give the assessment.

  1. The manual directs test administrators to prepare for the computer-delivered aspects of the assessment system. Test administrators must activate their Kite Educator Portal account, complete the Security Agreement in Kite Educator Portal, and complete the DLM Required Test Administrator Training (see Chapter 9 of this manual). Test administrators review their state’s guidance on required and recommended professional development modules.
  2. Test administrators are also directed to review the Accessibility Manual (Dynamic Learning Maps Consortium, 2021a) and work with IEP teams to determine what accessibility supports should be provided for each student taking the DLM assessments. Test administrators record the chosen supports in the PNP in Kite Educator Portal. Test administrators are also directed to review their state’s requirements for documentation of DLM accessibility supports as testing accommodations and adjust the testing accommodations in the IEP as necessary.
  3. Test administrators are also tasked with reviewing student data, including student demographic information and roster data in Kite Educator Portal, for accuracy, and with ensuring that the PNP and the First Contact survey are updated and complete in Kite Educator Portal. They must ensure that Kite Student Portal is installed on testing devices and that student devices are compatible with it. Finally, test administrators use the practice activities and released testlets to become familiar with their role as test administrator and to familiarize students with DLM testlets.

4.4.2 Administration in the Instructionally Embedded and Spring Assessment Windows

In the optional instructionally embedded assessment window, test administrators choose appropriate EEs for instruction, retrieve instructional information for the EE, and select the EE and linkage level for the student in the Instruction and Assessment Planner. Test administrators deliver instruction until they determine the student is ready for assessment. Test administrators then confirm test assignment in the Instruction and Assessment Planner, retrieve the TIP, and gather necessary materials before beginning testing. They follow these steps for each EE and linkage level they wish to assess during the optional instructionally embedded assessment window.

In the spring assessment window, EEs are delivered in a pre-specified order until all EEs on the blueprint have been assessed, with the linkage level determined by the system. Testlets can be administered throughout the assessment window, and test administrators are encouraged to administer the testlets when the students are ready to engage with the content.

In both windows, after a testlet has been assigned, test administrators verify student usernames and passwords so that the student can access the assessment in Kite Student Portal and then administer the testlet. While testing, students can go forward and backward within a testlet as much as needed before submitting answers.

4.4.3 Preparing for Next Year

Educators are directed to prepare for the following year by evaluating students’ accessibility supports (PNP settings) with IEP teams and making decisions about supports and tools for next school year. They are also directed to review the blueprint for the next grade as a source of information to plan academic IEP goals.

4.5 Security

This section describes secure assessment administration, including test administrator training, security during administration, and the Kite Suite; secure storage and transfer of data; and plans for forensic analyses for the investigation of potential security issues. Test security procedures during item development and review are described in Chapter 3.

4.5.1 Training and Certification

Test security is promoted through the DLM Required Test Administrator Training and certification requirements for test administrators. Test administrators are expected to deliver DLM assessments with integrity and maintain the security of testlets. The training for assessment administration details test security measures. Each year, test administrators must renew their DLM Security Agreement through Kite Educator Portal (Figure 4.3). Test administrators are not granted access to Kite Educator Portal if they have not completed the Security Agreement.

Figure 4.3: Test Security Agreement Text

A screenshot of the test security agreement.

Although each state may have additional security expectations and security-related training requirements, all test administrators in each state are required to meet these minimum training and certification requirements.

4.5.2 Maintaining Security During Test Administration

Several aspects of the DLM System support test security and test administrator integrity during use of the system. Because TIPs are the only printed material, there is limited risk of exposure. Guidance regarding allowed and not-allowed practices is provided in the Test Administration Manual and on the TIPs. This guidance is intended to promote implementation fidelity and reduce the risk of cheating or other types of misadministration. For a description of fidelity to intended practices, see the description of test administration observations in section 4.7.1 of this chapter.

Agile Technology Solutions, the organization that develops and maintains the Kite Suite and provides DLM Service Desk support to test administrators in the field, has procedures in place to handle alleged security breaches (e.g., test content is made public). Any reported test security incident is assumed to be a breach and is handled accordingly. In the event of a test security incident, access is disabled at the appropriate level. Depending on the situation, the testing window could be suspended, or test sessions could be removed. Test forms could also be removed if they were exposed or if data were exposed by a form. If necessary, passwords would be changed for users at the appropriate level.

4.5.3 Security in the Kite Suite

The Kite Suite prioritizes security to ensure confidentiality, integrity, and availability for all application data. All Kite Suite data is housed within the United States, including application backups and recovery data. The Kite Suite runs in Amazon Web Services (AWS), which implements a “Shared Responsibility” model for security controls. AWS is responsible for the security of the cloud, which protects all the infrastructure and services that AWS offers: the hardware, software, networking, and physical access to the facilities, along with the associated environmental and physical security controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation, and verification of the IT controls. AWS runs an extensive compliance program reflecting the depth and breadth of its security controls and is NIST 800-53 and FedRAMP compliant. For the controls that are not covered by AWS, the Kite team aligns with NIST standards.

Application access and support access to Kite Suite data follow the principle of least privilege. Access to Kite Suite data is provided through role-based access control systems that make data available only to individuals who require it to perform their jobs. Access is regularly audited through documented daily, weekly, and monthly security checkout processes.

All Kite Suite network transmissions are encrypted to prevent interception, disruption of reception, communications deception, and/or derivation of intelligence by analysis of transmission characteristics such as signal parameters or message externals. All client web traffic is HTTPS encrypted, with support limited to modern, secure algorithms within the TLS 1.2 or greater protocol. This secures all communication during the session, including the authentication and authorization stages. Support sessions and data transfers are protected using Secure Shell (SSH), an encrypted protocol designed to give a secure connection over an insecure network, such as the internet. All internal network traffic is also encrypted to protect data in transit between network tiers and application components.

Intrusion prevention is a critical component of the Kite Suite security implementation. The Kite Suite implementation in AWS, Kite Suite security processes and procedures, and the Kite Suite development lifecycle all contribute to intrusion prevention.

All Kite Suite Windows Servers utilize Microsoft tools for antivirus, anti-malware, and software firewalls. All laptops and desktops for project staff are fully managed with current antivirus, anti-malware, and storage encryption.

To protect the integrity of test items and scoring materials, the Kite Test Security Agreement lists the security standards that all educators involved with administering tests must follow to protect both the student’s privacy as well as test items and scoring materials.

4.5.4 Secure Test Content

Test content is stored in Kite Content Builder. All items used for released testlets exist in a separate pool from items used for operational testing purposes, ensuring that no items are shared among secure and non-secure pools. Only authorized users of the Kite assessment system have access to view items. Testlet assignment logic prevents a student from being assigned the same testlet more than once, except in cases of manual override for test reset purposes.

4.5.5 Data Security

Project staff collect states’ protocols and usage rules for personally identifiable information (PII). Project staff document any applicable state laws regarding PII, state PII handling rules, and state-specific PII breach procedures. This information is housed in shared resources where Service Desk agents and other project staff can access it as needed. The protocols are followed with precision because of the sensitive nature of PII and the significant consequences tied to breaches of the data.

In the case of a security incident, privacy incident, or data breach involving PII or sensitive personal information, an investigation team implements established procedures, focusing first on mitigating immediate risk, then on identifying solutions to the identified problems and communicating with the DLM Governance Board.

4.5.6 State-Specific Policies and Practices

Some states also adopt more stringent requirements for access to test content and for the handling of secure data, above and beyond those for the overall DLM System. Each DLM agreement with a state education agency (SEA) includes a Data Use Agreement. The Data Use Agreement addresses the data security responsibilities of DLM project staff in regard to the Family Educational Rights and Privacy Act (FERPA, Family Educational Rights and Privacy Act, 1974). The agreement details the role of Accessible Teaching, Learning, and Assessment Systems (ATLAS) as the holder of the data and the rights of the SEA as the owner of the data. In many cases, the standard Data Use Agreement is modified to include state-specific data security requirements. Project staff document these requirements for each state, and the Implementation and Service Desk teams implement the requirements.

The Implementation team collects state education authorities’ policy guidance on a range of state policy issues such as individual student test resets, district testing window extensions, and allowable sharing of PII. In all cases, the needed policy information is collected on a state summary sheet and recorded in a software program jointly accessed by Service Desk agents and the Implementation team.

The Implementation team reviews the state testing policies during Service Desk agent training and provides updates during the state testing windows to supervisors of the Service Desk agents. As part of the training, Service Desk agents are directed to contact the Implementation team with any questions that require state input or that require a state to develop or amend a policy.

4.6 Evidence from the DLM System

This section describes evidence collected by the DLM System during the 2021–2022 operational administration of the DLM alternate assessment. The categories of evidence include data relating to administration time, device usage, adaptive routing, and accessibility support selections.

4.6.1 Administration Time

Estimated administration time varies by student and subject. Testlets can be administered separately across multiple testing sessions as long as they are all completed within the testing window.

The published estimated total testing time per testlet is around 5–10 minutes in mathematics, 10–15 minutes in reading, and 10–20 minutes for writing. The estimated total testing time is 60–75 minutes per student in ELA and 35–50 minutes in mathematics in the spring assessment window. Published estimates are slightly longer than anticipated real testing times because of the assumption that test administrators need time for setup. Actual testing time per testlet varies depending on each student’s unique characteristics.

Kite Student Portal captured start dates, end dates, and time stamps for every testlet. The difference between these start and end times was calculated for each completed testlet. Table 4.5 summarizes the distribution of test times per testlet, which is consistent with the distribution observed in prior years. Most testlets took around seven minutes or less to complete, with mathematics testlets generally taking less time than ELA testlets. Time per testlet may have been impacted by student breaks during the assessment (for more information about breaks, see the Accessibility section above). Testlets with shorter than expected administration times are included in an extract made available to each state. States can use this information to monitor assessment administration and follow up as necessary. For a description of the administration time monitoring extract, see section 4.7.4 of this chapter.

Table 4.5: Distribution of Response Times per Testlet in Minutes
Grade Min Median Mean Max 25Q 75Q IQR
English language arts
3 .033 3.92 4.84 88.80 2.58 5.95 3.37
4 .150 4.17 5.18 89.93 2.75 6.37 3.62
5 .100 4.25 5.23 88.93 2.78 6.51 3.72
6 .100 4.22 5.21 89.90 2.78 6.43 3.65
7 .117 4.95 5.99 88.62 3.17 7.50 4.33
8 .167 4.30 5.23 88.85 2.85 6.48 3.63
9 .217 4.72 5.93 88.55 2.90 7.30 4.40
10 .183 4.50 5.54 89.93 2.82 6.87 4.05
11 .183 5.03 6.41 89.50 3.12 7.90 4.78
12 .217 4.88 6.28 87.18 2.82 7.80 4.98
Mathematics
3 .100 1.85 2.62 87.57 1.10 3.18 2.08
4 .067 1.45 2.09 85.32 0.90 2.42 1.52
5 .067 1.62 2.32 86.05 1.00 2.70 1.70
6 .050 1.68 2.36 86.30 1.07 2.77 1.70
7 .083 1.62 2.27 82.90 0.97 2.72 1.75
8 .050 1.60 2.30 88.53 0.98 2.70 1.72
9 .083 1.72 2.44 79.50 0.98 2.95 1.97
10 .067 1.60 2.29 85.87 0.97 2.70 1.73
11 .083 1.68 2.40 88.73 1.05 2.85 1.80
12 .083 1.65 2.41 48.90 0.98 2.87 1.88
Note. Min = minimum, Max = maximum, 25Q = lower quartile, 75Q = upper quartile, IQR = interquartile range.
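
To make the computation behind Table 4.5 concrete, the following is a minimal sketch, in Python with pandas, of how per-testlet completion times could be derived from start and end time stamps and summarized by subject and grade. The column names (subject, grade, start_time, end_time) are illustrative assumptions, not actual Kite extract field names, and this is not the operational reporting code.

```python
# Minimal sketch: summarizing per-testlet completion times from start/end
# time stamps. Column names are hypothetical placeholders.
import pandas as pd

def q25(s):
    return s.quantile(0.25)

def q75(s):
    return s.quantile(0.75)

def summarize_testlet_times(testlets: pd.DataFrame) -> pd.DataFrame:
    """Return per-subject, per-grade distributions of completion time in minutes."""
    times = testlets.copy()
    times["minutes"] = (
        pd.to_datetime(times["end_time"]) - pd.to_datetime(times["start_time"])
    ).dt.total_seconds() / 60
    summary = times.groupby(["subject", "grade"])["minutes"].agg(
        Min="min", Median="median", Mean="mean", Max="max", Q25=q25, Q75=q75
    )
    summary["IQR"] = summary["Q75"] - summary["Q25"]
    return summary.round(2)
```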

4.6.2 Device Usage

Testlets may be administered on a variety of devices. Kite Student Portal captured the operating system used for each completed testlet. Although these data do not capture the specific devices used (e.g., SMART Board, switch system), they provide high-level information about how students access assessment content. For example, we can identify how often an iPad is used relative to a Chromebook or traditional PC. Figure 4.4 shows the number of testlets completed on each operating system by subject and linkage level for 2021–2022. Overall, 45% of testlets were completed on a Chromebook, 24% on a PC, 23% on an iPad, and 8% on a Mac.

Figure 4.4: Distribution of Devices Used for Completed Testlets

A bar graph showing the number of testlets completed on each device, by subject and linkage level.

4.6.3 Blueprint Coverage

Each student is assessed on all EEs included on the assessment blueprint. For a description of the assessment blueprints see Chapter 2 of this manual. Table 4.6 summarizes the number of EEs required for each grade and subject.

Table 4.6: Essential Elements Required for Blueprint Coverage
Grade English language arts (n) Mathematics (n)
  3 10 8
  4 11 8
  5 10 8
  6 11 7
  7 13 7
  8 13 8
  9 14 7
10 14 8
11 14 6

Across all grades, 96% of students in ELA and 97% of students in mathematics were assessed on all of the EEs and met blueprint requirements. Table 4.7 summarizes the total number of students and the percentage of students meeting blueprint requirements based on their complexity band for each subject. When comparing complexity band distributions, a slightly lower percentage of students in the Foundational band met the requirements. However, all complexity band groups had over 93% of students meeting the coverage requirements.

Table 4.7: Student Blueprint Coverage by Complexity Band
Complexity Band n % meeting requirements
English language arts
Foundational 12,014 93.9
Band 1 33,393 96.1
Band 2 32,876 97.1
Band 3   9,949 96.8
Mathematics
Foundational 12,380 94.4
Band 1 33,181 96.4
Band 2 34,363 97.5
Band 3   8,128 97.9
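
As an illustration of the coverage calculation summarized in Table 4.7, the sketch below computes the percentage of students assessed on all required EEs by subject and complexity band, using the required EE counts from Table 4.6. It assumes one record per student per subject with hypothetical columns (grade, subject, complexity_band, n_ees_assessed); it is not the operational scoring code.

```python
# Minimal sketch: blueprint coverage rates by subject and complexity band.
# Input column names are illustrative assumptions.
import pandas as pd

REQUIRED_EES = {  # (grade, subject): required EE count, from Table 4.6
    (3, "ELA"): 10, (4, "ELA"): 11, (5, "ELA"): 10, (6, "ELA"): 11, (7, "ELA"): 13,
    (8, "ELA"): 13, (9, "ELA"): 14, (10, "ELA"): 14, (11, "ELA"): 14,
    (3, "Mathematics"): 8, (4, "Mathematics"): 8, (5, "Mathematics"): 8,
    (6, "Mathematics"): 7, (7, "Mathematics"): 7, (8, "Mathematics"): 8,
    (9, "Mathematics"): 7, (10, "Mathematics"): 8, (11, "Mathematics"): 6,
}

def coverage_by_band(students: pd.DataFrame) -> pd.DataFrame:
    """Percentage of students assessed on all required EEs, by subject and band."""
    required = students.apply(
        lambda row: REQUIRED_EES[(row["grade"], row["subject"])], axis=1
    )
    students = students.assign(met=students["n_ees_assessed"] >= required)
    return students.groupby(["subject", "complexity_band"])["met"].agg(
        n="size", pct_meeting=lambda s: round(100 * s.mean(), 1)
    )
```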

4.6.4 Adaptive Delivery

Following the spring 2022 administration, analyses were conducted to determine the percentage of students within each grade, subject, and complexity band whose testlets adapted up, adapted down, or did not adapt in linkage level from the first to the second testlet administered. The aggregated results can be seen in Table 4.8 and Table 4.9 for ELA and mathematics, respectively.

For the majority of students across all grades who were assigned to the Foundational Complexity Band by the First Contact survey, testlets did not adapt to a higher linkage level after the first assigned testlet (ranging from 64% to 92% across both subjects). Consistent patterns were not as apparent for students who were assigned to Band 1, Band 2, or Band 3. Distributions across the three categories were more variable across grades and subjects. Results indicate that linkage levels of students assigned to higher complexity bands are more variable with respect to the direction in which students move between the first and second testlets. However, this finding of more variability in the higher complexity bands is consistent with prior years, which showed the same trend. Several factors may help explain these results, including more variability in student characteristics within this group and content-based differences across grades and subjects. Further exploration is needed in this area.

Table 4.8: Adaptation of Linkage Levels Between First and Second English Language Arts Testlets (N = 88,232)
Grade Foundational: Adapted up (%), Did not adapt (%) Band 1: Adapted up (%), Did not adapt (%), Adapted down (%) Band 2: Adapted up (%), Did not adapt (%), Adapted down (%) Band 3: Adapted up (%), Did not adapt (%), Adapted down (%)
Grade 3 13.8 86.2 59.5 21.8 18.7 77.7 14.5   7.8 90.8   6.6   2.6
Grade 4 27.8 72.2 17.6 28.9 53.5 63.1 25.6 11.3 50.2 20.6 29.2
Grade 5 28.0 72.0 24.1 31.4 44.5 59.9 33.3   6.8 88.6   8.5   3.0
Grade 6 32.0 68.0 11.8 22.0 66.2 25.0 39.4 35.6 35.9 47.7 16.4
Grade 7 28.7 71.3 31.2 25.5 43.3 52.2 35.1 12.7 69.6 25.1   5.3
Grade 8 35.7 64.3 30.6 22.3 47.1 66.4 21.7 11.9 86.1 10.3   3.6
Grade 9 13.6 86.4 32.2 32.4 35.4 18.9 32.8 48.2 65.5 22.6 11.9
Grade 10   7.6 92.4 30.5 32.5 37.0 14.9 30.8 54.2 61.1 23.9 15.0
Grade 11 28.6 71.4 12.1 36.6 51.2 62.8 24.3 12.9 65.9 21.7 12.4
Grade 12 25.0 75.0 11.4 33.9 54.7 59.7 27.5 12.9 58.8 23.0 18.2
Note. Foundational is the lowest complexity band, so testlets could not adapt down a linkage level.
Table 4.9: Adaptation of Linkage Levels Between First and Second Mathematics Testlets (N = 88,052)
Grade Foundational: Adapted up (%), Did not adapt (%) Band 1: Adapted up (%), Did not adapt (%), Adapted down (%) Band 2: Adapted up (%), Did not adapt (%), Adapted down (%) Band 3: Adapted up (%), Did not adapt (%), Adapted down (%)
Grade 3 12.6 87.4 11.2 31.4 57.4 20.1 52.3 27.7 74.6 15.7   9.7
Grade 4 14.5 85.5 19.6 33.1 47.3 66.1 26.2   7.7 78.2 18.1   3.7
Grade 5 16.3 83.7 13.2 30.9 55.9 40.8 25.9 33.3 66.3 22.9 10.8
Grade 6 17.3 82.7 14.6 42.9 42.5 28.5 36.2 35.3 49.1 43.1   7.8
Grade 7 17.3 82.7 12.9 27.5 59.7 19.8 20.9 59.3 72.5 19.2   8.3
Grade 8 15.3 84.7 15.7 49.4 34.9 30.5 54.8 14.7 49.7 22.5 27.8
Grade 9 14.5 85.5 21.6 48.0 30.4 55.8 36.1   8.0 57.9 34.5   7.6
Grade 10 14.1 85.9 22.6 30.4 47.0 34.2 20.6 45.1   3.8 21.4 74.8
Grade 11 27.7 72.3 11.0 28.2 60.8 25.8 37.0 37.2 16.0 12.3 71.7
Grade 12 25.5 74.5   6.9 30.7 62.5 23.5 38.3 38.3 15.2 14.5 70.3
Note. Foundational is the lowest complexity band, so testlets could not adapt down a linkage level.
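
The adaptation categories reported in Table 4.8 and Table 4.9 can be illustrated with the following minimal sketch, which classifies the movement between the first and second testlet's linkage levels and aggregates percentages by grade and complexity band. The linkage level ordering reflects the five DLM linkage levels; the input column names are hypothetical, and this is not the operational routing algorithm.

```python
# Minimal sketch: classifying first-to-second testlet adaptation and
# aggregating percentages by grade and complexity band.
import pandas as pd

LEVEL_ORDER = ["Initial Precursor", "Distal Precursor", "Proximal Precursor",
               "Target", "Successor"]
LEVEL_RANK = {level: i for i, level in enumerate(LEVEL_ORDER)}

def classify_adaptation(first_level: str, second_level: str) -> str:
    """Label movement between the first and second testlet's linkage levels."""
    diff = LEVEL_RANK[second_level] - LEVEL_RANK[first_level]
    if diff > 0:
        return "Adapted up"
    if diff < 0:
        return "Adapted down"
    return "Did not adapt"

def adaptation_rates(records: pd.DataFrame) -> pd.DataFrame:
    """Percentage of students in each adaptation category by grade and band."""
    records = records.copy()
    records["adaptation"] = [
        classify_adaptation(f, s)
        for f, s in zip(records["first_level"], records["second_level"])
    ]
    counts = records.groupby(["grade", "complexity_band"])["adaptation"].value_counts(
        normalize=True
    )
    return (counts * 100).round(1).unstack(fill_value=0)
```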

After the second testlet is administered, testlets continue to adapt based on the same routing rules. Table 4.10 shows the total number and percentage of testlets that were assigned at each linkage level during the spring assessment window. Because writing testlets are not assigned at a specific linkage level, those testlets are not included in Table 4.10. In ELA, testlets were fairly evenly distributed across the five linkage levels, with slightly fewer assignments at the Target linkage level. In mathematics, assignments were more concentrated at the Initial Precursor linkage level, with fewer assignments at the Target and Successor levels.

Table 4.10: Distribution of Linkage Levels Assigned for Assessment
Linkage level n %
English language arts
Initial Precursor 170,081 24.5
Distal Precursor 142,525 20.5
Proximal Precursor 134,753 19.4
Target 103,132 14.9
Successor 143,936 20.7
Mathematics
Initial Precursor 215,969 33.5
Distal Precursor 155,915 24.2
Proximal Precursor 133,261 20.7
Target   78,259 12.1
Successor   61,274   9.5

4.6.5 Administration Incidents

DLM staff annually evaluates testlet assignment to ensure students are correctly assigned to testlets. Administration incidents that have the potential to affect scoring are reported to state education agencies in a supplemental Incident File. No incidents were observed during the 2021–2022 operational assessment windows. Assignment of testlets will continue to be monitored in subsequent years to track any potential incidents and report them to state education agencies.

4.6.6 Accessibility Support Selections

Table 4.11 shows selection rates for the three categories of accessibility supports. Each of the support categories is discussed in detail above in the Accessibility section. Overall, 85,429 students (88%) had at least one support selected. The most commonly selected supports in 2021–2022 were human read aloud, test administrator enters responses for student, and spoken audio. Additionally, educators reported in the First Contact survey (see section 4.2.1.1 of this chapter) that 40% of students were able to access a computer independently, with or without assistive technology.

Table 4.11: Accessibility Supports Selected for Students (N = 96,576)
Support n %
Supports provided in Kite Student Portal
Spoken audio 51,301 53.1
Magnification 13,471 13.9
Color contrast   8,059   8.3
Overlay color   3,647   3.8
Invert color choice   2,473   2.6
Supports requiring additional tools/materials
Individualized manipulatives 40,406 41.8
Calculator 28,280 29.3
Single-switch system   3,767   3.9
Alternate form - visual impairment   2,068   2.1
Two-switch system   1,165   1.2
Uncontracted braille      109   0.1
Supports provided outside the system
Human read aloud 75,846 78.5
Test administrator enters responses for student 52,439 54.3
Partner-assisted scanning   8,490   8.8
Language translation of text   1,569   1.6
Sign interpretation of text   1,218   1.3

4.7 Evidence From Monitoring Assessment Administration

Monitoring of assessment administration was conducted using various materials and strategies. DLM project staff developed an assessment administration monitoring protocol for use by DLM staff, state education agency staff, and local education agency staff. Project staff also reviewed Service Desk contacts and hosted regular check-in calls to monitor common issues and concerns during the assessment window. This section provides an overview of all resources and supports as well as more detail regarding the assessment administration observation protocol and its use, check-in calls with states, and methods for monitoring testlet delivery.

4.7.1 Test Administration Observations

DLM project staff developed an assessment administration observation protocol to standardize data collection across observers and locations. This assessment administration protocol is available for use by state and local education agencies; however, participation in the test administration observations is not required. The majority of items in the protocol are based on direct recording of what is observed and require little inference or background knowledge. Information from the protocol is used to evaluate several assumptions in the validity argument, addressed in the Test Administration Observation Results section of this chapter.

One observation form is completed per testlet administered. Some items are differentiated for computer-delivered and educator-administered testlets. The four main sections are Preparation/Set Up, Administration, Accessibility, and Observer Evaluation. The Preparation/Set Up section documents the testing location, testing conditions, the testing device used for the session, and the test administrator’s preparation for the session. The Administration section documents the student’s response mode, general test administrator behaviors during the session, subject-specific test administrator behaviors, any technical problems experienced with the Kite Suite, and whether the student completed the testlet. The Accessibility section focuses on the use of accessibility features, any difficulty the student encountered with the accessibility features, and any additional devices the student uses during the testing session. Finally, the Observer Evaluation section asks the observer to rate overall student engagement during the session and provide any additional relevant comments.

The protocol is available as an online survey (optimized for mobile devices and with branching logic) administered through Kite Survey Solutions, a survey platform within the Kite Suite.

Training resources are provided to state education agency staff to support fidelity of use of the assessment administration protocol and to increase the reliability of the data collected (see Table 4.12). State education agency staff have access to the Test Administration Observation Training video on the use of the Test Administration Observation Protocol. Links to this video, the Guidance for Local Observers, and the Test Administration Observation Protocol are provided on the state side of the DLM website, and state education agencies are encouraged to use this information in their state monitoring efforts. State education agencies are able to use these training resources to encourage use of the protocol among local education agency staff. States are also cautioned that the protocol is only to be used to document observations for the purpose of describing the administration process; it is not to be used for evaluating or coaching test administrators or gauging student academic performance. This caution, as well as general instructions for completing and submitting the protocol, is provided in the form itself.

Table 4.12: DLM Resources for Test Administration Monitoring Efforts
Resource Description
DLM Test Administration Observation Research Protocol (PDF) Provides observers with a standardized way to describe the assessment administration.
Guide to Test Administration Observations: Guidance for Local Observers (PDF) Provides observers with the purpose and use of the observation protocol as well as general instructions for use.
Test Administration Observation Training Video (Vimeo video) Provides training on the use of the Test Administration Observation Protocol.

During 2021–2022, there were 157 assessment administration observations collected in six states. Table 4.13 shows the number of observations collected by state. Of the observations, 115 (73%) were of computer-delivered assessments and 42 (27%) were of educator-administered testlets. The observations consisted of 84 (54%) ELA reading testlets, 8 (5%) ELA writing testlets, and 65 (41%) mathematics testlets.

Table 4.13: Educator Observations by State (N = 157)
State n %
Arkansas 70 44.6
Iowa 15   9.6
Kansas   4   2.5
Missouri 13   8.3
North Dakota   1   0.6
West Virginia 54 34.4

To investigate the assumptions that underlie the claims of the validity argument, several parts of the test administration observation protocol were designed to provide information corresponding to the assumptions. One assumption addressed is that educators allow students to engage with the system as independently as they are able. For computer-delivered testlets, related evidence is summarized in Table 4.14; behaviors were identified as supporting, neutral, or nonsupporting. For example, clarifying directions (51.3% of observations) removes student confusion about the task demands as a source of construct-irrelevant variance and supports the student’s meaningful, construct-related engagement with the item. In contrast, using physical prompts (e.g., hand-over-hand guidance) indicates that the test administrator directly influenced the student’s answer choice. Overall, 60% of observed behaviors were classified as supporting, with 2% of observed behaviors reflecting nonsupporting actions.

Table 4.14: Test Administrator Actions During Computer-Delivered Testlets (n = 115)
Action n %
Supporting
Read one or more screens aloud to the student 76 66.1
Navigated one or more screens for the student 60 52.2
Clarified directions or expectations for the student 59 51.3
Repeated question(s) before student responded 33 28.7
Neutral
Used pointing or gestures to direct student attention or engagement 42 36.5
Used verbal prompts to direct the student’s attention or engagement (e.g., “look at this.”) 39 33.9
Entered one or more responses for the student 21 18.3
Used materials or manipulatives during the administration process 17 14.8
Asked the student to clarify or confirm one or more responses 11   9.6
Repeated question(s) after student responded (gave a second trial at the same item) 10   8.7
Allowed student to take a break during the testlet   6   5.2
Nonsupporting
Physically guided the student to a response   3   2.6
Reduced the number of answer choices available to the student   3   2.6
Note. Respondents could select multiple responses to this question.

For DLM assessments, interaction with the system includes interaction with the assessment content as well as physical access to the testing device and platform. The fact that educators navigated one or more screens in 52% of the observations does not necessarily indicate the student was prevented from engaging with the assessment content as independently as possible. Depending on the student, test administrator navigation may either support or minimize students’ independent, physical interaction with the assessment system. While not the same as interfering with students’ interaction with the content of the assessment, navigating for students who are able to do so independently conflicts with the assumption that students are able to interact with the system as intended. The observation protocol did not capture why the test administrator chose to navigate, and the reason was not always obvious.

A related assumption is that students are able to interact with the system as intended. Evidence for this assumption was gathered by observing students taking computer-delivered testlets, as shown in Table 4.15. Independent response selection was observed in 48% of the cases. Non-independent response selection may include allowable practices, such as test administrators entering responses for the student. The use of materials outside of Kite Student Portal was seen in 10% of the observations. Verbal prompts for navigation and response selection are strategies within the realm of allowable flexibility during test administration. These strategies, which are commonly used during direct instruction for students with the most significant cognitive disabilities, are used to maximize student engagement with the system and promote the type of student-item interaction needed for a construct-relevant response. However, they also indicate that students were not able to sustain independent interaction with the system throughout the entire testlet.

Table 4.15: Student Actions During Computer-Delivered Testlets (n = 115)
Action n %
Selected answers independently 55 47.8
Navigated screens independently 43 37.4
Selected answers after verbal prompts 30 26.1
Navigated screens after test administrator pointed or gestured 27 23.5
Navigated screens after verbal prompts 25 21.7
Used materials outside of Kite Student Portal to indicate responses to testlet items 12 10.4
Asked the test administrator a question   6   5.2
Revisited one or more questions after verbal prompt(s)   6   5.2
Independently revisited a question after answering it   3   2.6
Skipped one or more items   1   0.9
Note. Respondents could select multiple responses to this question.

Another assumption in the validity argument is that students are able to respond to tasks irrespective of sensory, mobility, health, communication, or behavioral constraints. This assumption was evaluated by having observers note whether there was difficulty with accessibility supports (including lack of appropriate available supports) during observations of educator-administered testlets. Of the 42 observations of educator-administered testlets, observers noted difficulty in 1 case (2%). For computer-delivered testlets, evidence to evaluate the assumption was collected by noting students who indicated responses to items using varied response modes such as gesturing (25%) and using manipulatives or materials outside of Kite (10%). Additional evidence for this assumption was gathered by observing whether students were able to complete testlets. Of the 157 test administration observations collected, students completed the testlet in 113 cases (72%). In all instances where the testlet was not completed, no reason was provided by the observer.

Finally, the test administration observations allow for an evaluation of the assumption that test administrators enter student responses with fidelity. To record student responses with fidelity, test administrators needed to observe multiple modes of communication, such as verbal, gesture, and eye gaze. Table 4.16 summarizes students’ response modes for educator-administered testlets. The most frequently observed response mode was a verbal indication of the response to the test administrator, who then selected the answer.

Table 4.16: Primary Response Mode for Educator-Administered Testlets (n = 42)
Response mode n %
Verbally indicated response to test administrator who selected answers 24 57.1
Gestured to indicate response to test administrator who selected answers 20 47.6
Eye gaze system indication to test administrator who selected answers   3   7.1
No observable response mode   2   4.8
Note. Respondents could select multiple responses to this question.

Computer-delivered testlets provided another opportunity to confirm fidelity of response entry when test administrators entered responses on behalf of students. This support is recorded on the PNP Profile and is recommended for a variety of situations (e.g., students who have limited motor skills and cannot interact directly with the testing device even though they can cognitively interact with the onscreen content). Observers recorded whether the response entered by the test administrator matched the student’s response. In 21 of 115 (18%) observations of computer-delivered testlets, the test administrator entered responses on the student’s behalf. In 18 (86%) of those cases, observers indicated that the entered response matched the student’s response, while the remaining 3 observers either responded that they could not tell if the entered response matched the student’s response, or they left the item blank.

4.7.2 Formative Monitoring Techniques

Several techniques were available for formative monitoring of the DLM System. First, because DLM assessments are delivered as a series of testlets, an assessment administration monitoring extract was available on demand in Kite Educator Portal. This extract allowed state and local staff to check each student’s progress toward completion of all required testlets. For each student, the extract listed the number of testlets completed and expected for each subject. To support local capacity for monitoring, webinars were delivered before the testing window opened. These webinars targeted district and school personnel who monitor assessments and had not yet been involved in DLM assessments.

Formative monitoring also occurred through regular calls with DLM staff and state education agencies. Throughout most of the year, these calls were scheduled twice per month. Topics related to monitoring that appeared on agendas for partner calls included assessment window preparation, anticipated high-frequency questions from the field, and an opportunity for state education agency-driven discussion. Particular attention was paid to questions from the field concerning sources of confusion among test administrators that could compromise assessment results. During the spring assessment window, check-in calls were hosted in the weeks between the regularly scheduled partner calls. The purpose of the check-in calls was to keep the DLM Governance Board apprised of any issues or concerns that arose during the testing window, allowing members to provide timely information to districts. States were provided with a description of the issues as well as the actions in place to remedy the situation. During these meetings, partner states were encouraged to share any concerns that had arisen from the field during the week and to provide feedback on implemented fixes, if any were necessary.

4.7.3 Monitoring Testlet Delivery

Prior to the opening of a testing window, Agile Technology Solutions staff initiated an automated enrollment process that works in conjunction with test administrator EE and linkage level selection to assign the first testlet. Students who had missing or incorrect information in Kite Educator Portal were included in error logs that detail which information was missing (e.g., First Contact survey is not submitted) or incorrect (e.g., student is enrolled in a grade that is not tested). These error logs were accessed and evaluated by Agile Technology Solutions staff. When testlets could not be assigned for large numbers of students in a state due to missing or incorrect data, DLM staff worked with relevant state education agencies to either communicate general reminders to the field or solve problems regarding specific students.

Once the student completed the first testlet, adaptive delivery drove the remaining testlet assignments. During the spring assessment window, the DLM psychometric team monitored test delivery to ensure students received testlets according to auto-enrollment specifications. This included running basic frequency statistics to verify that counts appeared as expected by grade, state, and testing model and verifying that initial testlets were correctly assigned according to the rules that govern that process.

4.7.4 Data Forensics Monitoring

Two data forensics monitoring reports are available in Educator Portal. The first report includes information about testlets completed outside of normal business hours. The second report includes information about testlets that were completed within a short period of time.

The Testing Outside of Hours report allows state education agencies to specify days and hours within a day that testlets are expected to be completed. Each state can select its own days and hours for setting expectations. For example, a state could elect to flag any testlet completed outside of Monday through Friday from 6:00 a.m. to 5:00 p.m. local time. The Testing Outside of Hours report then identifies students who completed assessments outside of the defined expected hours. Overall, 10,366 (1%) ELA and mathematics testlets were completed outside of the expected hours by 8,315 (9%) students.

The Testing Completed in a Short Period of Time report identifies students who completed a testlet within an unexpectedly short period of time. The threshold for inclusion in the report was a testlet completion time of less than 30 seconds in mathematics and less than 60 seconds in ELA. The report is intended to help state users identify potentially aberrant response patterns; however, there are many legitimate reasons a testlet may be submitted in a short time period. Overall, 52,824 (4%) testlets were completed in a short period of time by 19,437 (22%) students.
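
A minimal sketch of the two flags described above (completion outside expected hours and unexpectedly short completion time) is shown below. The expected days and hours follow the example given for the Testing Outside of Hours report, and the short-time thresholds follow the values stated above; the column names and configuration format are illustrative assumptions, not the reports' actual implementation.

```python
# Minimal sketch of the two data forensics flags: testing outside of
# state-defined hours and short completion times. Column names and the
# expected-hours configuration are hypothetical.
import pandas as pd

EXPECTED_DAYS = {0, 1, 2, 3, 4}          # Monday through Friday
EXPECTED_HOURS = range(6, 17)            # 6:00 a.m. to 5:00 p.m. local time
SHORT_TIME_SECONDS = {"ELA": 60, "Mathematics": 30}

def flag_testlets(testlets: pd.DataFrame) -> pd.DataFrame:
    """Add boolean flags for testing outside of hours and short completion times."""
    flagged = testlets.copy()
    end = pd.to_datetime(flagged["end_time"])

    flagged["outside_hours"] = ~(
        end.dt.dayofweek.isin(EXPECTED_DAYS) & end.dt.hour.isin(EXPECTED_HOURS)
    )

    duration = (end - pd.to_datetime(flagged["start_time"])).dt.total_seconds()
    thresholds = flagged["subject"].map(SHORT_TIME_SECONDS)
    flagged["short_time"] = duration < thresholds
    return flagged
```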

4.8 Evidence From Test Administrators

This section first describes evidence collected from the spring 2022 test administrator survey. Data on user experience with the DLM System as well as student opportunity to learn is evaluated annually through a survey that test administrators are invited to complete after administration of the spring assessment. Test administrators receive one survey per rostered DLM student, which collects information about that student’s assessment experience. As in previous years, the survey was distributed to test administrators in Kite Student Portal, where students completed assessments. The survey consisted of four blocks. Blocks 1 and 4 were administered in every survey. Block 1 included questions about the test administrator’s perceptions of the assessments and the student’s interaction with the content, and Block 4 included questions about the test administrator’s background. Block 2 was spiraled, so test administrators received one randomly assigned section. In these sections, test administrators were asked about one of the following topics per survey: relationship to ELA instruction, relationship to mathematics instruction, or relationship to science instruction. Block 3 was added in 2021 and remained in the survey in 2022 to gather information about educational experiences during the COVID-19 pandemic. After evidence from the spring 2022 test administrator survey is presented, this section also presents evidence collected from First Contact survey responses and educator cognitive labs.

4.8.1 User Experience With the DLM System

A total of 16,836 test administrators (69%) responded to the survey, covering 53,684 students’ experiences. Test administrators are instructed to respond to the survey separately for each of their students. Participating test administrators responded to surveys for a median of two students. Test administrators reported having an average of 11 years of experience in ELA, 11 years in mathematics, and 10 years with students with significant cognitive disabilities.

The following sections summarize responses regarding both educator and student experience with the system.

4.8.1.1 Educator Experience

Test administrators were asked to reflect on their own experience with the assessments as well as their comfort level and knowledge administering them. Most of the questions required test administrators to respond on a 4-point scale: strongly disagree, disagree, agree, or strongly agree. Responses are summarized in Table 4.17.

Nearly all test administrators (96%) agreed or strongly agreed that they were confident administering DLM testlets. Most respondents (89%) agreed or strongly agreed that the required test administrator training prepared them for their responsibilities as test administrators. Most test administrators also responded that they had access to curriculum aligned with the content that was measured by the assessments (86%) and that they used the manuals and the Educator Resources page (89%).

Table 4.17: Test Administrator Responses Regarding Test Administration
Statement SD (n, %) D (n, %) A (n, %) SA (n, %) A+SA (n, %)
I was confident in my ability to deliver DLM testlets. 218 1.4 437 2.9 6,156 40.9 8,237 54.7 14,393 95.6
Required test administrator training prepared me for the responsibilities of a test administrator. 408 2.7 1,180 7.9 7,293 48.6 6,120 40.8 13,413 89.4
I have access to curriculum aligned with the content measured by DLM assessments. 497 3.3 1,574 10.5 7,503 50.0 5,423 36.2 12,926 86.2
I used manuals and/or the DLM Educator Resource Page materials. 382 2.5 1,215 8.1 8,082 53.8 5,336 35.5 13,418 89.3
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

4.8.1.2 Student Experience

The spring 2022 test administrator survey included three items about how students responded to test items. Test administrators were asked to rate statements from strongly disagree to strongly agree. Results are presented in Table 4.18. The majority of test administrators agreed or strongly agreed that their students responded to items to the best of their knowledge, skills, and understandings; were able to respond regardless of disability, behavior, or health concerns; and had access to all necessary supports to participate.

Table 4.18: Test Administrator Perceptions of Student Experience with Testlets
Statement SD (n, %) D (n, %) A (n, %) SA (n, %) A+SA (n, %)
Student responded to items to the best of his/her knowledge, skills, and understanding 1,808 3.7 3,503 7.2 25,442 52.1 18,075 37.0 43,517 89.1
Student was able to respond regardless of his/her disability, behavior, or health concerns 2,806 5.7 4,123 8.4 24,656 50.4 17,360 35.5 42,016 85.9
Student had access to all necessary supports to participate 1,568 3.2 2,328 4.8 25,282 51.9 19,562 40.1 44,844 92.0
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

Annual survey results show that a small percentage of test administrators disagree that their student was able to respond regardless of disability, behavior, or health concerns; had access to all necessary supports; and was able to effectively use supports. In spring 2020, DLM staff conducted educator focus groups with educators who disagreed with one or more of these survey items to learn about potential accessibility gaps in the DLM System (Kobrin et al., 2022). A total of 18 educators from 11 states participated in six focus groups. The findings revealed that many of the challenges educators described were documented in existing materials (e.g., wanting clarification about allowable practices that are described in the Test Administration Manual, such as substituting materials; desired use of not-allowed practices like hand-over-hand that are used during instruction). DLM staff are using the focus group findings to review existing materials and develop new resources that better communicate information about allowable practices to educators.

4.8.2 Opportunity to Learn

Table 4.19 reports the opportunity-to-learn results. Approximately 71% of responses (n = 34,877) reported that most or all ELA testlets matched instruction, compared to 62% (n = 30,386) for mathematics. More specific measures of instructional alignment are planned to better understand the extent to which the content measured by DLM assessments matches students’ academic instruction.

Table 4.19: Educator Ratings of Portion of Testlets That Matched Instruction
Subject None (n, %) Some: < half (n, %) Most: > half (n, %) All (n, %) Not applicable (n, %)
English language arts 2,869 5.8 10,636 21.7 19,619 39.9 15,258 31.1 733 1.5
Mathematics 3,472 7.1 14,010 28.7 18,210 37.4 12,176 25.0 865 1.8

A subset of test administrators was asked to indicate the approximate number of hours spent instructing students on each of the conceptual areas by subject (i.e., ELA, mathematics). Test administrators responded using a 6-point scale: 0 hours, 0–5 hours, 6–10 hours, 11–15 hours, 16–20 hours, or more than 20 hours. Table 4.20 and Table 4.21 indicate the amount of instructional time spent on conceptual areas for ELA and mathematics, respectively. Using 11 or more hours per conceptual area as a criterion for instruction, 51% of the test administrators provided this amount of instruction to their students in ELA, and 42% did so in mathematics.

Table 4.20: Instructional Time Spent on ELA Conceptual Areas
Conceptual area Median 0 hours (n, %) 0–5 hours (n, %) 6–10 hours (n, %) 11–15 hours (n, %) 16–20 hours (n, %) >20 hours (n, %)
Determine critical elements of text 6–10 3,042 15.6 3,859 19.8 2,861 14.7 2,350 12.1 2,862 14.7 4,513 23.2
Construct understandings of text 11–15 2,151 11.1 3,505 18.1 2,829 14.6 2,453 12.7 3,032 15.7 5,399 27.9
Integrate ideas and information from text 11–15 2,637 13.7 3,643 18.9 2,916 15.1 2,568 13.3 2,965 15.4 4,567 23.7
Use writing to communicate 6–10 2,994 15.5 4,001 20.7 2,942 15.2 2,345 12.1 2,762 14.3 4,300 22.2
Integrate ideas and information in writing 6–10 3,840 19.9 3,942 20.5 2,857 14.8 2,346 12.2 2,686 13.9 3,599 18.7
Use language to communicate with others 16–20 1,212   6.3 2,394 12.4 2,398 12.4 2,230 11.5 3,177 16.4 7,949 41.1
Clarify and contribute in discussion 11–15 2,601 13.4 3,386 17.5 2,766 14.3 2,466 12.8 3,091 16.0 5,030 26.0
Use sources and information 6–10 4,472 23.1 4,106 21.2 2,951 15.3 2,285 11.8 2,447 12.7 3,066 15.9
Collaborate and present ideas 6–10 4,268 22.0 4,164 21.5 2,952 15.3 2,304 11.9 2,533 13.1 3,136 16.2
Table 4.21: Instructional Time Spent on Mathematics Conceptual Areas
Conceptual area Median 0 hours (n, %) 0–5 hours (n, %) 6–10 hours (n, %) 11–15 hours (n, %) 16–20 hours (n, %) >20 hours (n, %)
Understand number structures (counting, place value, fraction) 16–20 1,414   6.5 3,560 16.3 3,081 14.1 2,637 12.1 3,639 16.7 7,452 34.2
Compare, compose, and decompose numbers and steps 6–10 3,206 14.8 4,281 19.8 3,406 15.8 2,910 13.5 3,332 15.4 4,458 20.6
Calculate accurately and efficiently using simple arithmetic operations 11–15 3,245 15.0 3,391 15.7 2,888 13.4 2,441 11.3 3,350 15.5 6,308 29.2
Understand and use geometric properties of two- and three-dimensional shapes 6–10 4,156 19.2 5,606 25.9 3,911 18.1 3,021 14.0 2,632 12.2 2,316 10.7
Solve problems involving area, perimeter, and volume 1–5 8,864 41.0 4,648 21.5 2,909 13.5 2,033   9.4 1,707   7.9 1,457   6.7
Understand and use measurement principles and units of measure 1–5 5,610 26.0 5,608 26.0 3,703 17.2 2,567 11.9 2,221 10.3 1,870   8.7
Represent and interpret data displays 6–10 5,509 25.6 5,105 23.7 3,775 17.5 2,695 12.5 2,386 11.1 2,088   9.7
Use operations and models to solve problems 6–10 4,689 21.7 4,099 19.0 3,262 15.1 2,782 12.9 2,989 13.9 3,738 17.3
Understand patterns and functional thinking 6–10 3,269 15.1 5,058 23.4 3,848 17.8 3,179 14.7 3,069 14.2 3,212 14.8

Results from the test administrator survey were also correlated with the total linkage levels mastered by conceptual area, as reported on individual student score reports. See Chapter 7 of this manual for a description of results and reporting. A direct relationship between the amount of instructional time and the number of linkage levels mastered in an area is not expected, because some students may spend a large amount of time on an area and still demonstrate mastery at the lowest linkage level for each EE. In general, however, students who mastered more linkage levels in an area are expected to have spent more instructional time in that area. More evidence is needed to evaluate this assumption.

Table 4.22 summarizes the Spearman rank-order correlations between conceptual area instructional time and linkage levels mastered in the conceptual area for both ELA and mathematics. Correlations ranged from 0.14 to 0.36, with the strongest correlations observed for the writing conceptual areas (ELA.C2.1 and ELA.C2.2) in ELA and for the conceptual areas involving calculation and solving problems with operations and models (M.C1.3 and M.C4.1) in mathematics.

Table 4.22: Correlation Between Instructional Time and Linkage Levels Mastered by Conceptual Area
Conceptual area Correlation with instruction time
English language arts
ELA.C1.1: Determine critical elements of text .22
ELA.C1.2: Construct understandings of text .30
ELA.C1.3: Integrate ideas and information from text .29
ELA.C2.1: Use writing to communicate .36
ELA.C2.2: Integrate ideas and information in writing .35
Mathematics
M.C1.1: Understand number structures (counting, place value, fraction) .14
M.C1.2: Compare, compose, and decompose numbers and steps .29
M.C1.3: Calculate accurately and efficiently using simple arithmetic operations .31
M.C2.1: Understand and use geometric properties of two- and three-dimensional shapes .17
M.C2.2: Solve problems involving area, perimeter, and volume .27
M.C3.1: Understand and use measurement principles and units of measure .24
M.C3.2: Represent and interpret data displays .26
M.C4.1: Use operations and models to solve problems .32
M.C4.2: Understand patterns and functional thinking .21
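
For illustration, the following sketch shows how Spearman rank-order correlations like those in Table 4.22 could be computed, coding the instructional time categories ordinally and correlating them with linkage levels mastered within each conceptual area. The input column names and the time-category coding are hypothetical assumptions, and this is not the operational analysis code.

```python
# Minimal sketch: Spearman correlation between ordinally coded instructional
# time and linkage levels mastered, computed per conceptual area.
import pandas as pd
from scipy.stats import spearmanr

TIME_CODES = {"0": 0, "0-5": 1, "6-10": 2, "11-15": 3, "16-20": 4, ">20": 5}

def instruction_mastery_correlations(responses: pd.DataFrame) -> pd.Series:
    """Return the Spearman correlation for each conceptual area."""
    coded = responses.assign(
        time_code=responses["instruction_hours_category"].map(TIME_CODES)
    )
    results = {}
    for area, group in coded.groupby("conceptual_area"):
        rho, _ = spearmanr(group["time_code"], group["linkage_levels_mastered"])
        results[area] = round(rho, 2)
    return pd.Series(results, name="spearman_rho")
```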

Another dimension of opportunity to learn is student engagement with instruction. The First Contact survey (see section 4.2.1.1 of this chapter) contains two questions about student engagement during computer- and educator-directed instruction. Table 4.23 shows the percentage of students who demonstrated different levels of attention by instruction type. Overall, 87% of students demonstrated fleeting or sustained attention to computer-directed instruction and 86% of students demonstrated fleeting or sustained attention to educator-directed instruction.

Table 4.23: Student Attention Levels During Instruction
Type of instruction Demonstrates little or no attention (n, %) Demonstrates fleeting attention (n, %) Generally sustains attention (n, %)
Computer-directed (n = 83,824) 10,848 12.9 46,675 55.7 26,301 31.4
Educator-directed (n = 90,552) 12,651 14.0 56,079 61.9 21,822 24.1

4.8.3 Educator Ratings on First Contact Survey

Before administering testlets, educators complete the First Contact survey, which is a survey of learner characteristics (see section 4.2.1.1 of this chapter for more details). Because ratings on the First Contact survey are distinct from the DLM assessment (which uses only a subset of items to calculate the student complexity band for each subject), they can serve as one source of external evidence regarding the construct being measured. The First Contact survey includes academic skill items: nine in the reading section and 13 in the mathematics section.

For each academic item on the First Contact survey, test development teams reviewed the learning maps to identify tested nodes that measured the same skill. Not all First Contact items directly corresponded to nodes in the map. Tested nodes were identified for two of the reading items and nine of the mathematics items. A summary of the First Contact academic items and the number of nodes identified in the learning maps is provided in Table 4.24.

Table 4.24: First Contact Items With Nodes Identified in the Learning Maps
First Contact item Number of assessed nodes Number of linkage levels measuring the nodes
Reading
Recognizes single symbols presented visually or tactually   1   1
Identifies individual words without symbol support   1   7
Mathematics
Creates or matches patterns of objects or images   2   3
Identifies simple shapes in 2 or 3 dimensions   8   4
Sorts objects by common properties (e.g., color, size, shape)   1 11
Adds or subtracts by joining or separating groups of objects   2   6
Adds and/or subtracts using numerals 13   7
Forms groups of objects for multiplication or division   2   6
Multiplies and/or divides using numerals 15   4
Tells time using an analog or digital clock   4   2
Uses common measuring tools (e.g., ruler or measuring cup)   2   2

For each tested node identified by the test development teams, all EEs and linkage levels measuring the node were identified. A dataset was created that included student mastery of the EE and linkage level measuring the node, as well as First Contact survey responses. See Chapter 7 of this manual for a description of linkage level mastery and scoring rules. The First Contact items asked educators to use a 4-point scale to indicate how consistently students demonstrated each skill: almost never (0%–20% of the time), occasionally (21%–50% of the time), frequently (51%–80% of the time), or consistently (81%–100% of the time).

Polychoric correlations for reading and mathematics were calculated to determine the relationship between the educator’s First Contact rating and the student’s reported mastery of the linkage level measuring nodes associated with the First Contact items.

Moderate, positive correlations are expected between First Contact ratings and student mastery of the linkage level for several reasons. The First Contact items were not originally designed to align with assessment items or linkage level statements. Also, educators are required to complete the First Contact survey before testlet administration; some educators complete it at the beginning of the school year. Educators may choose to update survey responses during the year but do not have to. Therefore, First Contact ratings may reflect student knowledge or understandings before instruction, while linkage level mastery represents end-of-year performance. However, in general, higher First Contact ratings are expected to be associated with student mastery of the linkage level measuring the same skill.

Correlations for First Contact items with linkage level mastery are summarized in Table 4.25.

Table 4.25: Correlations of First Contact Item Responses to Linkage Level Mastery
First Contact section Linkage levels (n) Correlation (Min, Max, Median) Standard error (Min, Max, Median)
Reading   8 .06 .53 .47 0.01 0.05 0.02
Mathematics 45 −.08   .81 .26 0.01 0.13 0.03

Mathematics First Contact items varied most in their relationship to linkage level mastery. Because mathematics nodes represent finer-grained skills, and test development teams identified more nodes in mathematics, more correlations were calculated (n = 45) than for reading (n = 8). Mathematics results were also likely affected by sample size. As few as 297 student data points were available for some linkage levels, compared to at least 915 in reading. The decreased sample size is likely attributable to fewer students testing at the Target and Successor linkage levels (see section 4.6.4 of this chapter). Furthermore, a negative relationship between mathematics First Contact rating and linkage level mastery was observed in three instances. An example is seen in the relationship between the Successor level of the grade 3 EE M.EE.3.OA.9 and the First Contact item “Creates or matches patterns of objects or images.” The linkage level statement for this EE and level is “Determine the pattern rule; extend a pattern by applying the pattern rule.” Although the linkage level measures the node “Extend a symbolic pattern by applying a rule,” it also measures other nodes that are not aligned to any First Contact item; this combination likely contributed to the negative relationship observed. However, small sample size is associated with increased standard errors (Moinester & Gottfried, 2014), and therefore these negative correlations should be interpreted with caution.

Overall, 94% (n = 50) of the correlations were positive, indicating generally positive associations between linkage level mastery and First Contact ratings. Results for all correlations are summarized in Figure 4.5.

Figure 4.5: Relationship of First Contact Responses to Linkage Level Mastery

A scatter plot showing correlation on the x-axis and standard error on the y-axis. The size of the points is scaled by sample size, such that correlations based on smaller sample sizes appear as smaller points. As the points get smaller, the standard error increases.

4.8.4 Educator Cognitive Labs

Educator cognitive labs have been recommended as a potential source of response process evidence for alternate assessments based on alternate achievement standards in which educator ratings are the items (Goldstein & Behuniak, 2010). This approach was used for DLM educator-administered testlets because educators interpret student behavior and respond to items about the student’s response. Most of these testlets involve test administrator interpretation of the responses of students who are working on consistent, intentional communication and on foundational skills that promote their access to grade-level content. Writing testlets are also educator-administered at all linkage levels.

Cognitive labs were conducted in spring 2015 with 15 educators in five schools across two states. Educators completed think-aloud procedures while preparing for and administering educator-administered testlets in reading, writing, and mathematics. They were first presented with the TIP, which is a short document that provides the background information needed to prepare to administer the testlet (see section 4.3.1.2.1 of this chapter).

Educators were asked to think out loud as they read through the TIP. Next, the educator gathered the materials needed for the assessment and administered the testlet. Probes were sometimes used during the process to ask about educators' interpretation of the on-screen instructions and the rationale behind decisions they made during administration. When the testlet was finished, educators also completed post hoc interviews about the contents of test-administration instructions, use of materials, clarity of procedures, and interpretation of student behaviors. All labs were video-recorded, and an observer took notes during the administration.

The initial phase of analysis involved recording evidence of intended administration and sources of challenge to intended administration at each of the following stages: (1) preparation for administration, (2) interpretation of educator directions within the testlet, (3) testlet administration, (4) interpretation of student behaviors, and (5) recording student responses. Through this lens, we were able to look for evidence related to fidelity (stages 1, 2, 3, and 5) as well as response process (stage 4). These 15 labs were the first phase of data collection using this protocol. Preliminary evidence on interpretation of student behaviors indicates that the ease of determining student intent depended in part on the student's response mode.

  • Educators were easily able to understand student intent when the student indicated a response by picking up objects and handing them to the educator.
  • In a case where the student touched the object rather than handing it to the educator, the educator accepted that response and entered it, but speculated as to whether the student was just choosing the closest object.
  • When a student briefly touched one object and then another, the educator entered the response associated with the second object but commented that she was not certain if the student intended that choice.
  • When a student used eye gaze, the educator held objects within the student’s field of vision and put the correct response away from the current gaze point so that a correct response required intentional eye movement to the correct object.
  • When a student’s gesture did not exactly match one of the response options, the educator was able to verbalize the process of deciding how to select the option that most closely matched the student’s behavior. Her process was consistent with the expectations in the Test Administration Manual.
  • In one case, the educator moved objects to prepare for the next item, which took her attention away from the student and caused her to miss the eye gaze with which he indicated a response. She recorded no response. However, this was observed for a student whose communication and academic skills were far beyond what was being assessed; the testlet was not appropriate for him, and his typical response mode for DLM testlets was verbal.

4.9 Conclusion

Delivery of the DLM System was designed to align with instructional practice and be responsive to individual student needs. Assessment delivery options allow for necessary flexibility to reflect student needs while also including constraints to maximize comparability and support valid interpretation of results. The dynamic nature of DLM assessment administration is reflected in the initial input through the First Contact survey, as well as adaptive routing between testlets. Evidence collected from the DLM System, test administration monitoring, and test administrators indicates that students are able to successfully interact with the system to demonstrate their knowledge, skills, and understandings.