Here you will find a curated collection of reports and white papers produced by Smarter Balanced and our partners.
This report describes a study by CRESST simulating the adaptive administration of the Smarter Balanced summative assessments using the general and accommodated item pools from the 2016-2017 school year. The study examined properties of the simulated tests (such as blueprint fulfillment and item exposure) and the quality of the resulting examinee score estimates (including bias and precision).
Read: Simulation-based Evaluation of the 2016-2017 Smarter Balanced Summative Assessments: General and Accommodated Item Pools
This report summarizes AIR's simulations of the 2016-2017 adaptive administrations in grades 3–8 and 11 for ELA/L in English and Braille, and for mathematics in English, Braille, and Spanish.
Read: Smarter Balanced Summative Assessments Simulation Results (2016-17)
This report describes a study simulating the adaptive administration of the Smarter Balanced summative assessments. The study was conducted to examine properties of the simulated tests (such as blueprint fulfillment and item exposure) and the quality of the examinee score estimates (including bias and precision). The simulated tests included both the computerized adaptive test (CAT) and performance task (PT) components, thus mimicking the operational summative tests.
Read: Simulation-based Evaluation of the Smarter Balanced Summative Assessments
This report describes a study simulating the adaptive administration of the Smarter Balanced summative assessments using the accommodated item pools from the 2014–15 school year.
Read: Simulation-based Evaluation of the 2014-2015 Smarter Balanced Summative Assessments: Accommodated Item Pools
Prior to test administration in 2014-2015, the American Institutes for Research (AIR) conducted simulations to verify the implementation and evaluate the quality of the adaptive item-selection and scoring algorithms.
Read: Testing Procedures for Adaptive Item-Selection Algorithm
Smarter Balanced contracted with Measurement Incorporated (MI) to validate cut scores for grades 9 and 10 as part of a larger contract to create item maps and playlists for 96 interim assessment blocks (IABs). The grades 9 and 10 cut scores validated by MI had previously been interpolated from an analysis, performed by the American Institutes for Research (AIR), of cut scores set in 2014.
The goal of this study was to gather comprehensive evidence about the alignment of the Smarter Balanced summative assessments to the Common Core State Standards (CCSS). The alignment analysis examined the range of content, the balance of content, and cognitive complexity. To evaluate these facets of alignment, HumRRO conducted a series of workshops during which participants reviewed the alignment among the Smarter Balanced content and item specifications, the Smarter Balanced blueprint, and the Common Core State Standards.
Read: Smarter Balanced Assessment Consortium Alignment Study Report
This report describes the procedures used in obtaining parameter estimates for items appearing on the 2014-2015 Smarter Balanced Assessment Consortium summative paper-pencil forms.
Read: Initial Report on the Calibration of Paper and Pencil Forms
This report informs Smarter Balanced of research that should be done to evaluate the degree to which the Consortium is accomplishing its goals and to demonstrate that the assessment system adheres to professional and federal guidelines for fair and high-quality assessment.
This is the Smarter Balanced Assessment Consortium's application for the Race to the Top Assessment program. The U.S. Department of Education awarded a grant to the Consortium on September 28, 2010.
Read: Race to the Top Assessment Program Application for New Grants: Comprehensive Assessment Systems
Technical Documentation and Achievement Levels
Achievement level setting for Smarter Balanced assessments in mathematics and English language arts/literacy occurred in three phases: an online panel, an in-person workshop, and a cross-grade review. This report documents each of the three phases and provides results and recommended cut scores for the Smarter Balanced assessments administered in spring 2015.
Read: Achievement Level Setting Final Report
Smarter Balanced cognitive labs, conducted in collaboration with the American Institutes for Research (AIR) in fall 2013, consisted of 14 small think-aloud studies that addressed topics pertaining to an automated test delivery system. The studies were designed to determine how effective various technology-enhanced item types are in assessing the cognitive processes indicated within the Smarter Balanced Content Specifications.
The Smarter Balanced Assessment Consortium held an Achievement Level Descriptor (ALD) Writing Workshop in October 2012 to draft an initial set of ALDs and to review and comment on a College Content-Readiness policy. The panel developed a set of Policy ALDs, Range ALDs, and Threshold ALDs for English language arts and mathematics and provided valuable feedback on the Smarter Balanced College Content-Readiness Policy.
Read: Technical Report: Initial Achievement Level Descriptors
Smarter Balanced Findings
In 2021, National PTA and Smarter Balanced collaborated on a project to better understand how to help close the assessment system literacy gap with parents and meet the needs of parents around assessment reporting. National PTA, working in partnership with Edge Research, sought to examine how parents prefer to receive information about their child's academic progress and whether the Smarter Balanced system, score reports, and standardized year-end assessments are accessible and useful for families. To learn more about the focus groups and their findings, view the full report here.
Assessing the Options: Considerations for provision of choice in assessment discusses types of choice and related goals and challenges relevant to assessment systems.
Recommendations: The brief includes a range of potential ways state education agencies and local education agencies can partner with professional development (PD) providers and assessment developers, including:
- Help teachers promote a culture of autonomy in their classrooms wherein students’ choices converge upon common, recognized learning goals and targets.
- Provide teachers with high-quality professional development on personalizing assessment by allowing students to make choices in assessment.
- Ensure that choice-related innovations introduced into assessment systems and procedures are sufficiently supported by empirical evidence of their effectiveness.
Read: Assessing the Options: Considerations for provision of choice in assessment
Leveraging Student Perspective: Considerations for connecting assessment systems and multiple ways of knowing seeks to support state education leaders and assessment developers in incorporating students' multiple ways of knowing during the design, development, and implementation of a balanced assessment system.
Recommendations: The brief includes a range of suggested ways state agencies can partner with assessment developers and local education agencies, including:
- Develop training, classroom materials, and resources for teachers (e.g., rubrics, observation tools, surveys) that help integrate assessment of and for learning and promote assessment system literacy.
- Create spaces that help educators make connections across school-home and school-community environments.
- Find ways to help students integrate their knowledge and experiences into assessment tasks and promote the development of assessment systems that incorporate considerations for students.
Read: Leveraging Student Perspective: Considerations for connecting assessment systems and multiple ways of knowing
Content Progressions & Clustering Across Instructional Materials: Viability for Supporting the Design of a Through-Year Assessment Model
Prepared for Smarter Balanced by Shelbi K. Cole & Carey Swanson
This paper examines existing through-year assessment models and analyzes the most widely used curricula in terms of how well they would work with through-year assessment. It makes strong recommendations for strengthening the interim and formative resources available to educators, piloting innovative and alternative models, developing pathways for authentic integration with the interim system, and partnering with key curriculum providers and district users.
Read: Content Progressions & Clustering Across Instructional Materials: Viability for Supporting the Design of a Through-Year Assessment Model
Through-Year Assessments: Practical Considerations for LEA Implementation
Prepared for Smarter Balanced by the New Teacher Center
As State Educational Agencies (SEAs) weigh the advantages and disadvantages of adopting this emerging assessment model, it is important to consider the impact such a decision would have on Local Educational Agencies (LEAs). This paper explores practical considerations for LEAs managing the implementation of a statewide through-year model alongside their instructional priorities. Key implications include defining a theory of action for assessment; ensuring alignment and coherence among curriculum, instruction, and assessment; developing a plan to support teachers and students; sharing information about assessments with families and other key stakeholders; and managing the administrative systems of testing and assessment.
Read: Through-Year Assessments: Practical Considerations for LEA Implementation
An Introduction to Considerations for Through-Year Assessment Programs: Purposes, Design, Development, Evaluation
A Paper Prepared for the Smarter Balanced Assessment Consortium
The paper, written for Smarter Balanced by Nathan Dadey and Brian Gong of the Center for Assessment, is one of the most comprehensive pieces to date on the purposes and designs of through-year assessments, with reflections on current models seen across the country.
This document is written primarily for policy makers and state department of education staff who are considering through-year assessments, as well as the consultants and contractors state departments rely on. The document identifies essential considerations for designing or evaluating a through-year assessment program. This document is not, however, a comprehensive encyclopedia of all possible through-year assessment designs and topics. It is not a handbook for how to design and construct a through-year assessment. It is not a comprehensive review of the literature and practical work relevant to through-year assessments. It offers no "thumbs-up/thumbs-down" verdict regarding particular through-year assessment efforts by specific states or vendors. And it certainly is not a crystal ball regarding what the future might bring to federal Peer Review or state laws. It does provide a firm foundation, however, for considering the "whether," "why," and "what" of through-year assessment design.
Read: An Introduction to Considerations for Through-Year Assessment Programs: Purposes, Design, Development, Evaluation