Research
Here you will find a curated collection of reports and white papers produced by Smarter Balanced and our partners.
Simulations
CRESST, 2017
This report describes a CRESST study simulating the adaptive administration of the Smarter Balanced summative assessments using the general and accommodated item pools from the 2016-17 school year. The study examined properties of the simulated tests (such as blueprint fulfillment and item exposure) and the quality of the resulting examinee score estimates (including bias and precision).
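The full methodology is in the report itself; as a rough, hypothetical sketch of how such simulation summaries are commonly tabulated (none of the names or numbers below come from the study), bias, RMSE, and item exposure rates for a simulated administration might be computed like this:

```python
import numpy as np

def score_recovery_stats(true_theta, est_theta):
    """Bias and precision (RMSE) of simulated examinee score estimates."""
    err = np.asarray(est_theta) - np.asarray(true_theta)
    return {"bias": err.mean(), "rmse": np.sqrt((err ** 2).mean())}

def exposure_rates(administered_items, pool_size):
    """Proportion of simulated examinees who saw each item in the pool."""
    counts = np.zeros(pool_size)
    for items in administered_items:          # one list of item indices per examinee
        counts[np.asarray(items)] += 1
    return counts / len(administered_items)

# Toy illustration with made-up numbers
rng = np.random.default_rng(0)
true_theta = rng.normal(size=1000)
est_theta = true_theta + rng.normal(scale=0.3, size=1000)
print(score_recovery_stats(true_theta, est_theta))
rates = exposure_rates([[0, 3, 4], [1, 3], [0, 2, 3]], pool_size=5)  # ~[0.67, 0.33, 0.33, 1.0, 0.33]
```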
AIR, 2017
This report summarizes AIR’s simulations of the 2016-17 adaptive administrations in grades 3–8 and 11, covering ELA/L in English and Braille and mathematics in English, Braille, and Spanish.
Read: Smarter Balanced Summative Assessments Simulation Results (2016-17)
CRESST, 2015
This report describes a study simulating the adaptive administration of the Smarter Balanced summative assessments. The study examined properties of the simulated tests (such as blueprint fulfillment and item exposure) and the quality of the examinee score estimates (including bias and precision). The simulated tests included both the computerized adaptive test (CAT) and performance task (PT) components, mimicking the operational summative tests.
Read: Simulation-based Evaluation of the Smarter Balanced Summative Assessments
CRESST, 2016
This report describes a study simulating the adaptive administration of the Smarter Balanced summative assessments using the accommodated item pools from the 2014–15 school year.
AIR, 2014
Prior to the 2014-15 test administration, the American Institutes for Research (AIR) conducted simulations to evaluate the implementation and quality of the adaptive item-selection and scoring algorithms.
Read: Testing Procedures for Adaptive Item-Selection Algorithm
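The linked report specifies the operational procedures; purely as an illustrative sketch (not AIR’s implementation), a basic maximum-information selection step under a 2PL IRT model, one common building block of adaptive item selection, could look like this:

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of 2PL items at ability estimate theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta, a, b, administered):
    """Choose the unadministered item with maximum information at theta.

    Operational adaptive algorithms layer blueprint (content) and item
    exposure constraints on top of, or in place of, pure information
    maximization; those constraints are omitted here.
    """
    info = item_information_2pl(theta, np.asarray(a, float), np.asarray(b, float))
    if administered:
        info[list(administered)] = -np.inf    # never re-administer an item
    return int(np.argmax(info))

# Toy pool of five items: discriminations a, difficulties b
next_item = select_next_item(0.4, a=[0.8, 1.2, 1.0, 1.5, 0.9],
                             b=[-1.0, 0.0, 0.5, 1.0, 2.0], administered={1})
```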
Research
2017
Smarter Balanced contracted with Measurement Incorporated (MI) to validate the cut scores for grades 9 and 10 as part of a larger contract to create item maps and playlists for 96 interim assessment blocks (IABs). The grades 9 and 10 cut scores validated by MI had previously been interpolated from an American Institutes for Research (AIR) analysis of the cut scores set in 2014.
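The actual interpolation procedure is documented in AIR’s 2014 analysis; the snippet below is only a generic illustration of linear interpolation between adjacent adopted cut scores, with made-up scale values and a hypothetical function name:

```python
def interpolate_cuts(cut_grade8, cut_grade11):
    """Linearly interpolate grade 9 and 10 cuts between grades 8 and 11.

    Illustration only; the values used here are not the adopted Smarter Balanced cuts.
    """
    step = (cut_grade11 - cut_grade8) / 3.0        # three grade steps: 8->9, 9->10, 10->11
    return cut_grade8 + step, cut_grade8 + 2.0 * step

grade9_cut, grade10_cut = interpolate_cuts(2500.0, 2560.0)   # -> 2520.0, 2540.0
```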
HumRRO, 2016
The goal of this study was to gather comprehensive evidence about the alignment of the Smarter Balanced summative assessments to the Common Core State Standards (CCSS). The alignment analysis covered range of content, balance of content, and cognitive complexity. To examine these facets of alignment, HumRRO conducted a series of workshops during which participants reviewed the alignment among the Smarter Balanced content and item specifications, the Smarter Balanced blueprint, and the CCSS.
Read: Smarter Balanced Assessment Consortium Alignment Study Report
CRESST, 2015
This report describes the procedures used to obtain parameter estimates for items appearing on the 2014-15 Smarter Balanced Assessment Consortium summative paper-and-pencil forms.
Read: Initial Report on the Calibration of Paper and Pencil Forms
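The report documents the operational calibration models and software; the sketch below is only a generic illustration, under simplifying assumptions (a Rasch model, a standard-normal ability prior, made-up simulated responses), of how marginal maximum likelihood estimation of item difficulties works:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def rasch_mml_neg_loglik(b, responses, nodes, weights):
    """Marginal negative log-likelihood of Rasch item difficulties b.

    Abilities are integrated out over a standard-normal prior using a
    simple fixed quadrature grid (nodes, weights).
    """
    # P(correct) at each quadrature node for each item: shape (n_nodes, n_items)
    p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))
    # Likelihood of each examinee's response pattern at each node
    like = np.prod(np.where(responses[:, None, :] == 1, p[None], 1.0 - p[None]), axis=2)
    return -np.sum(np.log(like @ weights))

# Quadrature grid approximating a N(0, 1) ability distribution
nodes = np.linspace(-4.0, 4.0, 21)
weights = norm.pdf(nodes)
weights /= weights.sum()

# Made-up simulated responses (500 examinees, 10 Rasch items)
rng = np.random.default_rng(0)
true_b = rng.normal(size=10)
theta = rng.normal(size=500)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_b[None, :])))
responses = (rng.random((500, 10)) < prob).astype(int)

result = minimize(rasch_mml_neg_loglik, x0=np.zeros(10),
                  args=(responses, nodes, weights), method="L-BFGS-B")
estimated_b = result.x
```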
Sireci, 2012
This report recommends research that Smarter Balanced should conduct to evaluate the degree to which the Consortium is accomplishing its goals and to demonstrate that the assessment system adheres to professional and federal guidelines for fair, high-quality assessment.
2010
The Smarter Balanced Assessment Consortium’s application for the Race to the Top Assessment program. The U.S. Department of Education awarded a grant to the Consortium on September 28, 2010.
Technical Documentation and Achievement Levels
2015
Achievement level setting for the Smarter Balanced assessments in mathematics and English language arts/literacy occurred in three phases: an online panel, an in-person workshop, and a cross-grade review. This report documents each of the three phases and provides results and recommended cut scores for the Smarter Balanced assessments to be given in spring 2015.
Read: Achievement Level Setting Final Report
AIR, 2013
Smarter Balanced cognitive labs, conducted in collaboration with the American Institutes for Research (AIR) in fall 2013, consisted of 14 small think-aloud studies on topics related to an automated test delivery system. The studies were designed to determine how effectively various technology-enhanced item types assess the cognitive processes indicated in the Smarter Balanced Content Specifications.
2013
The Smarter Balanced Assessment Consortium held an Achievement Level Descriptor (ALD) Writing Workshop in October 2012 to draft an initial set of ALDs and to review and comment on a college content-readiness policy. The panel developed a set of Policy, Range, and Threshold ALDs for English language arts and mathematics and provided feedback on the Smarter Balanced College Content-Readiness Policy.
Read: Technical Report: Initial Achievement Level Descriptors