Remarks by Richard Innes of the Bluegrass Institute for Public Policy Solutions:
Beyond the Common Core State Standards

Beyond the issue of the Common Core State Standards looms the equally critical issue of what the new state tests will look like.
Educational Testing Service recently released one-page summaries of the test proposals from the two separate consortia working on this. One is known as "The Smarter Balanced Assessment Consortium" (SBAC) and the other is "The Partnership for the Assessment of Readiness for College and Careers" (PARCC).
As I read these, I was struck by their similarity to the description of the performance-based assessment program that Kentucky tried in the 1990s: KIRIS.
KIRIS was supposed to be a performance-based assessment. It relied entirely on open response written questions (multiple choice questions were administered intermittently, but never counted), performance events, and portfolios.
The question creators believed students would come up with an estimation process such as cutting the paper into fourths, counting the images in one of the smaller areas, and then multiplying by four to estimate the overall total.
Of course, by fourth grade it should have been a trivial task for the students to simply count up all the images on the paper, but that apparently never occurred to the KIRIS performance event creation team.
In any event, the important point is that these sorts of events proved impossible to manage in Kentucky.
Ultimately, the performance events collapsed in 1996 when the middle school performance event generated totally unusable results. The fallout led to the legislature completely removing performance events from the testing program along with a concerted, but unsuccessful, effort to obtain damages from the testing company that created the performance events.
Eventually, the rest of KIRIS was scrapped as well, and yet another follow-on program, CATS, met the same fate.
Flash forward to today.
What in the current proposals looks much different from what has already been tried, and failed, sometimes twice, in Kentucky?
Will the new consortia come up with a way to create different performance events from year to year that can be linked and equated with high accuracy? This is absolutely essential for a valid and reliable longitudinal assessment. Can the process support changing questions frequently to preclude cheating?
Will the consortia come up with a way, never found in Kentucky with either KIRIS or CATS, to ensure the tests contain enough questions to create valid and reliable scores for individual students if those tests include open response questions that are time-consuming to administer and grade?
How will the open response questions and performance events be graded? If by outside scorers, can states afford to hire scorers with adequate subject knowledge and grading skills (a problem not solved with KIRIS or CATS)? If teachers do the scoring, how will score inflation be avoided when results are used for accountability (a problem never solved with KIRIS or CATS)?
With many states ultimately gambling a huge investment on the new Common Core based testing program, these issues need to be resolved in a careful, thoughtful manner.