InterACT: “Sorry I Cannot Write More” - A Smarter Balanced Tale
The Common Core transition reached me on a more personal level this week when my 11-year-old son came home and asked, “Dad, do you know anyone who made the Smarter Balanced test?” When I replied that I do not, he said, “That’s good. It’s a bad test.” My older son chimed in as well, giving the test mixed reviews.
Some people who know me might think I’d opt my children out of standardized testing, but so far, I haven’t. While I don’t care for standardized tests, I haven’t felt the need for my sons to opt out, because I don’t believe they have been over-tested or had their time wasted on test prep; to my knowledge, no one in their school, or in our district, has suggested putting those test results to any inappropriate use for the children or teachers.
I might have considered opting out anyway because I don’t like the use of tests to rank and punish schools, but even that objection has faded for now, with California moving away from its prior accountability program and entering a transitional period to something new. To be honest, I don’t fully understand the new system yet; schools can report the same rankings or ratings this year that they had last year as we pilot test the Smarter Balanced assessments, and the eventual accountability measures will involve more local decision-making about a variety of measures beyond testing.
My sons had a few specific critiques of the test questions, user interface, and other issues. The whole family got a good laugh out of my 11-year-old telling us about his essay. Due to a glitch in the system, he reports, his full response turned out like this:
Sorry I cannot write more than a line or it deletes itself. :(
My point in sharing this anecdote is not to criticize. Next year, the test will probably be improved, and from my sons’ perspective, the novelty will be long gone. What’s too hard this year may seem normal next year. Maybe not. It’s too soon to say.
But I will say this: I think my son’s response was perfect for the situation. And I think this anecdote perfectly illustrates how stupid it would be to evaluate his teacher based on results from a first-ever administration of a flawed assessment. (Even using the old tests, the problems of value-added modeling, or VAM, in teacher evaluation are, at this point, insurmountable.) Thankfully, in California, no one will even see the results from the first run-through. And why should we? No one would be able to say reliably what the results even mean, and it will take at least a few iterations before year-to-year comparisons have even a chance of offering any real insights.
I don’t think any independent expert in educational measurement or assessment is ready to go on record vouching for the validity of value-added measures in teacher evaluation when the inputs come from brand-new assessments – tests that were never validated for that purpose in the first place. It’s mainly politicians and certain “accountability” enthusiasts in the education bureaucracy or think-tank crowd who are ready to plunge recklessly into these unknown waters. Of course, many of these individuals, and their districts and states, are reacting to pressure from the U.S. Department of Education to take these unwise steps. Despite the legal ambiguities of his approach, and the deficiencies in its research and reasoning, Secretary Arne Duncan continues to wield carrots and sticks to push VAM into teacher evaluation.
My sympathies to those of you living and working with the consequences; for the time being at least, in my state and district, the imperfections are being handled perfectly. Who knows? California’s slow and sane approach just might work.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.