The Problem with ‘Proficient’

Back in 2016, there was a little kerfuffle about how the term ‘Proficient,’ as a measure of student ability on standardized tests, should be interpreted. In a nutshell, the education news site The 74 noted that “2 out of 3 8th graders in this country cannot read or do math at grade level,” an assertion based on the 2016 NAEP scores for 8th graders and the percentage scoring ‘Proficient.’ That sparked some pushback from the Brookings Institution, which tweeted the NAEP definition of proficiency, which it felt The 74 had misinterpreted. You can click the link above for all the back and forth.

But The 74 should perhaps be excused for its interpretation of NAEP scores, because ‘Proficient’ means different things on different instruments — and even different things on the same instrument. Check this out:

  • NAEP definition of ‘Proficient’: “Students have demonstrated competence over challenging subject matter.” According to the NAEP, this is not the same as working at grade level, BUT they describe ‘Basic’ (the level below ‘Proficient’) as “partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade.” Sooooo… ‘Basic’ is partial mastery of the knowledge required for grade-level work, but ‘Proficient’ is not grade-level work. What now?
  • On the Iowa Assessment, ‘Proficiency’ is measured by cut scores, but the relationship to the student’s actual number of correct answers is not explicit. The band for ‘Proficient’ is by far the largest, and when placed in context with ‘Not Proficient’ and ‘Advanced’ it strongly resembles a bell curve. Since this is a normed test, we can probably take this to mean that ‘Proficient’ encompasses roughly half of all the students taking the test and is therefore something of a moving target from year to year: some years, proficiency might require fewer correct answers than others.
  • The STAAR test (Texas) doesn’t use ‘Proficient,’ but rather ‘Meets Grade Level.’ This would appear to be a clearer measure of a child’s ability, but the actual definition is: “…students have a high likelihood of success in the next grade or course but may still need some short-term, targeted academic intervention. Students in this category generally demonstrate the ability to think critically and apply the assessed knowledge and skills in familiar contexts.” This seems a little bizarre – short-term intervention would imply some degree of shakiness on the part of the student with regard to concepts and content. It must also be remembered that the STAAR is a test that requires little in the way of higher-order thinking, so the bar here is already fairly low. So ‘Meets Grade Level’ does not mean a kid is 100% ready for the next grade level, even though that is what it implies.
  • PARCC also uses different designations. ‘Met Expectations’ means that students “have demonstrated readiness for the next grade level/course and, ultimately, are on track for college and careers.” Like the Iowa Assessment, though, the designations are represented by cut scores that don’t clearly relate to the actual number of correct answers. The level below ‘Met Expectations’ — ‘Approached Expectations’ — is defined as “likely need[ing] academic support to engage successfully in further studies in the subclaim content area,” which sounds a lot like the STAAR’s description of ‘Meets Grade Level.’
  • And of course, now that many states have jumped ship on national tests like PARCC, many individual states have their own tests and their own designations for ‘Proficient.’

It’s a situation.

None of this even addresses the larger issue, which is the alignment of such tests — any such tests — with what’s being taught in classrooms. Kids may be getting a fine education, but if it doesn’t align with the content or contexts of the test, they may not perform well and may look less competent than they really are. Or the test may be so poorly aligned that what it really measures is socio-economic status, and the teachers look like rock stars even though they’re not teaching anything on the test. Yet these ‘performance indicators’ get picked up by states and the media and applied to measures of supposed competence. Does this mean that competence is underestimated? Overestimated? I don’t think we can really know for sure, because the answer is going to be highly dependent on the individual districts in question — even individual schools and classrooms, if there’s not good coordination across grade levels and teachers with regard to objectives and expectations for mastery. What, out of curiosity, does “challenging subject matter” (NAEP) actually entail? That’s yet another definition we’d need to fully unpack for clarity.

Here are my three takeaways from this:

  1. Be aware of these semantic issues and the underlying confusion and lack of consistency they embody. Take test results with at least a small grain of salt, and remember that they don’t in any meaningful way measure the total humanity, dignity, personality, and potential of your students. They are not the last word.
  2. Give students time to practice the content that will be on the test in the same ways (contexts) it appears on the test.  And by time, I mean 9 months or so.  Now, before everyone gets all wound up about “teaching to the test” and how it’s destroying American education, the third takeaway is this:
  3. Instruction in the classroom must go beyond the test in content, context, and cognitive demand. Students taught to “go high” — meaning, studying concepts at depth and using higher-order thinking and more complex, engaging contexts that they can apply in hands-on, real-world situations — will always be able to “go low” — meaning, do well on a typical, low-level multiple-choice instrument. Do not make the mistake of making the floor your ceiling. Investing in a lot of pre-packaged test-prep material is not a good idea and is not going to produce the kind of flexible, critical thinkers capable of complex problem solving that virtually every district says is its end goal for students.

CMSi can help your district develop high-quality curriculum that both aligns to state standards and prioritizes and refines those standards for local use. Our Curriculum Writing Training walks you through the process of curriculum development in a way that is designed to promote teacher buy-in. Your end result will be a curriculum that promotes higher-order thinking and hands-on, real-world activity for students and supports good pedagogy for your teachers, so that students are getting the best possible delivery of the content. Contact us for information – we would love to help your district start building a high-quality curriculum today!


