In 2011, Georgia became the first country in the world to use computer adaptive testing for its high school exit exams. That May, 47,000 grade 12 students sat down in front of brand new netbook computers to complete their finals, covering all eight subjects in their curriculum.
While the decision to use adaptive testing on such a large scale was influenced by many political, economic and logistical considerations, one stand-out factor that tipped the scales in its favor was security. It was an efficient way to make sure no two students got the same test.
With the rise of adaptive curricula and personalized instruction, however, adaptive testing has become a topic of conversation for reasons beyond efficiency and security. More and more, the dialogue revolves around the potential to gauge how much students have learned, then identify what challenges they’re ready for next – or what gaps need to be met – on an individual scale.
What is adaptive testing anyway?
An adaptive test serves up questions to students based on their previous responses. At its core is a large bank of questions, each ranked in terms of its difficulty. The questions it selects depend on that difficulty scale, along with what a student has already answered correctly and incorrectly.
Flying through the test with a perfect score? You’ll get more challenging problems. Struggling to get the answers right? Your next question will be an easier one. The score at the end considers both difficulty ranking and student performance.
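That selection loop can be sketched in a few lines of code. This is a simplified illustration, not how any real testing engine works: actual adaptive tests typically use item response theory to estimate ability, while this sketch just steps difficulty up or down one notch per answer. The question bank, the 1–10 difficulty scale, and the step size are all invented for the example.

```python
import random

# Hypothetical question bank: five placeholder question IDs at each
# difficulty level from 1 (easiest) to 10 (hardest).
QUESTION_BANK = {
    d: [f"question-{d}-{i}" for i in range(5)]
    for d in range(1, 11)
}

def next_question(difficulty, last_answer_correct):
    """Pick the next question: step difficulty up after a correct
    answer, down after an incorrect one, staying within 1-10."""
    if last_answer_correct:
        difficulty = min(difficulty + 1, 10)
    else:
        difficulty = max(difficulty - 1, 1)
    return difficulty, random.choice(QUESTION_BANK[difficulty])

# Example: a student starts mid-scale, answers two questions
# correctly, then misses one.
level = 5
for correct in [True, True, False]:
    level, question = next_question(level, correct)
```

After two correct answers and one miss, the student lands at difficulty 6 rather than back at the starting level, which is the basic idea: the final score reflects not just how many answers were right, but how hard the questions were.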
On top of enabling personalized learning, adaptive testing comes with a few other benefits:
They’re fast: Studies have shown they can achieve the same accuracy as fixed-form tests in just half the testing time. For parents, teachers, and administrators who feel their students spend too much time taking tests, this can make a big difference.
They’re engaging: They aim to provide the same level of challenge for each student. High achievers receive more difficult problems, meaning they won’t be bored with questions that are too easy; students who are struggling won’t be discouraged when they scramble for answers they can’t reach.
They’re real time: Teachers get immediate insights into how students are learning the class material. Rather than waiting for all tests to be graded and data to be analyzed, they can start making decisions in their classrooms to help their students right away.
They focus on what comes next: Adaptive tests look for that challenge line between what a student understands well and what they need to understand better. Knowing where that line lies helps teachers direct where they need to take their instruction next.
While adaptive tests are often promoted as a more balanced, fairer method of assessment, they have limitations that schools need to consider seriously before jumping on board:
Computer literacy: They rely on students taking the test on a computer, so those who aren’t comfortable with technology are at a disadvantage. Was an answer incorrect because the student didn’t know the answer, or because they have weaker computer skills?
Cost: Adaptive tests also require that each student has access to some kind of computer. For school districts lacking in funding, providing a 1:1 ratio of hardware along with the software and technical support needed to administer tests is a real barrier.
What they do best
While they’re often used as a replacement for multiple-choice summative tests, there are two areas where adaptive tests really stand out:
Figuring out where students are starting: Because they can gauge familiarity with content that stretches beyond grade levels, they’re a great tool to discover what each student knows when you have an incoming class of new students. Knowing what each student has learned helps you paint a picture of your new classroom and plan the best instructional path going forward.
Seeing how students are progressing: This one’s all about tracking data over time. As you identify gaps in learning and adjust your instruction to meet them, you can see the results on the next test.
And you’ll find lots of examples in use, too. Of course, there are the Georgia exit exams for graduating high-schoolers. The GMAT adopted adaptive technology as well, back in 1998. Many professional designations require taking one. Even Binet’s IQ test, first developed in 1905, was a form of adaptive testing (though without the benefit of computer analysis).
As we move closer to a personalized classroom with truly targeted and tailored instruction for individual learning, this is an area to keep your eye on.