RESOURCES

Read our latest news and access media resources

Scoring Models for Innovative Items

Posted:
Categories: Reports & Guides

To better capture the richness of job performance, many credentialing organizations are supplementing traditional multiple-choice questions (MCQs) with innovative item types. Although this view is not universal, one theory holds that MCQs are a somewhat artificial representation of job tasks and that innovative item types offer a more refined way to assess candidate competence. This white paper explores the topic in depth.
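One scoring question that innovative item types raise is whether to score them dichotomously (all or nothing) or with partial credit. The sketch below is a hypothetical illustration of that difference for a multiple-select item; the scoring rule and data are assumptions for demonstration, not a model from the white paper.

```python
# Hypothetical example: scoring a multiple-select item two ways.
# Dichotomous scoring awards 1 point only for a fully correct response;
# the partial-credit rule here credits each keyed option selected and
# penalizes each unkeyed option selected, floored at zero.

def dichotomous_score(selected, keyed):
    """1 if the candidate selected exactly the keyed options, else 0."""
    return 1 if set(selected) == set(keyed) else 0

def partial_credit_score(selected, keyed, options):
    """Fraction of keyed options chosen, penalized for unkeyed choices."""
    selected, keyed = set(selected), set(keyed)
    hits = len(selected & keyed) / len(keyed)
    false_alarms = len(selected - keyed) / max(1, len(options) - len(keyed))
    return max(0.0, hits - false_alarms)

keyed = {"A", "C"}
options = {"A", "B", "C", "D"}
response = {"A"}  # one keyed option chosen, one missed

print(dichotomous_score(response, keyed))                        # 0
print(partial_credit_score(response, keyed, options))            # 0.5
```

Under dichotomous scoring this candidate earns nothing; under partial credit, the response still contributes information about partial mastery.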

Evaluating Your Distractors

Posted:
Categories: Reports & Guides

In addition to examining an item's p-value and discrimination index to determine how well the item is functioning, it is also important to analyze the distractors themselves. Studying distractors helps subject matter experts better understand an item's performance, so distractor analyses can inform the item revision process. Distractor evaluation is also helpful during key validation, as it can reveal whether an item has a key error or more than one correct answer. This white paper discusses two methods for distractor evaluation: tabular and graphical.
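The tabular method can be sketched in a few lines: tally how often each option was chosen by the top- and bottom-scoring candidates. A well-functioning distractor attracts more low scorers than high scorers. The function name and data below are illustrative assumptions, not the white paper's procedure.

```python
# A minimal sketch of a tabular distractor analysis, assuming a small
# response vector. Candidates are split into upper and lower halves by
# total test score, and each option's selections are counted per half.

def distractor_table(choices, total_scores, options):
    """choices: each candidate's selected option on the item under study.
    total_scores: each candidate's total test score.
    Returns {option: (count in upper half, count in lower half)}."""
    ranked = sorted(range(len(choices)), key=lambda i: total_scores[i])
    half = len(ranked) // 2
    lower, upper = set(ranked[:half]), set(ranked[half:])
    table = {}
    for opt in options:
        hi = sum(1 for i in upper if choices[i] == opt)
        lo = sum(1 for i in lower if choices[i] == opt)
        table[opt] = (hi, lo)
    return table

choices      = ["A", "B", "A", "C", "A", "B", "D", "A"]  # key is A
total_scores = [ 90,  55,  82,  40,  75,  50,  45,  88]
print(distractor_table(choices, total_scores, ["A", "B", "C", "D"]))
# → {'A': (4, 0), 'B': (0, 2), 'C': (0, 1), 'D': (0, 1)}
```

Here the keyed option A is chosen only by upper-half candidates, while each distractor draws only lower-half candidates, which is the pattern one hopes to see in the table.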

Reading Your Item Analysis Report: Item Is Making a Limited Contribution to the Measurement Capability of the Test

Posted:
Categories: Reports & Guides

Following an exam administration, Meazure Learning (formerly Yardstick) often produces an item analysis report from its proprietary software, COGs. The report describes each item in terms of its difficulty, discrimination, and the distribution of responses across alternatives. This white paper focuses on understanding what it means when a COGs item analysis report reads: “Item is making a limited contribution to the measurement capability of the test.”

Meazure Exam Platform: Tips for Reading Your Item Analysis Report

Posted:
Categories: Blog Articles

Following an exam administration, Yardstick often produces an item analysis report from its proprietary software, COGs. The report describes each item in terms of its difficulty, discrimination, and the distribution of responses across alternatives. Our Senior Psychometrician, Dr. Michaela Geddes, has written a handy backgrounder that focuses on a […]

Item Response Theory 101: Assessing Candidate Ability

Posted:
Categories: Reports & Guides

In the testing world, mention of Item Response Theory (IRT) conjures up images of the cutting-edge, state-of-the-art practices a program needs to be seen as modern and valid. The purpose of this white paper is to introduce IRT in nontechnical language. Because IRT is such a broad topic, this white paper focuses on how IRT can be used to assess test-taker ability.
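The core object in IRT is the item response function: the probability that a candidate of a given ability answers an item correctly. The sketch below uses the common two-parameter logistic (2PL) model; the parameter values are illustrative assumptions, not figures from the paper.

```python
# A minimal sketch of the 2PL item response function, assuming
# discrimination a, difficulty b, and candidate ability theta.
import math

def p_correct(theta, a, b):
    """Two-parameter logistic model: P(correct | theta, a, b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An average-difficulty item (b = 0) with moderate discrimination (a = 1):
for theta in (-2, 0, 2):
    print(theta, round(p_correct(theta, a=1.0, b=0.0), 3))
# → -2 0.119
#    0 0.5
#    2 0.881
```

A candidate whose ability equals the item's difficulty has a 50% chance of answering correctly; ability estimation in IRT amounts to finding the theta that best explains a candidate's observed pattern of responses across such curves.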

How Long Does It Take To Create a Multiple Choice Question?

Posted:
Categories: Blog Articles

A client with whom Yardstick (now Meazure Learning) is working to develop a new certification program recently asked me this question. And it’s a great question. To plan exam development activities, including recruiting subject matter experts (SMEs) to write multiple-choice questions (MCQs), realistic expectations need to be provided as […]