RESOURCES

All the research, insights, case studies, videos, and webinars you need to learn more about Meazure Learning and how we’re revolutionizing the assessment industry.


Collusion Detection and Its Application to Regulation

Categories: Webinars

Every profession, at one time or another, has to deal with exam misconduct. A data forensics program is a deterrent and quality-control measure as well as an investigative tool to support the validity of the assessment process. This webinar will provide information on data forensics and collusion detection.

How to Develop Good Multiple-Choice Questions

Categories: Webinars

Writing effective multiple-choice questions is both a science and an art. This webinar will introduce you to the principles of writing multiple-choice questions and demonstrate how these principles can be applied to the evaluation of questions in an item bank.

The Time Crunch: How Much Time Should Candidates Be Given to Take an Exam?

Categories: Reports & Guides

When developing an assessment, two major decisions a credentialing organization must make are: How many items will be on the exam? and How much time will test-takers be given to complete it? These choices can have a significant impact on fairness and validity. Once an exam has been administered, many test-takers will anecdotally report that they ran out of time and that the assessment was unfair. An important question to ask, then, is: What can credentialing organizations do to investigate and address these concerns? We explore this topic in the white paper.

Scoring Models for Innovative Items

Categories: Reports & Guides

As a way to capture the richness of job performance, many credentialing organizations are supplementing traditional multiple-choice questions (MCQs) with innovative item types. Although this view is not unanimous, one theory holds that MCQs offer a somewhat artificial representation of job tasks and that innovative item types provide a more refined way to assess candidate competence. This white paper explores this topic in depth.

Competency Validation Surveys

Categories: Reports & Guides

A competency survey is a popular instrument for validating the skills, knowledge, and behaviors included on your competency profile. It allows an organization to reach numerous practitioners working in different practice settings and gather quantitative and qualitative data that lends itself to multiple methods of analysis and interpretation. This white paper explores the options for completing such a survey as well as some important considerations. 

Evaluating Your Distractors

Categories: Reports & Guides

In addition to examining an item’s p-value and discrimination index to determine how well it is functioning, it is also important to analyze the distractor choices. Studying distractors helps subject matter experts better understand an item’s performance, so distractor analyses can inform an item’s revision process. Distractor evaluation is also helpful during key validation, as it can reveal whether an item has a key error or more than one correct answer. This white paper discusses two methods for distractor evaluation: tabular and graphical.
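To make these statistics concrete, here is a minimal sketch of a tabular distractor analysis. This is an illustrative, hypothetical implementation (a classic upper-lower group analysis), not the method from the white paper, and the function name and data are invented for the example.

```python
# Hypothetical sketch of a tabular distractor analysis for one
# multiple-choice item. Computes:
#   - p-value: proportion of test-takers answering correctly
#   - discrimination: upper-lower index (top 27% vs. bottom 27% by total score)
#   - distractor table: choice counts in the upper and lower groups

def distractor_analysis(responses, total_scores, key, options=("A", "B", "C", "D")):
    """responses: chosen option per test-taker; total_scores: exam totals."""
    n = len(responses)
    p_value = sum(1 for r in responses if r == key) / n

    # Rank test-takers by total score, then take the bottom and top 27%.
    order = sorted(range(n), key=lambda i: total_scores[i])
    k = max(1, round(0.27 * n))
    lower, upper = order[:k], order[-k:]

    def count(group, opt):
        return sum(1 for i in group if responses[i] == opt)

    # Upper-lower discrimination index for the keyed answer.
    discrimination = (count(upper, key) - count(lower, key)) / k

    # Tabular view: how often each option was chosen in each group.
    table = {opt: {"upper": count(upper, opt), "lower": count(lower, opt)}
             for opt in options}
    return p_value, discrimination, table
```

In a table like this, a well-behaved distractor is chosen more often by the lower-scoring group than the upper-scoring group; a distractor that attracts the upper group may signal a key error or a second defensible answer.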

Yardstick Measure™: Custom exam administration reports

Categories: Blog Articles

Understanding your post-administration exam reports: Have you ever wondered about the types of exam administration reports that Measure generates? Yes, there is more than one. You can retrieve an “Exam History,” “Exam Responses,” and/or “Exam Notes” report after each exam administration. Each of these reports serves a different purpose. Here’s a quick snapshot of each of them, […]

Reading Your Item Analysis Report: Item Is Making a Limited Contribution to the Measurement Capacity of the Test

Categories: Reports & Guides

Following an exam administration, Meazure Learning (formerly Yardstick) often produces an item analysis report from its proprietary software, COGs. The item analysis report provides information about each item in terms of its difficulty, discrimination, and the distribution of responses across alternatives. This white paper focuses on understanding what it means when a COGs item analysis report reads: “Item is making a limited contribution to the measurement capability of the test.”

Meazure Exam Platform: Tips for reading your item analysis report

Categories: Blog Articles

Following an exam administration, Yardstick often produces an item analysis report from its proprietary software, COGs. The item analysis report provides information about each item in terms of its difficulty, discrimination, and the distribution of responses across alternatives. Our Senior Psychometrician, Dr. Michaela Geddes, has written a handy backgrounder that focuses on a […]

Yardstick Measure™: Reporting features for item banking

Categories: Blog Articles

Using Yardstick Measure™: an item banking platform with robust reporting functionality. Many of us at Yardstick use reports generated by Measure daily, for both item banking and exam scoring purposes. All reports can be found under the “Reports” tab on the dashboard. What I like most about the reports in Measure is that they give me access […]
