Responsible Use of AI at Meazure Learning
Artificial intelligence (AI), including machine learning, generative AI, and AI agents, has many potential benefits for our customers in both professional credentialing and higher education. However, using AI also comes with a responsibility to mitigate risks, including those related to fairness and data privacy.
Meazure Learning takes this responsibility seriously and has, therefore, adopted the following principles for the use of AI within our products and services:
- Human Oversight: We prioritize active human oversight of AI systems, including their training and ongoing operation.
- Elevating, Not Replacing: Our philosophy is that AI should augment, not replace, human judgment.
- Fairness: We are committed to actively detecting, preventing, monitoring, and correcting instances of bias.
- Collaboration: We believe in working closely with our customers to identify, build, implement, and refine the most beneficial AI use cases.
- Privacy, Security, and Accessibility: We ensure that the AI systems we develop or use adhere to our privacy, security, accessibility, and compliance policies and procedures.
A human-in-the-loop (HITL) approach is central to following these principles and maximizing AI’s impact. The HITL approach relies on human involvement to:
- Select the most appropriate source data and AI models
- Protect users in real time from bias, inaccuracies, errors, and access issues by identifying and addressing AI malfunctions as they occur
- Improve AI model performance and accuracy through ongoing human feedback
- Audit and trace underlying data used to produce AI results
AI-supported proctoring is a perfect example of why HITL is essential. A lack of adequate human participation before, during, and after a test can put an organization’s users and reputation at risk.
The following are examples of HITL applied to AI-supported proctoring:
- Human subject matter experts select the most appropriate AI models and evaluate the integrity of the source data used for initial training.
- Live human proctors use controls to detect and address AI anomalies during exams, assisting users in real time and providing valuable feedback to improve AI models.
- Live human proctors review potential incidents flagged by AI systems and use their judgment to determine whether an incident report is warranted.
- To help eliminate false positives, proctoring managers audit AI-generated flags in incident reports before submitting them to test owners.
- Subject matter experts review AI-model results at an aggregate level to investigate potential bias, inconsistencies, or unexpected outcomes.
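To make the review flow above concrete, here is a minimal, hypothetical sketch of how an AI-flagged incident might pass through human checkpoints before reaching a report. The names (`Flag`, `proctor_review`, `manager_audit`) are illustrative only and do not describe Meazure Learning's actual systems.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    """A hypothetical AI-generated proctoring flag."""
    event: str                               # e.g., "second person detected"
    ai_confidence: float                     # model score in [0, 1]
    proctor_confirmed: Optional[bool] = None # None until a human reviews it

def proctor_review(flag: Flag, confirmed: bool) -> Flag:
    """A live proctor uses judgment to confirm or dismiss an AI flag."""
    flag.proctor_confirmed = confirmed
    return flag

def manager_audit(flags: List[Flag]) -> List[Flag]:
    """A proctoring manager audits flags before report submission,
    excluding anything a human proctor did not confirm."""
    return [f for f in flags if f.proctor_confirmed is True]

# Only the human-confirmed flag survives the audit; the dismissed
# flag (a likely false positive) never reaches the test owner.
flags = [
    proctor_review(Flag("second person detected", 0.91), confirmed=True),
    proctor_review(Flag("brief glance away from screen", 0.55), confirmed=False),
]
report = manager_audit(flags)
```

The key design point the sketch illustrates is that no AI flag reaches a report on model confidence alone: a human decision is a required step at each stage.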
Test development and psychometrics present another opportunity for AI to improve efficiency and effectiveness, but it is crucial to consider the potential challenges and ethical implications. This is another area where HITL is critical. Read our article titled “GenAI in Credentialing Assessments: Revolutionary or Just Another Thing to Manage?” to learn more.
At Meazure Learning, we are committed to using AI responsibly to advance our customers’ goals. Like any other rapidly developing technology, AI systems have benefits and potentially negative side effects. We are excited to be partnering with our customers to apply AI in situations where the benefits clearly outweigh the risks.
For a brief overview of the principles outlined above as well as a curated list of resources on AI in education and assessment, see “Embracing Responsible AI: Our Approach and Educational Resources.”