Each year, the I.C.E. Exchange provides a unique opportunity for members of the credentialing community to connect with peers and deepen their expertise. Hosted by the Institute for Credentialing Excellence (I.C.E.) in Miami Beach, this year’s Exchange brought together credentialing and assessment professionals for a multiday exploration of industry trends and best practices. Key session themes and points of conversation revolved around adopting DEI practices, advancing skill- and performance-based testing, and navigating change management.
Session Spotlight: 10 Keys to a Winning AI Strategy
Of course, the use of AI across the assessment life cycle was also a major talking point at this year’s Exchange. One of the most compelling sessions on the topic was presented by a panel of tech-focused executives, including our own CEO, Tim McClinton; Vaital’s CEO, David Yunger; and ETS’s CSO, Wallace Dalrymple. Titled “Top 10 Countdown: Winning AI Strategies to Grow Your Testing Program,” this session covered:
- Why it’s important to build a business strategy around AI use
- How to thoughtfully incorporate AI into a testing program
- What it means to “fail fast” with AI
- How to use AI to improve test security in ways that go beyond proctoring
Let’s explore the session speakers’ top 10 AI strategies for program growth.
Strategy #1: Seize the Opportunity
AI is everywhere—and it’s not slowing down anytime soon. But here’s the surprising part: Fewer than 15% of organizations have a formal AI strategy. For credentialing organizations, this represents a “once-in-a-generation opportunity” to use AI technology to amplify human expertise.
However, to make the most of its potential, organizations must develop a battle plan—one that defines where AI can add the most value, how it can set the organization up for success, and where it can integrate into existing workflows.
Strategy #2: Level Up Your AI Capabilities
Today’s competitive edge comes from using AI better than others. This holds true across industries—credentialing included. Quoting economist Richard Baldwin, the speakers reminded attendees that “AI won’t take your job. [But] somebody using [it] will.”
Some examples of how credentialing programs can leverage AI include:
- Automating routine tasks, such as application processing or candidate communication, freeing up resources for strategic growth
- Creating tailored assessments or expanding into new credentialing areas
- Building intelligent applications that bolster test security and adapt to evolving candidate needs
AI can be a catalyst for program growth, but it’s important to take an incremental approach that maximizes ROI while minimizing risks. The speakers recommend focusing on specific, high-impact areas where AI can deliver measurable value over time.
Strategy #3: Think Beyond Item Generation and Proctoring
AI’s potential extends far beyond its current applications. Key opportunities include the following:
- Scaling credential administration: AI can streamline credential application processing and scale eligibility validation for education, professional experience, and continuing education units.
- Accelerating content creation: AI has the potential to speed up item generation by a factor of 12–15, and it can help expand localization efforts to meet diverse language needs.
- Making operations more efficient: Using AI to automate candidate email responses can boost productivity tenfold.
- Expanding security measures: AI can significantly bolster exam security in both online and in-person test environments.
Strategy #4: Start Small. Start Now.
When it comes to AI, progress doesn’t have to be overwhelming; it just has to begin. The key is to start with small, low-risk projects that encourage experimentation without overextending resources. For credentialing programs, this might mean automating a single process to build confidence and demonstrate value—like using AI to send personalized recertification reminders based on candidate activity and timelines.
Strategy #5: Fail Fast and Fail Often
Starting small creates space for experimentation, but true AI innovation requires a willingness to take risks and embrace failure. Credentialing programs can stay relevant and competitive by fostering a culture where teams are encouraged to test bold ideas, make mistakes, and iterate quickly.
This type of mindset ensures faster progress toward market-ready solutions. But don’t mistake “faster” for “smarter” or “better.” To effectively execute this strategy, program leaders must create a controlled environment where experimentation can take place without jeopardizing broader systems. For example, a credentialing organization might pilot an AI-powered tool to provide justifications during the item-writing process, allowing it to evaluate how AI-generated justifications affect item quality and writer efficiency on a smaller scale before incorporating the tool into broader development workflows.
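One way to keep a pilot like this controlled is to require explicit human sign-off before any AI-generated justification enters the item bank. The following is a minimal sketch under stated assumptions: `generate_justification` is a stand-in for a real AI service, and the `Item` fields and `pilot_review` helper are hypothetical names invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    stem: str
    correct_answer: str
    justification: str = ""
    human_approved: bool = False  # AI output is unusable until a reviewer approves it

def generate_justification(item: Item) -> str:
    # Stand-in for a real AI call; a pilot would swap in the actual service here.
    return f"'{item.correct_answer}' is correct because it directly answers: {item.stem}"

def pilot_review(items: list[Item], approve) -> list[Item]:
    """Attach AI justifications, then keep only items a human reviewer approves."""
    for item in items:
        item.justification = generate_justification(item)
        item.human_approved = approve(item)  # reviewer decision, never automatic
    return [i for i in items if i.human_approved]

drafts = [Item("What does CE stand for?", "Continuing Education"),
          Item("2 + 2 = ?", "5")]
# Toy reviewer: rejects the item with the wrong key.
approved = pilot_review(drafts, approve=lambda i: i.correct_answer != "5")
print(len(approved))  # only reviewer-approved items move forward
```

Because rejected items never leave the pilot, a bad experiment fails fast without touching the broader item-development workflow.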
Strategy #6: Focus on Human-Centric AI Tools
AI’s growing capabilities present both opportunities and challenges for credentialing. As the session speakers noted, “AI can now pass all our exams with flying colors.” Organizations should “fight fire with fire” and use AI to enhance test security and build anti-bias measures. However, human oversight must be at the core of this approach. By keeping humans in control and using AI as a supporting tool, credentialing organizations can preserve trust and fairness as they mitigate the threats posed by technology’s growing capabilities.
Strategy #7: Mitigate AI Risk
AI’s power comes with vulnerabilities. But managing those vulnerabilities isn’t a one-time task; it’s a continuous process of monitoring, learning, and adapting. The speakers recommend the following approach:
- Start with a clear understanding of AI’s risks. Identifying potential risks—such as security threats, ethical issues, and privacy concerns—is central to effective risk management. Otherwise, credentialing organizations may implement AI tools without safeguards or oversight in place.
- Establish governance and accountability frameworks. Organizations need structured policies and governance frameworks to ensure compliance, accountability, and ethical decision-making while using AI technology.
- Tackle bias and discrimination head-on. Without human oversight at every stage, AI tools can easily reinforce existing inequities in credentialing assessments, leaving certain candidate groups at a disadvantage.
- Build trust through transparency. Users and stakeholders need to understand how AI systems work—what data is being used, how decisions are made, and where limitations exist. Make this information more accessible to reduce fear and uncertainty.
Strategy #8: Understand the Global Regulatory Landscape
Staying informed about AI regulations is challenging given the wide variation in regional and country-specific approaches. But an effective action plan—where regulatory awareness is treated as a recurring activity—can make it easier.
Start by identifying trusted sources for timely insights into AI policies that directly impact credentialing. These policies may cover areas such as data privacy, test design, and security compliance. Then, tap into your professional network of industry associations and regulatory experts for updates on emerging rules. The session speakers emphasized that this can help you adjust your operations and maintain stakeholder trust.
Strategy #9: Develop and Adhere to Core Tenets
When using AI technology, the speakers recommend continually asking: Is it legal? Is it ethical? Is it moral? To answer these questions, credentialing organizations should prioritize three core tenets—transparency, equity, and human oversight.
Adhering to these tenets can lead to positive outcomes. For instance, transparency builds trust and confidence in AI systems, which encourages wider adoption. And removing bias ensures credentialing processes are equitable for everyone involved. Finally, keeping humans at the center of it all helps align AI-driven decisions with organizational goals and high-impact ROI opportunities.
Strategy #10: Have an AI Strategy
A strong AI strategy addresses organizational purpose and alignment, not just technology’s role. Credentialing organizations with successful strategies ask questions like:
- How can AI enhance our program goals while staying aligned with organizational priorities?
- What problem are we solving, and why does it matter?
- Where is our data coming from, and how is it being collected and stored?
- Who needs to be involved in decisions about AI implementation?
- What are the risks with AI usage, and how will they be mitigated?
- How will we measure progress and impact?
By addressing questions like these, credentialing programs can build an AI strategy that’s intentional and sustainable.
Conclusion
The 2024 I.C.E. Exchange explored many important topics for the credentialing community—with AI being of particular interest. From developing ethical frameworks to integrating human-led AI tools, the insights shared at this year’s conference offered a roadmap for programs looking to adapt and thrive in a rapidly evolving credentialing landscape.
To learn more about the use of AI tools in professional credentialing, check out our white paper “Generative AI in Test Development and Psychometrics: An Analytical Review of Its Potential and Limitations.”