Giving AI a Job Interview: Why Traditional Testing Is Failing
Introduction: When AI Test Prep Surpasses Humans
In 2023, GPT-4 scored higher than roughly 90% of human test-takers on the bar exam. Yet when researchers asked it to handle realistic client consultations, its performance fell far short of expectations. This gap reveals a critical oversight: we are evaluating AI the wrong way.
Professor Ethan Mollick of the Wharton School offers a sharp observation: most AI benchmarks are like giving job candidates a standardized test, when true capability only emerges during the job interview.
Analysis: Three Blind Spots in Traditional AI Testing
1. Data Contamination: AI Is Memorizing Answers
Mainstream benchmarks such as MMLU-Pro and GPQA have had their questions and answers publicly available for years. Many models encountered these questions during training, so a high score demonstrates memorization, not capability.
Worse still, some test questions contain errors. Mollick notes that MMLU-Pro includes items such as "What is the approximate mean cranial capacity of Homo erectus?", a question even human experts might struggle to answer accurately.
2. Score Inflation: What Does 1% Improvement Mean?
When an AI's score improves from 84% to 85% on a test, is that a breakthrough or statistical noise? We lack calibration: we do not know what difference in real capability a given gap in scores represents.
3. Context Disconnect: Exam Champions, Real-World Novices
An AI might excel at SWE-bench coding tests yet fail to understand a vague real-world requirements document. It might pass medical exams but freeze when facing complex patient cases.
Case Study: From Taking Tests to Doing Work
Mollick suggests adopting a job-interview-style evaluation: give the AI a real task and observe how it completes it.
Traditional test asks: "What is the correct syntax for sorting a list in Python?"
Real task asks: "Help me organize this student grade data, identify the 10 students who improved the most, and generate a visualization report."
The latter tests not just syntax knowledge but also: requirement comprehension, data cleaning, logical reasoning, tool selection, and result presentation—the integrated skills the real world demands.
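To make the contrast concrete, here is a minimal sketch of the "real task" half in plain Python. The data and field names (`name`, `midterm`, `final`) are hypothetical stand-ins for whatever the actual grade file contains, and the visualization step is omitted; the point is that the task requires cleaning incomplete records and ranking by improvement, not just recalling `sorted()` syntax.

```python
def top_improved(records, n=10):
    """Return the n students with the largest score improvement."""
    # Data cleaning: drop records missing either score.
    cleaned = [
        r for r in records
        if r.get("midterm") is not None and r.get("final") is not None
    ]
    # Rank by improvement (final minus midterm), largest first.
    ranked = sorted(cleaned, key=lambda r: r["final"] - r["midterm"], reverse=True)
    return [(r["name"], r["final"] - r["midterm"]) for r in ranked[:n]]

# Hypothetical grade data, including one incomplete record.
grades = [
    {"name": "Ana",  "midterm": 62, "final": 88},
    {"name": "Ben",  "midterm": 75, "final": 74},
    {"name": "Caro", "midterm": 50, "final": 81},
    {"name": "Dee",  "midterm": 90, "final": None},  # dropped during cleaning
]

for name, delta in top_improved(grades, n=3):
    print(f"{name}: {delta:+d}")
```

Even this toy version forces the decisions the syntax quiz never touches: what counts as "improvement", what to do with incomplete records, and how to present the result.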
Recommendations: How Educators Should Redesign AI Assessment
For Students: From Can Use to Can Verify
Do not settle for AI-generated answers; learn to question and verify:
- Ask AI to explain its reasoning process
- Request information sources
- Cross-verify critical conclusions with different AIs
- Test its performance in edge cases
For Teachers: Design Real Task Assessments
Rather than testing whether students remember a specific AI feature, design open-ended tasks:
- Use AI to assist in completing a market research report
- Have AI help you analyze the argumentative flaws in this paper
- Design an AI workflow to automate class attendance tracking
The evaluation criterion should be not "what tools were used" but "what problems were solved."
For Administrators: Build AI Capability Frameworks
Establish AI capability assessment frameworks for your teams:
- Foundation: Can they accurately describe requirements?
- Intermediate: Can they decompose complex tasks?
- Advanced: Can they verify and iterate on AI outputs?
Conclusion: The End of Testing, The Beginning of Practice
Mollick's core insight is simple: the best way to evaluate AI is to have it do real work.
The implications for education are profound. When our students leave school, they face not standardized tests but fuzzy, complex, uncertain real-world problems.
Teaching them how to give AI a job interview—asking good questions, verifying answers, iterating improvements—is more valuable than teaching them any single tool.
After all, in the AI era, the ability to ask the right questions matters more than knowing the right answers.

