
The Agentic Era: Education Shifts from Q&A to Collaboration

Introduction

Ask any student what they do with AI, and nine out of ten will say the same thing: "I ask it questions." Stuck on homework? Ask AI. Don't understand a formula? Ask AI. No ideas for an essay? Still — ask AI.

This isn't surprising. For three years, our interaction with AI has been dominated by chat windows. Open ChatGPT, Claude, or Gemini, type a question, get an answer. That's been the full extent of most people's understanding of "using AI."

But the AI world of 2026 has undergone a fundamental shift. Ethan Mollick, a professor at Wharton, argued in a late-March article that the chatbot interface itself is the biggest obstacle preventing us from fully unleashing AI's capabilities.

From Chat to Agent: A Paradigm Shift

Mollick breaks down today's AI ecosystem into three layers: Models, Apps, and Harnesses.

Models are the AI brains — GPT-5.2, Claude Opus 4.6, Gemini 3. Apps are the products you actually use — chatgpt.com, claude.ai. And Harnesses are the critical piece — systems that let AI use tools, take actions, and autonomously complete multi-step tasks.

The same Claude Opus 4.6 behaves completely differently in a chat window versus Claude Code. In a chat window, it gives you a text response. In Claude Code, it can autonomously research, write, and test code for hours.

This is the core problem facing AI education in 2026: we're still using 2024-era chat habits to work with 2026-era agentic tools.

The Cognitive Tax of Chat Windows

Mollick cited new research in which financial professionals used GPT-4o for complex valuation tasks while researchers measured their cognitive load turn by turn. The findings are striking: while AI did boost productivity, the chat interface itself imposed a significant cognitive cost. AI overwhelmed users with walls of text, volunteered tangential suggestions nobody asked for, and once a conversation got messy, it stayed messy.

The people hurt most were less experienced workers — exactly those who could benefit most from AI.

Translate this to education: when students turn to chat-based AI for help, they face the same trap. AI's verbose responses, scattered suggestions, and unstructured information leave struggling students more confused than before.

Education's Harness Problem

So what does an education-specific harness look like?

Programming already has mature answers — Claude Code, OpenAI Codex, and similar tools provide complete agentic workflows for developers. But education?

A few attempts are worth noting: Khanmigo, Khan Academy's AI tutor, tries to constrain chat-based AI within educational scenarios, but it remains fundamentally a chat interface. Google's NotebookLM lets students upload sources and research within them — closer to the harness concept. Anthropic's new Claude Dispatch lets you message a desktop AI agent from your phone to complete complex tasks autonomously — imagine students using it to manage long-term projects.

But none of these go far enough. Education's harness shouldn't be a repurposed general-purpose tool. It needs to be designed from the ground up for learning.

Recommendations for Educators

1. Distinguish between "Q&A" and "collaboration" modes. Asking AI a question is consumption. Delegating a project to an AI agent is creation.

2. Teach students to choose the right harness. Note-taking with NotebookLM, coding with Claude Code, project planning with dedicated agentic tools — different tasks need different interfaces.

3. Mind the cognitive load. Research shows chat interfaces confuse novices. Before introducing AI in classrooms, teach students how to interact efficiently — don't just tell them to "ask AI."

4. Embrace agents, don't ban them. Banning AI is no longer realistic. Instead, teach students to manage AI agents like a team — assign tasks, review outputs, iterate.

Conclusion

The next frontier of AI education isn't whether to use AI, but how to use it well. The shift from chat windows to agentic tools isn't just a UI update — it's a revolution in how we interact with intelligence. When students stop merely asking AI questions and start collaborating with AI agents to tackle complex challenges, genuine AI literacy begins.
