Why Does AI Make You More Tired? Interface Design Is Stealing Your Child's Attention
A recent study of financial professionals turned up something unexpected: when people used AI for complex tasks, their cognitive load actually increased.
Imagine asking AI a question and receiving five paragraphs with the answer buried somewhere inside, plus three suggestions for topics you never asked about. The conversation gets messier, the AI gets more "helpful," and you get more confused. This isn't about AI being unintelligent—it's about interface design failing us.
The Cognitive Tax of Chatbots
Chatbot interfaces have a fatal flaw: they assume all work can happen through conversation. But most knowledge work requires structured thinking, multi-step operations, and persistent state tracking.
Research shows that when conversations become chaotic, both sides compound the problem. The AI, optimized to be helpful, mirrors back every unstructured thought the user expresses. The user, overwhelmed, lacks the mental bandwidth to reorganize. Those hurt most are less experienced workers—the very people who could benefit most from AI assistance.
This phenomenon is called the "cognitive tax": the mental resources consumed by the interface itself, which offset the intelligence gains AI provides.
Case Studies
Negative Case: Traditional Chatbot

Alex uses ChatGPT to prepare a history report. He enters the topic; the AI returns a 2,000-word overview. Alex extracts key points, asks follow-up questions, and receives more lengthy responses with "helpful" recommendations for three related topics. Three hours later, Alex has 12 browser tabs open, notes scattered across three documents, and a report that hasn't been started.
Positive Case: Dedicated Workspace Interface

Jordan uses NotebookLM for the same assignment. PDFs, web pages, and notes are imported into a unified space. The AI automatically organizes connections between sources, generating summaries and Q&A cards. Jordan can query specific passages anytime, and the AI responds precisely and concisely. Two hours later, the report structure is clear and the materials are organized.
An Educator's Guide to Interface Selection
1. Match Tools to Task Types
- Creative brainstorming: Chatbots work well
- Deep research: Choose dedicated tools like NotebookLM or Perplexity
- Programming education: Use IDE-integrated tools like Claude Code or GitHub Copilot
- Visual design: Explore AI-native interfaces like Google Stitch
2. Teach "Interface Literacy"
Don't just teach students how to use AI—teach them which interface to use when. This is like teaching when to do mental math versus using a calculator.
3. Beware the "One-Size-Fits-All" Trap
Tools claiming to be "one chatbot for everything" tend to deliver mediocre results in every scenario. Real efficiency comes from combining specialized tools.
4. Monitor Cognitive Load Indicators
If students become more anxious or confused after using AI, the problem is usually not the AI but an interface mismatch. Switching tools beats persisting with the wrong one.
Conclusion
In the AI era, choosing the right interface means choosing the right way to learn. When we use chatbots as the only entry point, we inadvertently train students to accept fragmented, unstructured thinking patterns.
Education isn't about information acquisition—it's about building knowledge systems. This requires providing students with tool interfaces that support deep thinking, not letting them get lost in endless conversations.

