The Eureka Experience #13 - Should learning games be easy or hard? Does it even matter?
Is AI making us stupid? Or smarter?
Hello peeps 🔥
Let me guess the feedback you get every time you propose a learning game.
One person says: “Make it easier. People are busy.”
Another person says: “Make it harder. Adults need challenge.”
A third person says: “Can we add points and badges?” (Yes. And we can also add a clown horn sound every time they click “Next.” Same value.)
Here’s the annoying truth: “easy vs hard” is the wrong argument.
The real question is: Does the experience create uncertainty, then help the learner reduce it (fast and clearly) so they feel themselves getting better than they expected?
That’s predictive processing in games. And it explains why people happily:
die 47 times to the same boss, then call it “relaxing”
stare at an idle game watching numbers go up like it’s a spiritual practice
And yes: it has direct implications for learning games, especially if you want something better than “content dump, but gamified.”
The prediction machine vs. the prediction gym (the brain vs. the game)
Predictive processing is a big, slightly audacious idea from cognitive science: your brain isn’t passively “taking in” the world. It’s constantly predicting what’s about to happen and correcting itself when it’s wrong.
When reality doesn’t match the prediction, you get prediction error. That “error signal” pulls attention and drives learning, because your brain updates its model of the world.
Now let’s take that into games.
Sebastian Deterding and colleagues frame game enjoyment like this: games are fun when they let you reduce uncertainty “surprisingly efficiently”. Basically, when you do better than you expected, and you can feel the progress.
So no, the core dopamine story isn’t “winning feels good.”
It’s: “I’m getting less wrong faster than I thought I would.”
That’s the hook. It reminds me of how learners approach game-based learning board games like they’re just another Monopoly, then get surprised when the game isn’t what they expected. Works like a charm every time.
Why hard games work: failure isn’t the problem, opaque failure is!
Souls-likes (and other “masocore” genres) look like they should be miserable. You fail constantly. You lose resources. You repeat the same walk of shame back to the boss like a ritual.
Predictive processing explains why people still love it:
Expectations are set low. You go in expecting to die. That changes how “bad” failure feels.
Each death becomes a test. “What did I learn about the pattern?” “What timing did I miss?”
The win hits different. When you finally succeed, the gap between expectation and outcome is huge, so the emotional payoff spikes.
And we’re not just romanticizing pain here.
A CHI study on Dark Souls III found that high challenge and even avatar death can enable positive experiences, especially when players interpret negative moments as part of learning and eventual mastery.
So the design lesson is brutally practical:
Hard is fine. “Hard but unclear” is what kills engagement.
If the learner can’t tell why they failed, they can’t update their mental model.
No model update = no progress = rage quit (or “this program/training/learning game is stupid”).
Why easy games work, then: “guaranteed success” isn’t boring if uncertainty lives somewhere else
Now the other extreme: idle/clicker games.
They “play themselves.” Success is basically inevitable. And yet people lose hours to them, then act confused, like it was an accident.
Predictive processing still fits, because these games relocate uncertainty away from execution skill.
Deterding’s account points out that even low-challenge genres can create repeating moments of uncertainty and resolution, often through novelty, unlocking, and system understanding rather than reflex mastery.
And research on idle games backs up that they’re not just “nothing happens”: one CHI paper examined 66 idle games and built a taxonomy of how they structure interaction, progress, rewards, and waiting. That’s a polite academic way of saying: this genre has actual design mechanics, not just vibes (God I hate that word).
Then there’s the sneaky psychological trick idle games use: exponential growth.
Humans chronically underestimate exponential change. Classic work shows people massively under-predict exponential series when asked to extrapolate intuitively.
So when an idle game ramps growth, your brain keeps getting surprised: “Wait… I’m making THAT much now?”
That’s “better than expected” on loop, without requiring twitch skills.
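A toy sketch of that trick, with made-up numbers: an idle game that doubles income on a fixed schedule versus a player extrapolating linearly from the early ticks. The gap between the guess and reality keeps widening, so the surprise keeps firing.

```python
# Toy illustration (hypothetical numbers): an idle game that doubles
# income every 10 ticks vs. a roughly linear intuitive extrapolation.

def actual_income(tick, base=1.0, doubling_every=10):
    """Exponential ramp typical of idle games."""
    return base * 2 ** (tick / doubling_every)

def intuitive_guess(tick, base=1.0):
    """Linear extrapolation from the slope seen in the first 10 ticks."""
    early_slope = actual_income(10) - actual_income(0)  # +1 per 10 ticks early on
    return base + early_slope * (tick / 10)

for tick in (10, 30, 60):
    print(f"tick {tick}: guessed {intuitive_guess(tick):.0f}, "
          f"actually {actual_income(tick):.0f}")
```

At tick 10 the guess is spot on; by tick 60 the guess is 7 while reality is 64. That repeated “wait, THAT much?” moment is the prediction error doing the engagement work.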
In other words: easy games still feed the prediction engine. They just feed it with system surprises instead of execution challenges.
“Don’t we already have Flow & SDT?” Yes! And they’re not enough.
Flow says people enjoy a balance of challenge and skill, with clear goals and feedback. Useful, but it struggles to explain why people enjoy extreme challenge or near-zero challenge.
Self-Determination Theory (SDT) emphasizes competence, autonomy, and relatedness. Also useful (especially for learning design) but “competence” often gets interpreted as “let them feel successful,” which can accidentally turn into training that never risks uncertainty.
Predictive processing doesn’t replace these. It tightens the mechanism:
Competence isn’t just “I’m good.”
It’s “I’m getting better faster than I expected.”
That small shift changes how you design difficulty, feedback, and pacing.
What does this mean for learning games?
Most corporate learning games fail for one simple reason:
They don’t create learnable uncertainty.
They either:
spoon-feed certainty (so nothing needs predicting), or
drown learners in chaos (so nothing can be predicted)
A good learning game sits in a tighter window:
uncertainty → action → clear feedback → model update → next uncertainty
And yes, Greg Costikyan basically called this years ago from a design angle: games hold interest through uncertainty, and the struggle to master it is central to their appeal.
So here’s how to build learning games that don’t feel like “a glorified quiz with a Super Mario theme.”
1) Design for uncertain success, not “hard”
Give learners decisions where the outcome isn’t obvious, but it’s not random either.
They should be able to say: “I got it wrong, and I know what I’d change.”
That’s the difference between:
a scenario that informs/teaches judgment
and a scenario that teaches “guess what the designer wanted”
2) Make feedback discriminable
Feedback must tell the learner what changed in the system because of their choice.
Not five animations, three sound effects, and a confetti explosion that says nothing.
If the learner can’t read the signal, they can’t update the model. That is what we call autonomy.
3) Control expectations on purpose
Soulslike players tolerate failure because expectations are calibrated.
In learning, we do the opposite: we oversell “this will be easy,” then act shocked when people disengage.
Set expectations like an adult:
“This will feel uncertain. That’s the point. You’ll get tight feedback.”
4) Put uncertainty in the right place for the skill
If the target skill is execution (e.g., a procedure), uncertainty should live in sequencing and timing.
If the target skill is judgment (e.g., customer escalation), uncertainty should live in tradeoffs, incomplete information, and consequences.
Don’t put uncertainty in irrelevant trivia. That’s not challenge. That’s noise!
For example
Imagine a compliance “game” where learners click through rules and collect points.
That’s certainty. The brain doesn’t need to predict much. It just needs stamina (& prayers).
Now imagine the same topic as a short scenario loop:
You’re a manager. A vendor asks for a favor. The situation is ambiguous on purpose.
You choose an action.
The system responds immediately: downstream risk, reputational exposure, policy breach likelihood.
You get one clean explanation of what signal you missed.
Then you run it again with a new twist.
That’s predictive processing-friendly design:
uncertainty, resolution, and learning progress you can feel, reason about, and act on.
And that’s why it actually trains judgment instead of training clicking.
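The scenario loop above can be sketched as data plus one feedback function. Everything here (the vendor situation, the field names, the risk labels) is invented for illustration; the point is the shape: each choice returns an immediate, discriminable system response plus one clean explanation of the missed signal.

```python
# Hypothetical sketch of the scenario loop: ambiguous situation -> choice ->
# immediate system response -> one clean explanation of the missed signal.
# All scenario content and field names are made up for illustration.

SCENARIO = {
    "prompt": "A vendor offers you free event tickets during contract renewal.",
    "choices": {
        "accept": {
            "risk": "high",
            "signal_missed": "Timing: gifts during an active negotiation "
                             "create conflict-of-interest exposure.",
        },
        "decline_and_log": {
            "risk": "low",
            "signal_missed": None,  # nothing missed: the timing signal was read
        },
    },
}

def play_round(choice):
    """Return immediate, discriminable feedback for one decision."""
    outcome = SCENARIO["choices"][choice]
    feedback = f"Risk level: {outcome['risk']}."
    if outcome["signal_missed"]:
        feedback += f" Signal you missed: {outcome['signal_missed']}"
    return feedback

print(play_round("accept"))
```

Running it again “with a new twist” just means swapping in a new scenario dict; the loop (and the learner’s model update) stays the same.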
So let’s stop arguing “easy vs hard.”
Start asking:
Where is the uncertainty?
Can the learner reduce it through meaningful action?
Is the feedback clear enough to update their mental model?
Do they feel “better than expected” progress?
Because that’s the engine behind both the suffering of Soulslikes and the weird calm of idle games, and it’s the same engine you want powering your learning games.
Science Versus pits the AI panic (“ChatGPT makes you stupider”) against the AI hype (“it boosts productivity and science”), then cuts through the noise with a simple frame: LLMs can degrade learning when they turn research into a passive, one-shot summary.
They cite a large study (~10,000 adults) where people using ChatGPT (vs. old-school Google links) produced advice that was more generic and less factual, felt like they learned less, and had their advice rated as less helpful by others. They also mention early, not-yet-settled evidence (small preprints) suggesting lower brain engagement/connectivity and weaker recall when people outsource writing to ChatGPT, plus a real-world “deskilling” example where clinicians using AI support performed worse once the tool was removed.
Tune in to what we are jamming to this week as we work and design.
This week it is all about good old 80s & 90s tracks. Here is an oldie but a goldie by Whitney Houston.
Here is our weekly comics & games reference, because even the most serious topics deserve a little humor (and maybe a poorly-drawn stick figure or two).
Chronicles of Crime is a co-op investigation game where you and your team act like detectives… except the “case file” lives in an app. You scan QR-coded location cards, suspects, and evidence, explore 360° scenes for clues, interview people, link items on an evidence board, and burn time with every move until you’re confident enough to close the case and answer the final questions.
It’s basically a sensemaking simulator disguised as a board game. And it’s about how to turn “information” into decisions under uncertainty, which is, awkwardly, what learning designers keep saying they want.
What Learning Designers Can Learn from Chronicles of Crime
1) It’s not about “content.” It’s about the investigation loop.
Chronicles of Crime doesn’t reward you for knowing facts. It rewards you for running a tight loop: observe → hypothesize → test → update. That’s what real performance looks like in the workplace, too. Most training stops at “here’s the info,” then acts shocked when people can’t diagnose a situation, ask the right questions, or connect signals.
2) Time is a constraint, not a nuisance. That’s why decisions matter.
Every action costs time. That single mechanic forces prioritization: Do we chase this lead or ignore it? Do we travel or call forensics? Do we interrogate more or commit? In L&D, we design like time is infinite, then complain learners “don’t apply.” This game bakes in a constraint so choices have weight, exactly like work.
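That mechanic fits in a few lines. A toy sketch (the action list and minute costs are assumptions, not the game’s actual values): a fixed time budget turns every investigative move into a prioritization decision, because the budget can refuse you.

```python
# Toy sketch (assumed costs, in minutes) of the "every action costs time"
# mechanic: a fixed budget makes each move a prioritization decision.

ACTION_COSTS = {"interrogate": 20, "travel": 15, "call_forensics": 10}

def take_action(budget, action):
    """Spend time on an action; refuse it if the budget can't cover it."""
    cost = ACTION_COSTS[action]
    if cost > budget:
        raise ValueError(f"Not enough time left for {action!r}")
    return budget - cost

budget = 40
for action in ["interrogate", "call_forensics"]:
    budget = take_action(budget, action)

print(budget)  # 10 minutes left: enough to travel, or time to commit?
```

With 10 minutes left, “travel” (15 minutes) is simply off the table, so the team has to decide whether they’re confident enough to close the case. Constraint creates weight.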
3) Evidence linking beats memory dumps.
The evidence board isn’t decoration. It’s the external brain. Players don’t “remember more,” they structure more, linking people, places, and objects into a story that can be tested. That’s your cue: if your learning solution relies on learners holding everything in their head, you’re designing for failure. Give them a workbench for thinking, not just slides.
4) The game uses progressive disclosure like a professional.
You don’t get the whole case up front. You earn information by asking, scanning, and exploring. That staged reveal creates controlled uncertainty and curiosity without chaos. Most corporate learning does the opposite: it front-loads everything (“here’s all the policy”), then wonders why engagement dies. Progressive disclosure is not a UX trick. It’s how you design judgment.
5) It separates “collecting info” from “closing the case.”
In Chronicles of Crime, you can keep investigating forever… but the game forces the real skill: deciding when you have enough confidence to commit. That’s a workplace behavior we rarely train: decision thresholds. When do you escalate? When do you stop gathering data? When do you act? Most training teaches knowledge. This teaches readiness to decide.
6) Collaboration is the product—if you design for it.
The best moments happen when the team argues: “This clue matters.” “No, it’s a dead end.” “What assumption are we making?” That’s shared mental model building. But here’s the catch: because it’s app-driven, collaboration can collapse into “whoever is holding the phone is the leader.” Same thing happens in learning when one person becomes the “clicker.” If you’re using this as a reference, design roles, prompts, and turn-taking to keep the thinking distributed.
7) Feedback is immediate and specific. That’s why learning sticks.
Scan a clue → get a response. Ask the wrong question → waste time. Miss something → it shows up later as confusion. That tight feedback loop is what makes uncertainty tolerable. In corporate training, feedback is often delayed, vague, or social (“your manager will coach you”). Translation: no loop, no learning.
8) The ending is a debrief disguised as a quiz.
Closing the case is essentially: “What happened?” “Who did it?” “Why?” That’s not trivia, it’s reconstruction. A good learning game ends the same way: not “pick the right definition,” but “tell me the story you believe and the evidence you used.” That’s how you test mental models, not memory.