In 1939 at the University of California, Berkeley, a PhD student arrives late to a statistics class. On the blackboard: two problems. He copies them, assuming they are homework. He works relentlessly and submits solutions. Plot twist: they were not homework at all—Jerzy Neyman had written down open problems. The student’s name was George Dantzig. (Stanford University News) (UC Berkeley Engineering)
The story later turned into a widely repeated legend—sometimes exaggerated, sometimes relocated to other universities. Strong sources still converge on the core: he was late, the problems were framed like “homework,” and they were genuinely unsolved at the time. (The College Mathematics Journal) (Stanford University News) (Snopes)
The real lesson: labels change the brain
The useful part is not “Dantzig was superhuman.” The fascinating part is a very human mechanism: framing.
- When the brain reads "homework," it opens a file labeled "hard but doable."
- When it reads "unsolved," it opens another file: "social danger + failure risk + wasted effort."
Same content. Two psychological realities.
In my book, chapter 6, I explain how framing locks perception, narrows options, and triggers automatic responses.
Dantzig benefited from a context “bug”: he didn’t know the problems were supposed to be impossible. So he applied normal effort to something everyone else had mentally placed behind glass.
The four invisible brakes that kill innovation before the idea
In teams, most projects die before testing—not due to lack of ideas, but due to human pre-filters.
1) Framing bias
If an idea is framed as “out of scope” or “not a priority,” it gets processed as an annoyance, not an opportunity.
2) Status quo bias
When nobody has solved it before, the brain translates that as: failure is the norm.
3) Conformity bias
The group builds a ceiling. Consensus becomes a boundary. Yet consensus often reflects social comfort, not truth.
4) Impostor syndrome
It acts like a circuit breaker: "I'm not qualified to tackle this." The worst part: you don't speak up, so you never get real feedback.
Dantzig bypassed these brakes by not knowing the “status” of the problems. That’s why the story matters: it shows the first enemy is not lack of ideas, but self-censorship.
Turning “impossible” into a protocol, not a slogan
The goal is to build conditions where “impossible” becomes a testable hypothesis again.
A simple Dantzig-inspired method:
Step 1 — Rename the object
Replace “it won’t work” with: “under what conditions would it work?”
Step 2 — Shrink the risk
Big ideas trigger fear. Break them into micro-experiments: a prototype, a user test, a limited pilot. Many high-performing innovation systems treat uncertainty as a sequence of small bets, not a leap of faith. (National Academies)
Step 3 — De-sacralize judgment
Don’t evaluate the person—evaluate the hypothesis.
Step 4 — Make doubt socially safe
If error is punished, the team optimizes caution. If learning is rewarded, the team optimizes speed of testing.
The question that matters
In your environment, which sentence acts like an invisible ban right before the idea?
References
(Stanford University News) = https://news.stanford.edu/stories/2005/05/george-b-dantzig-operations-research-professor-dies-90
(UC Berkeley Engineering) = https://engineering.berkeley.edu/george-dantzig-operations-research-phenom/
(The College Mathematics Journal) = https://www.cs.umd.edu/~gasarch/BLOGPAPERS/Albers-InterviewGeorgeB-1986.pdf
(National Academies) = https://www.nationalacademies.org/read/12473/chapter/18
(Snopes) = https://www.snopes.com/fact-check/the-unsolvable-math-problem/