The Accidents That Started It All
How WWII plane crashes gave birth to human factors engineering
The planes weren’t supposed to be crashing.
The pilots were trained. Many of them were experienced. The aircraft had passed inspection. The weather was often fine. And yet, again and again, planes were going down—during takeoff, during landing, sometimes moments after leaving the runway.
In the early years of World War II, the U.S. Army Air Forces were losing aircraft at an alarming rate. Not in combat. At home. During routine operations that should have been uneventful.
Young men were dying before they ever saw the enemy. Machines were being destroyed before they could be deployed. The losses were quiet, bureaucratic, and deeply inconvenient to the story the military wanted to tell itself.
The initial explanation was simple and familiar: pilot error.
Pilots, it was assumed, were making mistakes. They were accused—sometimes gently, sometimes not—of being confused, careless, insufficiently disciplined. The logic followed a well-worn path. If accidents were happening, someone must be at fault. If someone was at fault, the solution was training. More drills. More procedures. More responsibility placed squarely on the individual in the cockpit.
The response was swift and confident. Training programs were intensified. Checklists grew longer. Expectations hardened. The pilots were told, in effect, to try harder.
The crashes continued.
This is usually the point in a story where blame deepens. Where pressure increases. Where those at the bottom of the hierarchy absorb the cost of a problem they did not create.
But at some point—quietly, without fanfare—someone asked a different question.
What if the pilots weren’t the problem?
What if competent, capable, well-trained people were being placed into systems that had already signed off on failure?
This was not a comfortable idea. It disrupted command-and-control thinking. It challenged hierarchy. It implied that responsibility might sit not with the person at the controls, but with the system that shaped their choices long before they ever touched the controls.
It suggested something more unsettling still: that accidents might be designed into the environment itself.
When investigators began looking closely at the aircraft—not just the outcomes, but the conditions—patterns emerged.
Controls that looked nearly identical performed opposite functions. Levers were placed where a pilot might reach instinctively, especially under stress—only to trigger the wrong action. Gauges were crowded, poorly labeled, or positioned outside natural sightlines. Emergency procedures required sequences of steps that were difficult to execute in darkness, under fatigue, or in panic.
None of this was malicious. No one had intended to design a dangerous cockpit.
It was simply assumed that a “good pilot” would adapt.
This assumption runs deep in human culture. Folktales are full of heroes who overcome hostile environments through grit and virtue alone. If the mountain is steep, climb harder. If the task is impossible, prove yourself worthy.
But adaptation has limits.
Under pressure, human beings do not become more precise. They become more habitual. They reach for what looks familiar. They act on patterns learned through repetition. When seconds matter, cognition narrows. Vision tunnels. Fine motor control degrades.
The turning point came when researchers stopped asking why pilots were failing and started asking how the cockpit was teaching them to fail.
In one now-famous case, a pilot retracted the landing gear instead of the flaps during takeoff—a fatal mistake. The two controls were nearly identical in shape, size, and placement. In another incident, a pilot shut down the wrong engine during an emergency because the switches were visually indistinguishable at a glance.
These were not lapses of character. They were predictable outcomes of design.
This is where our first quiet hero enters the story.
Alphonse Chapanis was not a general. He did not command troops or fly missions. He was a psychologist, trained to study perception and behavior, brought into the military not to assign blame but to understand failure.
Chapanis approached the problem differently. Instead of asking what the pilots had done wrong, he asked what the system had demanded of them. Instead of assuming error was a moral flaw, he treated it as data.
What he and others like him discovered was deceptively simple: humans are part of the system. Their strengths and limitations matter. And when a system ignores those realities, it will fail—repeatedly, reliably, and often catastrophically.
This was not a sentimental insight. It did not excuse mistakes. It reframed them.
Out of this realization emerged a new way of thinking about design. Not as aesthetics. Not as preference. But as a form of risk management.
Psychologists, engineers, and physicians began working alongside military designers. They studied perception, attention, memory, and motor control. They observed how people behaved under real conditions—not how manuals claimed they should behave, but how they actually did.
The field that grew out of this work came to be known as human factors.
Its premise was neither kind nor forgiving. It did not assume perfect users. It assumed fatigue. Stress. Distraction. It anticipated error and asked a different question: What would it take to make the right action the easiest one to take?
This way of thinking was not theoretical. It was empirical. The feedback loop was brutal and immediate. If a design was wrong, people died.
When cockpits were redesigned—when controls were differentiated by shape and texture, when critical instruments were grouped logically, when labels were clarified—accident rates dropped.
Not because pilots suddenly became better people.
Because the system stopped setting them up to fail.
What’s striking, in retrospect, is how quickly this lesson fades outside of crisis.
In peacetime. In civilian contexts. In organizations and platforms where consequences are distributed, delayed, or abstracted. Where failure rarely arrives as wreckage on a runway.
We return, almost instinctively, to blaming individuals. We talk about training gaps, compliance failures, and “user error.” We patch procedures instead of redesigning environments. We add warnings instead of removing traps.
And yet the core insight remains unchanged: if a system requires constant vigilance to avoid disaster, it is not a robust system.
The early aircraft investigators weren’t trying to invent a new discipline. They were trying to stop planes from crashing.
What they ended up creating was a foundation for everything that followed—ergonomics, usability, interface design, safety engineering. Fields that would quietly shape how modern systems work, long after the urgency of war faded and the stories of those early crashes were forgotten.
The irony is that these ideas are now treated as optional. As nice-to-haves. As constraints to work around in the name of speed or innovation.
But they were never optional.
They were born out of necessity. Written in wreckage. Proven by survival.
My work traces where those ideas went—how they spread, where they were adopted, and where they were quietly abandoned. It follows the lineage of people like Chapanis, whose work rarely makes headlines but whose fingerprints remain everywhere.
It starts here not because this was the beginning of design, but because it was the moment we learned—clearly and painfully—that systems do not fail because people are inadequate.
They fail because the systems are inadequately designed.
And that lesson, like the accidents that revealed it, has a way of repeating itself.