The Divide: From the Moon Landing to the Post-Truth World

An Interactive Exploration

The Divide

When humanity could reason together — and when it stopped

"We went to the Moon with logic, science, and 400,000 minds working as one."

"Now we can't agree on what's true."

By Stephen F. DeAngelis

Part I

The Triumph of Reason

In 1969, the Apollo program proved what humanity can achieve when truth, precision, and collaboration are non-negotiable.

400,000
Workers collaborating across 20,000 firms and universities
NASA / BBC
145,000
Lines of code running on 72 KB of memory — less than a modern email
MIT Instrumentation Lab
5.6M
Parts in the Saturn V stack, each targeting 99.9% reliability
NASA Engineering
Age 26
Average age of Mission Control engineers during Apollo 11
NASA History
30 sec
Of fuel remaining when Eagle touched down on the Moon
Apollo 11 Flight Journal
650M
People watched the landing — 1 in 5 humans alive on Earth
Wikipedia / Apollo Program

The 1202 Alarm: When Reason Saved the Mission

Minutes before touchdown, Apollo 11's guidance computer flashed an alarm no one had seen in flight — code 1202: executive overflow. The computer was overloaded. The mission hung on a split-second decision.

Guidance officer Steve Bales, age 26, consulted his backroom engineer Jack Garman, age 24, who had a handwritten crib sheet of every possible alarm code. That sheet existed because, eleven days before launch, a simulation supervisor deliberately triggered the same alarm and Bales called abort — the wrong call. Flight Director Gene Kranz then mandated: "You WILL document every single alarm code, and what to do about it."

Garman confirmed the alarm was safe. Bales called "Go." Eagle landed with 30 seconds of fuel remaining. Two engineers under 30, armed with preparation and critical reasoning, saved humanity's greatest achievement.
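Garman's crib sheet was, in essence, a lookup table: alarm code in, verdict and rationale out, with "abort" as the default for anything undocumented. A toy sketch in Python (the codes' descriptions are paraphrased for illustration, not NASA's actual procedures):

```python
# Toy illustration of the crib-sheet idea: a table mapping alarm codes
# to a verdict and rationale, so a go/no-go call takes seconds.
ALARM_CRIB_SHEET = {
    "1201": ("GO", "Executive overflow, no VAC areas; safe if intermittent"),
    "1202": ("GO", "Executive overflow, no core sets; safe if intermittent"),
}

def call_it(alarm_code: str) -> str:
    """Return the flight call for an alarm code, defaulting to NO-GO."""
    verdict, rationale = ALARM_CRIB_SHEET.get(
        alarm_code, ("NO-GO", "Undocumented alarm: abort by default")
    )
    return f"{verdict}: {rationale}"

print(call_it("1202"))  # the call Bales made, in table form
```

The default matters as much as the entries: preparation means deciding in advance what to do when you see something you haven't documented.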

Margaret Hamilton: The Pioneer Who Anticipated Failure

Margaret Hamilton — who coined the term "software engineering" — designed the Apollo guidance software with a revolutionary priority scheduling system. When overloaded, her code didn't crash. It killed lower-priority tasks, saved critical state data, and restarted — so fast that no navigation data was lost.

This "kill and recompute" design is exactly what saved Apollo 11 during the 1202 alarm. Hamilton didn't just write code. She anticipated human error, demanded rigor when her colleagues said "astronauts never make mistakes," and built systems that were transparent, explainable, and resilient.
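The core of the idea can be sketched in a few lines. This is a minimal illustration of priority-based load shedding, not the actual AGC Executive; the capacity limit, task names, and priorities are invented for the example:

```python
CAPACITY = 3  # hypothetical slot limit standing in for the AGC's core sets

def run_cycle(tasks, state):
    """tasks: list of (priority, name) pairs; lower number = more critical.
    Returns (tasks run, tasks shed on overload)."""
    shed = []
    if len(tasks) > CAPACITY:        # overload detected (the "1202" condition)
        tasks = sorted(tasks)        # order by priority
        tasks, shed = tasks[:CAPACITY], tasks[CAPACITY:]  # kill low-priority work
        state["restarts"] += 1       # restart the cycle; critical state survives
    ran = [name for _, name in sorted(tasks)]
    return ran, [name for _, name in shed]

state = {"restarts": 0, "nav_data": "preserved"}
tasks = [(1, "guidance"), (2, "navigation"), (3, "throttle"),
         (9, "rendezvous radar")]
ran, shed = run_cycle(tasks, state)
print(ran)   # critical tasks survive the overload
print(shed)  # only the lowest-priority task is dropped
```

The design choice is the point: instead of failing unpredictably under load, the system degrades in a defined, explainable order — which is why Mission Control could trust it in real time.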

"We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard — because that goal will serve to organize and measure the best of our energies and skills."
— President John F. Kennedy, Rice University, September 12, 1962
"Looking back, we were the luckiest people in the world. There was no choice but to be pioneers."
— Margaret Hamilton, Datamation Magazine, 1971

April 1, 2026: The Return

Yesterday, more than fifty-three years after Apollo's last lunar mission, NASA's Artemis II launched from Kennedy Space Center, sending four astronauts around the Moon aboard the Orion spacecraft, named Integrity.

Commander Reid Wiseman, pilot Victor Glover (the first Black astronaut to fly to the Moon), mission specialist Christina Koch, and Canadian Space Agency astronaut Jeremy Hansen are on an approximately 10-day journey, testing the life support systems that will carry future crews to the lunar surface on Artemis III.

The SLS rocket produced nearly 9 million pounds of thrust at liftoff. A last-minute Flight Termination System issue threatened the countdown, but an experienced operator using legacy Space Shuttle-era equipment resolved it, and the mission launched. The capacity for rigorous, collaborative, evidence-based problem solving endures.

53 Years
Since astronauts last traveled beyond low Earth orbit
NASA, 2026
4 Crew
First woman, first Black astronaut, and first Canadian to fly to the Moon
NASA Artemis II
"Artemis II will be the first crewed flight test of SLS and Orion, testing the technologies we'll need for long-term lunar exploration and human missions to Mars."
— NASA, Artemis II Mission Brief, April 1, 2026

Then something broke.

Part II

The Fracture

The same species that reached the Moon now struggles to distinguish fact from fiction in an era of cascading crisis.

6× Faster
Falsehoods spread six times faster than truth on social media
MIT / Science, 2018
70%
More likely to be retweeted — humans, not bots, are the primary driver
Vosoughi, Roy & Aral
2,137%
Increase in deepfake fraud attempts over three years
Sumsub, 2025
28%
Trust in media — a historic low, first time below 30% in 50 years
Gallup, 2025
$417B
Annual cost of disinformation to the global economy
Cybersecurity Ventures, 2024
20 Years
Of consecutive global freedom decline
Freedom House, 2026

The Polycrisis²

The World Economic Forum's Global Risks Report paints an unambiguous picture: 62% of experts expect stormy or turbulent times through 2035. Only 1% anticipate calm. Misinformation and disinformation rank as a top-five global risk — not because they are a standalone threat, but because they amplify every other crisis, from armed conflict to climate change to public health.

The term "polycrisis" describes interconnected crises that compound each other. But we face something worse: a polycrisis² — cascading crises in a world where the shared epistemic foundation needed to address them has eroded. You cannot solve complex problems when 70% of people believe leaders deliberately mislead them.

When Science Returns, Misinformation Kills

Consider the measles vaccine — one of the most thoroughly tested medical interventions in human history. In 2025, a measles outbreak in the United States cost $244 million: a disease science had effectively eliminated returned because misinformation eroded public trust in vaccination. If vaccination rates drop just 1% annually, costs could reach $1.5 billion per year.

This is not an abstract policy debate. It is a direct measure of what happens when a society abandons evidence-based reasoning.

"The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists."
— Hannah Arendt, The Origins of Totalitarianism, 1951
"Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information."
— Sinan Aral, MIT, The Hype Machine, 2020

But the capacity for reason never left us.

Part III

The Path Back

Critical reasoning, transparency, and explainability are not relics of the past. They are the infrastructure of the future.

10×
Growth in global fact-checking organizations: from 44 in 2014 to 451 in 2024
Duke Reporters' Lab / Poynter
77%
Of Americans still trust science — first increase since the pandemic
Pew Research, 2025
96%
Of Americans want to stop the spread of misinformation
Gallup / Knight Foundation
185%
Year-over-year increase in critical thinking course enrollments
Coursera, 2026
25 States
Now have media literacy laws in the U.S., with 11 more states taking action
Media Literacy Now, 2024
$25B
Projected Explainable AI market by 2030 — demand for transparent decisions
MarketsandMarkets, 2025

Proof That It Works

Finland has topped the European Media Literacy Index six consecutive years, not by censoring content, but by embedding critical thinking into education from primary school onward. Taiwan counters disinformation within 60 minutes using a rapid-response digital democracy model. In Argentina, Nigeria, South Africa, and the UK, professional fact-checking has been proven to reduce belief in false claims without triggering backlash.

These are not theoretical proposals. They are working systems that demonstrate the same principle Apollo proved: when you invest in rigorous, transparent, evidence-based processes, the results compound.

The Explainability Imperative

The Apollo guidance computer didn't operate as a black box. When it encountered the 1202 alarm, it displayed the problem, gave operators the data, and let trained humans make the call. That is the architecture of trust: transparency, explainability, and human judgment informed by rigorous evidence.

Today, Autonomous Decision Science applies the same principle — building analytical systems that don't just produce answers, but show their reasoning. Glass-box models, not black boxes. Decisions that can be audited, questioned, and improved. When one bank implemented explainable AI, trust increased 25%. When social platforms added transparent community notes, fact-check trust rose 8.2%.

Explainability isn't a feature. It is the foundation of rational decision-making at scale.
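What a glass-box decision looks like can be sketched concretely. The example below is a hedged illustration, not any bank's actual model: the feature names, weights, and threshold are invented. The essential property is that the explanation ships with the verdict:

```python
# Invented weights for illustration; a real system would derive and audit these.
WEIGHTS = {"payment_history": 0.5, "income_ratio": 0.3, "account_age": 0.2}

def decide(applicant: dict, threshold: float = 0.6) -> dict:
    """Linear scoring rule that returns its per-feature reasoning
    alongside the verdict, so the decision can be audited and questioned."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approve" if score >= threshold else "decline"
    return {"verdict": verdict,
            "score": round(score, 3),
            "reasoning": contributions}  # the explanation travels with the answer

result = decide({"payment_history": 0.9, "income_ratio": 0.5, "account_age": 0.8})
print(result)
```

A black box returns only the verdict; a glass box returns the verdict plus the evidence behind it — the same pattern as the AGC displaying its alarm data to the operators who had to make the call.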

"We can judge our progress by the courage of our questions and the depth of our answers, our willingness to embrace what is true rather than what feels good."
— Carl Sagan, The Demon-Haunted World, 1995
"The first principle is that you must not fool yourself — and you are the easiest person to fool."
— Richard Feynman, Caltech Commencement, 1974

Every generation faces a choice between the discipline of reason and the comfort of illusion.

The Enlightenment did not arrive because superstition exhausted itself. It arrived because a critical mass of people chose the harder path: to question inherited certainties, to demand evidence, and to build institutions that made truth discoverable and power accountable. We face that same inflection point now. The polycrisis will not be resolved by louder voices or faster algorithms. It will be resolved the way we reached the Moon: through transparency, rigorous reasoning, and the conviction that explainable decisions are the only decisions worthy of public trust.

Stephen F. DeAngelis

April 2026