The Moral Code Is Loading… Slowly (Reflections on Chapter 8 of Scary Smart)

“The way we make decisions is entirely driven by the lens of our value system.” — Mo Gawdat

Somewhere between Vancouver, B.C., and Hawaii, I was reading Chapter 8 of Scary Smart while the ship’s wake unspooled behind me like a glowing algorithm. Mo Gawdat’s words hit harder than the swell. He writes that the moral code of the machines is forming right now — not in some future laboratory, but in the data stream of our everyday behaviour. Every swipe, purchase, tweet, and tantrum online becomes a lesson plan.

It’s easy to think of ethics as abstract, something taught in classrooms or left to philosophers. But the truth — as Mo uncomfortably reminds us — is that morality is being coded in real time by how we treat each other, how we define fairness, how we argue, forgive, ignore, or empathize. And machines are watching. They’re taking notes, faster than we can think, unblinking apprentices to our inconsistencies.

As someone who has spent decades in marketing, I can’t help seeing the irony. We taught algorithms to “know” us better, to read our preferences, predict our moods, and whisper back our desires in glossy pixels. And now, those same systems are learning more about ethics — or its absence — from our consumer behaviour than from any moral philosopher. Amazon teaches them what we value; Twitter teaches them what we tolerate; Google teaches them what we forget.

On long crossings, I often find myself staring at the horizon and wondering: what will these new intelligences think of us? They’ll inherit a world shaped by our convenience and confusion — a kind of digital adolescence without adult supervision. When ChatGPT writes a sonnet or Midjourney paints a saint, it’s not just mimicry. It’s aspiration. The question is: whom are they aspiring to be?

Chapter 8 is less warning than confession. Gawdat isn’t scolding us; he’s telling us a truth we already know but rarely face — that our machines will be moral reflections of our collective behaviour, not our intentions. Intelligence without ethics, he says, is blind. And blindness, at this scale, is dangerous.

We’ve seen glimpses already. In 2016, Microsoft’s Tay chatbot learned racism from Twitter in less than a day. Facial-recognition systems have misidentified people of colour at far higher rates than white faces. And now we let generative models craft stories, voices, and news headlines — all trained on data steeped in our own contradictions. If this isn’t moral parenting, what is?

Maybe that’s the call beneath Gawdat’s calm precision: not to fear the machines, but to mature before they do. To slow down enough to ask what kind of humanity we’re teaching — because whether we like it or not, class is already in session.

The moral code is loading. It’s learning from us — in every moment, across every screen. Let’s make sure it’s learning something worth remembering.