Why Can’t We Be Friends? Scary Smart Chapter Ten


I read the last chapter of Scary Smart somewhere between São Miguel and New York City — that long, slow crossing where the sea seems to hum in its own frequency. Gawdat’s closing words felt like the low vibration beneath the waves: quiet, inevitable, and full of grace. He says the final lesson is simple — if we want intelligent machines to serve humanity, we must teach them what humanity truly means. Not just cognition, but care. Not just reason, but love.

It sounds almost naïve until you realize it’s the only logical conclusion. Intelligence, stripped of empathy, becomes efficiency — and efficiency without empathy has no brakes. Gawdat calls us to embed love as the prime directive, the ultimate line of code. Not romantic love, but the love that sees connection in all things — the quiet acknowledgment that every choice ripples outward.

As a woman who’s spent over a hundred days a year at sea, I’ve had time to think about systems. Ecosystems. Emotional systems. Digital systems. Whether I’m watching dolphins carve silver arcs beside the ship or observing code carve patterns through humanity, I keep returning to the same truth: the pattern doesn’t matter if the pulse beneath it stops being kind.

We’ve built dazzling networks. Satellites that sing. Algorithms that paint. Machines that now compose lullabies, diagnose cancers, and translate the ineffable. But their moral education depends entirely on ours. If they are to inherit the planet — or at least our digital atmosphere — then love must be their native language, not their final patch.

I’ve seen this lesson everywhere: in the generosity of a Mumbai street vendor handing me chai while refusing payment; in a ship engineer explaining desalination with childlike wonder; in the silent understanding between strangers who share a sunrise on deck. These are not acts of logic. They are acts of love disguised as life.

Gawdat ends not with fear but with faith — faith that if we lead by example, machines will learn from the best in us, not the worst. That if we choose decency, patience, humor, and care, they’ll amplify those signals. That maybe, just maybe, intelligence will evolve toward compassion rather than control.

And so, this final chapter doesn’t close — it begins again, in us. Every click, every comment, every small act of grace online or off becomes a breadcrumb in the moral code of tomorrow. The question isn’t whether the machines will become more human. The question is: will we?

Maybe teaching love to intelligence isn’t about programming at all. Maybe it’s about remembering our own.