Everybody’s Talkin’
The Promise of AI and Its Possible Threats
Everybody’s talkin’ at me
I don’t hear a word they’re sayin’
Only the echoes of my mind
I’m goin’ where the sun keeps shinin’
Through the pourin’ rain
Goin’ where the weather suits my clothes
Skippin’ over the ocean like a stone
— Fred Neil, 1966
Everybody is most definitely talking about it. It’s the predominant conversation of the moment, even edging out oil-anxiety. Will AI destroy us, wiping out the human race as superintelligence develops and takes control? But what does all that mean? Most of us, gazing dumbfounded at the state of our beautiful Earth, would welcome just a modicum of basic human intelligence among the World’s Leadership. While we whisper fearfully about the uncontrolled development of AI, the politicians and their billionaire associates proceed apace, creating a machine that is barely understood — all this while a famine rages in Sudan, a war grinds on in Ukraine, a genocide unfolds in Gaza, and a nightmarish balletic dance of threat and counter-threat plays out in the Straits of Hormuz between one of the world’s oldest civilisations and one of its youngest.
I — The Conversation We’re Not Having
Ask almost anyone and they will tell you they are worried about Artificial Intelligence. Ask them what they are worried about, exactly, and the conversation quickly becomes a kind of séance — vague presences, half-formed fears, the vocabulary of science fiction pressed into service as analysis. Superintelligence. Existential risk. The Singularity. Words that function more as incantation than argument.
This is not stupidity. It is something subtler and more troubling: a collective condition in which the noise of a debate substitutes for its substance. We are all, in our way, Nilsson’s narrator — surrounded by voices we register without hearing, faces we see without reading. The AI conversation washes over most of us as a kind of weather: we know it’s happening, we feel its pressure, but we cannot quite locate ourselves inside it.
And yet decisions are being made. Not by the worried majority conducting its anxious scroll through apocalyptic headlines, but by a remarkably small number of people — engineers, investors, politicians who have not always demonstrated mastery of technologies considerably simpler than this one — who are building at a speed that outpaces not just public understanding but their own. The gap between the conversation we are having and the one that would actually be useful has become, itself, a kind of danger.
Everybody’s talkin’. The echoes are deafening. And somewhere in the noise, the future is being decided.
II — The Present Tense is on Fire
While we debate the future, the present is not waiting politely. In Sudan, a famine of biblical proportions unfolds at the edges of the world’s attention — not because we lack the means to know about it, but because we have somehow misplaced the will to look. In Gaza, the word genocide is being spoken by people whose job it is to weigh such words carefully, while the machinery of diplomacy performs its grim, procedural dance. In Ukraine, a land war of a kind Europeans told themselves they would never see again enters its third year, grinding lives into statistics. In the Straits of Hormuz, one of the world’s oldest civilisations and one of its youngest conduct a nightmarish ballet of threat and counter-threat, each move watched by the rest of us with the helpless fascination of an audience that knows it is not entirely audience.
These are not distractions from the important conversation. They are the important conversation. They are what human intelligence — unaugmented, unassisted, the kind we already possess in abundance — looks like when it is placed in the service of power without accountability or wisdom.
Here is the question that the AI debate rarely stops long enough to ask: if we cannot coordinate well enough to prevent a famine, to halt a genocide, to imagine a ceasefire — what precisely do we think we are going to do with a technology that will require more coordination, more wisdom, and more accountability than anything our institutions have yet been asked to manage?
We are not worried about the wrong thing. We are worried about the future with an intensity we are unwilling to apply to the present. And the present, unlike the future, is already here — already burning, already asking something of us that we have not yet found the courage to give.
II(b) — How We Got Here
This did not arrive from nowhere. In 1950, Alan Turing asked a quiet question — can machines think? — and set in motion a chain of human choices, each one deliberate, each one taken by people who understood, at least partially, what they were doing. The decades that followed brought long winters of failed promise and sudden springs of breakthrough: expert systems, neural networks, deep learning, and then the large language model — a technology that does not think, exactly, but that mirrors human language with a fluency so uncanny that the distinction has stopped mattering to most people who encounter it. Seventy-five years from Turing’s question to a machine that can pass, in most conversations, for a person. The acceleration is not slowing.
It is worth asking what economic system has been driving that acceleration, and what values are encoded in its logic. Kate Raworth, in her quietly radical Doughnut Economics, proposed a different model: an economy that operates within a social foundation — enough for everyone — and beneath an ecological ceiling — not more than the planet can bear. A bounded system. Enough as the goal. It is a vision so modest in its ambitions and so devastating in its implications that it has been largely admired and almost entirely ignored, because the system it challenges is not built for enough. It is built for more, always more, and it will deploy every tool available — including this one — in the service of that imperative.
H.G. Wells saw the destination in 1895. His Time Traveller arrived in a future where humanity had split: the Eloi, beautiful and witless on the surface, consuming without curiosity; and the Morlocks, grinding away underground, maintaining the machinery that kept the surface world comfortable. Wells meant it as a warning about class. It reads now as a description. The content moderators in Nairobi processing the internet’s worst material so that the model learns what to avoid — they are the Morlocks. The cobalt miners in the Congo whose labour powers the devices on which we conduct our AI anxiety — they are the Morlocks. We, scrolling and consuming and marvelling, are the Eloi. And in Wells’s story, the Morlocks eventually ate them.
Underneath all of it, unaddressed, the oldest emergency continues its patient, catastrophic work.
The Jesuit palaeontologist Teilhard de Chardin dreamed of the noosphere — a layer of human thought and consciousness clothing the earth like a living skin, the next stage of creation’s unfolding. He spoke of the world being clothed with a brain. He meant it as a form of hope: the awakening of collective intelligence, the planet becoming aware of itself. Iain McGilchrist, whose life’s work maps the consequences of the human brain’s divided attention, might gently ask which half of the brain we are using to make that garment. The left hemisphere — precise, instrumental, optimising, deaf to context — or the right, which holds ambiguity, empathy, and the kind of wisdom that knows what it does not know. A large language model is the left hemisphere made machine: fluency without understanding, pattern without meaning, confidence without wisdom. We are clothing the world with a brain. The question Teilhard did not live to ask is whose brain, exactly, and what will it dream?
III — But the Fear Isn’t Wrong
And yet. And yet.
It would be a comfortable argument — perhaps too comfortable — to conclude that AI anxiety is simply a displacement activity. The chattering of a species that finds hypothetical futures easier to inhabit than the actual present. A way of feeling serious about something without the discomfort of being asked to act.
But the fear is not wrong. It is, in some respects, not nearly afraid enough.
The technology is real. The trajectory is real. And the people accelerating it are not, for the most part, the reckless innocents the “barely understood machine” framing implies. Some of them understand it very well indeed — understand it with a clarity that makes their continued acceleration not reassuring but vertiginous. They have read the literature. They have, in certain cases, written the literature. They have sat in rooms and said, carefully and on the record, that what they are building might be the most transformative and dangerous technology in human history — and then gone back to building it, because the logic of the race permits no other move. To stop is to cede the future to someone less careful. So everyone drives, and calls it responsibility.
This is not villainy in any shape that satisfying narratives require. There is no moustache to twirl. It is something considerably harder to resolve: a collective action problem dressed in the clothing of progress, with enough genuine good faith distributed throughout to make clean judgement impossible.
What connects the burning present and the vertiginous future is not distraction but architecture. The same systems that have produced the famine-watching and genocide-managing and ceasefire-failing are the systems now steering the development of AI — the same concentrations of power, the same accountability gaps, the same tendency to move faster than wisdom and call it ambition. The threat is not that AI arrives from outside our existing failures. It is that it will be shaped by them, and will amplify what it finds.
That is what should keep us awake. Not the science fiction. The continuity.
IV — Villain and Angel, All of Us
There is a particular comfort in locating the problem elsewhere. In the billionaire with the rocket ship and the chatbot. In the politician who cannot define an algorithm but cheerfully legislates around one. In the general public, sleepwalking toward a future it hasn’t chosen. We are all, at various moments, happy to stand in that crowd and point.
But the pointing finger has a way of curling back.
We are the people who scroll past the Sudan headline to read about the latest model release — not because we are callous, but because the model release has been made vivid and immediate and the famine has been made distant and abstract, and we are only human, which is precisely the point. We sign the open letter about AI safety and do not change our pension fund. We express horror at Gaza over dinner and book the flight anyway, because what else is there to do, really, and life must continue, and one person’s choices cannot — can they? — make that kind of difference.
This is not a counsel of despair, and it is not an accusation. It is a description of a condition that is genuinely, structurally difficult. We are each of us simultaneously the narrator who cannot hear and the voice contributing to the noise. The angel who knows what is at stake and the villain who cannot quite bring themselves to act as if they know it. Not because we are hypocrites — though we are, all of us, sometimes — but because the gap between understanding and action has never been wider, and the forces that profit from that gap have never been better resourced or more sophisticated in their exploitation of it.
The AI developers are us, only further along the same road. The politicians are us, with more power and less excuse. The billionaires are us, with the ordinary human hunger for significance scaled to a size that bends institutions. None of this is exculpation. All of it is true.
We are, collectively, Nilsson’s narrator — standing in the crowd, hearing the echoes, already composing the escape route in our heads. Goin’ where the weather suits my clothes. As if there is still somewhere to go. As if the ocean has an other side we could skip a stone to and start again.
The Stone and the Ocean
Fred Neil wrote “Everybody’s Talkin’” in five minutes in a studio bathroom, eager to get back to Florida. He was, by all accounts, a man who spent his life trying to find the place where the weather suited him — eventually retreating to Coconut Grove, devoting himself to the rescue of dolphins, largely abandoning the music industry that had briefly made him famous. He found his escape. Most escapes, of course, are only partially successful. The world has a way of following you to Florida.
What stays with me — what I think should stay with all of us — is that last image. Skippin’ over the ocean like a stone. It is the most beautiful line in the song, and quietly the most devastating. A stone skipping across water is a miracle of lightness, of angle and momentum, of the right conditions briefly met. It is also a stone that is going to sink. That is not a metaphor you can argue with. It is just physics.
We are skipping. The debate about AI is skipping — bright, fast, catching the light. Our engagement with the burning present is skipping. Our institutions, our attention, our moral seriousness — all of it moving with just enough velocity to stay above the surface a little longer.
The ocean does not care about our velocity.
What lies beneath is not, or not only, the artificial superintelligence of our nightmares. It is the accumulated weight of the things we have not done, the connections we have not made, the faces we have seen without reading. The question that the song asks, and that our moment is asking, is not whether the stone will sink. It is what we will have understood by the time it does.
Everybody’s talkin’. The echoes are extraordinary.
I’m listening for something else.