Terminator and Agent Smith

Are we living in a simulation? I don't know. But there are telltale signs from Hollywood that some of our programming is smart enough to warn us about a small probability of man-made, robot-induced civilizational collapse.
Everyone (from a certain generation) remembers the plot of The Terminator. A military AI, Skynet, becomes self-aware and starts a nuclear war, wiping out much of humanity. Arnold Schwarzenegger's character, the T-800 Model 101, is sent back in time to kill Sarah Connor, mother of John Connor, who leads the resistance against the machines in the future.
The image of Skynet's metal endoskeletons hunting down humans is seared into my brain, and was for much of my childhood. I suspect the same is true for many technologists building in AI and robotics today. Alas, the machine war was set around 2029 AD, so there's little time left :)
Whether you believe in pre-determinism or self-authorship (existentialism), it's not hard to imagine our current reality intersecting with the timeline James Cameron predicted in The Terminator. AI is advancing rapidly, and leaders like Sam Altman and Dario Amodei are predicting artificial superintelligence within the coming decade.
Meanwhile, humanoid robots from companies like Figure and Tesla are closer to reality than ever before. Combine the two, and it doesn't take a clairvoyant to see a troubling path.
Another parallel while we're at it: in The Matrix, Thomas Anderson, aka "Neo," played by Keanu Reeves, is an office worker by day and a hacker by night. He has a nagging feeling that something is off in the world, and as he digs deeper, he keeps butting up against the concept of the Matrix. We later learn that reality is not as it seems: around the year 2199, machines rule a ruined Earth after defeating humanity. Humans darkened the sky to cut off the machines' solar power, so the machines began farming humans for energy. To keep them docile, they trap human minds in a simulated world called the Matrix, modeled on the peak of human civilization.
Agents are enforcers of the Matrix: fast, deadly, and nearly indestructible. They can move between digital "bodies," dodge bullets, and rewrite the simulation rules. Their purpose is to hunt down and neutralize anyone who threatens the stability of the Matrix, particularly those who are awakened or being freed.
The allegory hits a little too close to home. As AI becomes more agentic, capable of making autonomous decisions, taking actions, and even shaping digital environments, the idea of a reality mediated by machines doesn't feel purely speculative. Tech leaders envision a future where intelligent agents operate on our behalf, managing everything from emails to legal contracts to financial decisions. But if machines begin modeling our preferences, manipulating inputs, and controlling outcomes, at what point are we still the authors of our choices? In The Matrix, humanity doesn't know it has been enslaved. The trap wasn't physical. It was perceptual. That's the real warning: the threat isn't that AI will kill us. It may quietly rewire what we believe is real.
But this is actually an optimistic post. Bear with me.
The job of any good storyteller is to use fiction to make people feel something real. These films were so popular not just for their fantastic action and CGI, but because you didn't have to squint too hard to believe these realities could come true. And that was scary and thrilling at the same time.
These warnings could serve as guideposts for our generation of builders. How do we bend the curve so that humanity and machines become more closely symbiotic? The current philosophy du jour is what is known as "alignment": a field of research and engineering that aims to ensure AI behaves in ways consistent with human values, goals, and intentions. The big contention with this approach, however, is: whose values? Are they Judeo-Christian? Are they non-theological? Are they broad, secular, and universal?
Alignment may also be technically impossible, ethically biased, and politically dangerous. Further, it may be overhyped. And humanity has often overestimated the adverse first-order effects of technological breakthroughs:
"Abundance of books makes men less studious."
"Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia."
"Automobiles are a menace to pedestrians and will never be practical."
"Flying machines will eventually damage the atmosphere and our natural peace."
Despite their absurdity, those quotes contain some truth. We have unlimited books, yet educational attainment in the United States is falling. High-speed rail has largely failed as a public good, cars are indeed a menace to pedestrians, and aerospace hasn't improved the atmosphere.
So, who is right? The writers and directors of The Terminator and The Matrix, the technology skeptics since the beginning of time, or the current techno-optimists?
Maybe the answer is: all of them. The skeptics, the artists, and the builders all serve a purpose in shaping what comes next. The Terminator and The Matrix weren’t just entertainment. They were cultural pressure valves, fictionalized warnings wrapped in spectacle. And maybe, just maybe, their deepest function wasn’t to predict the future, but to prevent it.
If Orwell’s 1984 served as a guide for how to resist totalitarian surveillance, then perhaps these films serve as a kind of imaginative firewall. They seeded a generation with just enough suspicion, just enough reverence for power, to ensure that we don’t walk blindly into the future we fear most.
So no, this isn’t a doomsday essay. It’s a gratitude note to the storytellers, the critics, and yes, even the alarmists, because they’ve already started nudging us off the collision course. The fact that we're even having conversations about alignment, agency, and values is proof that some part of the simulation is working.
We don't need to unplug from the Matrix. We just need to learn how to write better code for it.
