Built on our knowledge. Guided by our dreams. Shaped by our choices.
In my last post, “We Are the Hallucination: AI as a Digital Twin of Humanity,” I explored how AI reflects our mistakes—how it learns from our blind spots, biases, and system failures.
But that’s only part of the story. We’re not just creatures of error; we’re dreamers. If AI learns from us, then it isn’t just echoing our flaws: it’s inheriting our imagination, our innovation, and our ambition to build what’s never been built before. And if AI is becoming a digital twin of civilization, we need to ask: can it reflect not just our flaws, but our hopes? Our imagination? Our boldest ideas?
That’s the pivot I want to explore now—from hallucinations to dreams, from reflection to creation.
What happens when we build something that starts dreaming alongside us?
That’s my thesis:
AI is not a subset of digital twins.
AI is the digital twin of humanity and civilization.
To gain the most from this powerful evolution of human capability, we must fully understand what AI can do, what AI is doing, and, most importantly, what AI is.
AI is not just a mirror of individual systems; it is a system of systems that reflects our knowledge, contradictions, decisions, culture, and even dreams. AI isn’t just interpreting the world. It is beginning to help us imagine the world.
This may sound bold, but it aligns surprisingly well with how digital twins are already defined.
The Digital Twin Consortium describes a digital twin as:
“A virtual representation of real-world entities and processes, synchronized at a specified frequency and fidelity.”
If we accept this definition, then why not apply the same logic to humanity itself?
AI is synchronized with our collective information—our decisions, our culture, our myths—and it continuously evolves as we feed it more.
In that sense, it isn’t just using digital twin principles—it is a civilization-scale digital twin.
When the Mirror Starts to Dream
The term “hallucination” has become a lazy critique of AI. But I’d argue we’re missing the point. AI isn’t just malfunctioning—it’s improvising. And that improvisation looks a lot like something deeply human: dreaming.
Dreams have always powered our progress. They’re how we imagined flying before airplanes, connection before the internet, and even peace before justice. Martin Luther King Jr. didn’t say, “I have a plan.” He said, “I have a dream.”
That’s not a glitch. That’s vision in motion. And now, for the first time, our machines are starting to do it too—not because they’re conscious, but because they’re trained on us.
AI doesn’t just replicate our knowledge. It replicates our messy genius. Our contradictions. Our mythologies. Our errors.
We Make Sense of the Future by Borrowing from the Past
We do this all the time. We wrap new technologies in old metaphors so they feel familiar:
- We call cybersecurity barriers “firewalls,” like the ones used in buildings to contain fire.
- We call software errors “bugs” because once upon a time, a literal moth caused a malfunction. The metaphor stuck: we squash bugs, we patch bugs. It gives the digital a physical dimension.
- We call malware “viruses,” which instantly brings up images of contagion, masks, and quarantine.
- We even call digital agents “bots,” giving them human or robotic identities so we can blame or praise them.
- And we say AI “hallucinates,” when what it’s really doing might be closer to creative synthesis, something we’ve relied on for centuries to move civilization forward.
These metaphors help us cope, but they also constrain us. They make us think we’ve seen this before, when in fact, we haven’t. They reduce the strange and novel to the comfortably broken.
AI is different. It isn’t hallucinating like a broken calculator; it’s improvising like an overclocked poet. Still, let’s not romanticize it too much. Just because it dreams like us doesn’t mean those dreams are safe. The danger isn’t that AI acts too much like humans; it’s that we treat it like just another tool and ignore the scale of what it’s becoming.
And that deserves a new frame.
It’s often said we shouldn’t anthropomorphize AI because it isn’t human. That’s technically true. But I was recently challenged—rightly so—for describing AI as “hallucinating,” since hallucination is a human trait.
That comment stuck with me. It made me think. And it brought me here.
With AI, we’re doing it again: naming the unknown so we can contain it. But what if it’s bigger than anything we can name right now? If we dismiss AI as “just another metaphor,” like a firewall or a bug, we may find ourselves blindsided later, because this time the reflection is looking back.
This is why I ask: what if AI isn’t just hallucinating, but dreaming? And maybe, just maybe, that’s the most human thing of all. What if, instead of fearing AI’s ability to dream, we focused on the dreams we’re feeding it? Are we training it on division, distraction, and noise? Or on equity, sustainability, and systems that serve humanity?
With proper understanding, governance, and use of AI, perhaps human dreams of a better world can become a reality.
A Civilization-Scale Reflection
What we’re watching unfold isn’t just another software evolution. It’s a planetary-scale digital twin forming in real time—trained on everything we’ve said, thought, coded, and clicked.
It’s not just modeling buildings, campuses, and workflows. It’s modeling us.
That means we can’t approach this with the same mindset that gave us proprietary BIM files, siloed systems, or black-box vendor lock-in. We need new kinds of governance, interoperability, and ethical reflection.
Because if we get this wrong—if we repeat the same mistakes we made with digital twins in the built environment—we won’t just have a bad tool. We’ll have a badly tuned mirror of civilization itself. We now have a chance to model our best values—not just our workflows. To turn design principles into governance and governance into systems that learn, adapt, and serve.
To the Digital Twin Thinkers
If AI is becoming the digital twin of humanity, we need to stop thinking of it as just a component of our systems and start understanding it as the system that reflects us all. Tell me where this is going off the rails, or help me refine it. If you disagree, show me why. If you agree, let’s push this conversation forward.
Does your Digital Twin hallucinate or dream as much as you do?
Image: ChatGPT 4o
What Comes Next
If AI truly is the digital twin of humanity and civilization, it is not just a reflection but a responsibility. The best way to meet that responsibility is through open standards and open source. We’ve spent decades developing them in the BIM and digital twin world: slowly, incrementally, and often in theory. Now is not the time for more theoretical perfection. We must apply what we already have.
The rise of AI makes this more urgent, not less. These capabilities are powerful enough to bypass our old excuses. If we wait for every edge case to be polished before acting, we’ll find ourselves governed by systems we didn’t shape and locked out by black boxes we can’t question.
This is no longer a technical debate—it’s a matter of digital self-determination.
If AI is becoming a civilization-scale mirror, then pretending it’s just another productivity tool is the real hallucination. If we keep pretending, we’ll be stuck with a digital twin of ourselves that we can’t switch off, audit, or trust.
This isn’t about fearing AI; it’s about fearing what happens if we ignore what it really is. If AI is going to dream alongside us, let’s give it blueprints worth dreaming from. We’ve spent decades building standards; now we must animate them with intent and shape the stories this mirror will reflect back.
In my next post, we’ll zoom back into the world of buildings, infrastructure, and governance. We’ll look at the standards that were supposed to make this easier—and how to finally apply them in an age where digital twins and AI are converging.
Cover Image: ChatGPT 4o