For Better or Worse, It Reflects Us All
I’ve never been one to blindly chase new tech trends—but I’ve always believed it’s critical to stay ahead of where technology is taking us. That’s especially true now, with AI evolving faster than anything we’ve seen before.
Yes, it hallucinates. If you’ve seen my recent posts, you’ve probably noticed the strange, sometimes brilliant mistakes in AI-generated videos and images. But those so-called “hallucinations” aren’t the whole story—they’re a window into what this technology is capable of, and where it still falls short.
Even just playing with prompts, images, and videos helps you learn how it thinks—what it understands, and where it fakes it. The point isn’t to blindly trust AI. It’s to understand it.
Because we are imperfect, and those imperfections are reflected in the machines we create and train.
AI is a Digital Twin of Humanity
Whether we like it or not, it’s learning from us—mirroring our best ideas, our worst impulses, and everything in between.
But here’s the real danger: if we ignore it, dismiss it, or let it run wild without guidance, it will grow into something far wilder than we intended. It’s like raising a child: if you’re absent while they mature, you can’t expect them to turn out well. This is on us.
Honestly, it’s starting to feel like déjà vu every time I hear architects, engineers, building owners, or facility managers brush off AI—dismissing it as hype, irrelevant, hallucination-prone, inaccurate, or not their problem. But let’s be real: that’s not just skepticism; it’s a little bit of fear talking. And fear doesn’t future-proof anything. If we’re not part of the conversation—shaping how AI fits into our work, our buildings, and our decisions—then we’re letting someone (or something) else decide for us.
This isn’t about defending AI—it’s about making sure we don’t repeat our own mistakes at machine speed.
Before we point fingers at AI for hallucinating, let’s admit something uncomfortable:
We’ve Been Hallucinating for Centuries!
Let’s take a journey to explore these human hallucinations and see what we can learn.
1. Trojan Horse (~12th century BCE)
- Note: This is the only myth in the list—but it’s a powerful metaphor for blindly accepting systems, gifts, or assets without understanding what’s inside.
- Responsible Party: Trojans (City of Troy leadership)
- Hallucination: “It’s a harmless gift—we’ve won!”
- Reality: Enemy soldiers were hiding inside the horse.
- Impact: The city was sacked, ending the war in Troy’s defeat.
- Modern Lesson: Never accept an asset, system, or software at face value—know what’s inside before you integrate it into your environment.
- Source: Trojan Horse – Wikipedia
2. St. Francis Dam Failure, California (March 1928)
- Responsible Party: Los Angeles Department of Water and Power (LADWP)
- Hallucination: “The dam is structurally sound and safe.”
- Reality: Design flaws and unstable geology caused a catastrophic collapse.
- Impact: ~431 deaths, massive property damage.
- Source: 23 of the Worst Engineering Disasters to Date
3. Ford Pinto Fuel Tank Design (1970s)
- Responsible Party: Ford Motor Company
- Hallucination: “It’s cheaper to settle lawsuits than to fix the flaw.”
- Reality: Fuel tank design led to deadly fires in rear-end collisions.
- Impact: Estimated 27–180 deaths, millions paid in legal settlements.
- Source: Disastrous Engineering Failures Due to Unethical Practices of Engineers
4. Hyatt Regency Walkway Collapse, Kansas City, Missouri (July 1981)
- Responsible Party: Structural Engineers & Project Team
- Hallucination: “This design change is minor—no need to recalculate.”
- Reality: The change doubled the load on the connectors, causing failure (see the simplified load check after this list).
- Impact: 114 deaths, 216 injuries.
- Source: Hyatt Regency walkway collapse
5. Chernobyl Disaster (1986)
- Responsible Party: Soviet plant operators and reactor designers
- Hallucination: “The test is under control.”
- Reality: Reactor exploded due to ignored protocols and human error.
- Impact: Immediate deaths of 31 people; long-term health effects and environmental contamination affecting thousands.
- Source: Engineering Disasters: 25 of the Worst Engineering Failures on Record!
6. Challenger Disaster (1986)
- Responsible Party: NASA
- Hallucination: “It’s safe to launch.”
- Reality: O-rings failed in cold temperatures—engineers warned of the risk, but leadership launched anyway.
- Impact: Loss of seven astronauts.
- Source: A look at 10 notable engineering failures – ASME
7. Mars Climate Orbiter (1999)
- Responsible Party: NASA
- Hallucination: “Everyone’s using the same units, right?”
- Reality: One team used metric units (newton-seconds); the other used imperial (pound-force seconds).
- Impact: $125M lost.
- Source: Mars Climate Orbiter – Wikipedia
8. Hurricane Katrina Levee Failures (2005)
- Responsible Party: U.S. Army Corps of Engineers
- Hallucination: “The levees are strong enough.”
- Reality: Flawed design, poor construction, and lack of maintenance led to catastrophic failure.
- Impact: 1,800+ deaths, $125 billion in damage, 80% of New Orleans flooded.
- Source: Hurricane Katrina Levee Failures – Wikipedia
9. OPM Data Breach (2015)
- Responsible Party: Office of Personnel Management
- Hallucination: “Our systems meet federal security requirements.”
- Reality: Outdated legacy systems, no multi-factor authentication, and lax cyber hygiene led to one of the worst breaches in U.S. government history.
- Impact: 21M+ records stolen. Direct costs to taxpayers have exceeded $612 million to date, covering credit monitoring services and legal settlements.
- Source: OPM Data Breach – Wikipedia
10. Eaton Fire, Altadena, California (January 2025)
- Responsible Party: Southern California Edison (SCE), allegedly (the utility currently denies responsibility)
- Hallucination: “Our infrastructure is smart and won’t cause fires.”
- Reality: Alleged failure of SCE’s transmission equipment sparked the fire.
- Impact: Destroyed over 9,400 structures and resulted in 17 fatalities.
- Source: Los Angeles Business Journal
And there are many, many more examples…
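To make one of these concrete: the Hyatt Regency item above notes that the design change doubled the connection load, and a back-of-the-envelope check shows why. Assume each walkway transfers a load P to its hanger connection (P is an illustrative stand-in, not the actual design value):

```latex
% Simplified Hyatt Regency hanger load (illustrative, not the design loads).
% Original design: continuous rods; each 4th-floor box-beam connection
% transfers only its own walkway's load to the rod:
F_{\text{connection}} = P
% As-built change: split rods; the 2nd-floor walkway now hangs from the
% 4th-floor box beam, so that single connection carries both walkways:
F_{\text{connection}} = P_{\text{4th}} + P_{\text{2nd}} = 2P
```

In the original design, continuous rods carried both walkways up to the roof, so each connection only transferred its own walkway’s weight. In the as-built version, the split rods hung the second-floor walkway from the fourth-floor beam, and that one connection quietly took on twice the load. A “minor” change, uncalculated.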
Smart People, Dumb Systems: Why Hallucinations Happen
Many of these failures didn’t happen because people lacked knowledge, vision, or even the right intentions. These were smart teams working under pressure. But intelligence alone isn’t enough. What failed was the system around them: siloed departments that couldn’t see what the others saw, communication gaps that let bad assumptions persist, and a lack of checks, balances, or connected data to challenge the hallucinations. Yes—hallucinations. Not the AI kind, but human ones—false beliefs that went uncorrected because nothing in the system was designed to connect the dots.
That’s what makes today different. With technologies that are open, interoperable, and powered by connection profiles using open-source standards, we finally have a shot at solving these challenges. But it won’t happen on its own. We have to be intentional—about how we design, connect, and govern these systems—so we stop hallucinating in isolation and start making decisions with clarity and context.
These failures were less about a lack of intelligence or ambition and more about systems that couldn’t (or wouldn’t) talk to each other. In many of these cases, a well-designed digital twin could have flagged the risks earlier: a levee under strain, a dam with poor geology, a walkway overloaded by a structural design change, or data systems showing signs of cyber vulnerability. Some failures didn’t need AI—just better connections between humans and systems. But today, AI gives us the chance to go even further. If trained well and grounded in real-world data, it can surface early warnings, detect patterns we miss, and help us coordinate complex responses. The question is: are we willing to partner with it—or just wait until it hallucinates like we did?
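As a minimal sketch of what “flagging the risk earlier” could look like, here is a hypothetical digital twin rule in Python. The asset names, readings, and thresholds are invented for illustration; the point is that once design limits and live data share one model, a few lines of logic can raise the alarm that siloed systems never did:

```python
# Minimal sketch: a digital twin rule that flags assets operating near their
# design limits. All asset records and values below are hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    design_limit: float    # e.g., rated load in kN, or levee design height in m
    current_reading: float # live sensor value reported into the twin

def early_warnings(assets: list[Asset], margin: float = 0.8) -> list[str]:
    """Return a warning for any asset at or above `margin` of its design limit."""
    warnings = []
    for a in assets:
        utilization = a.current_reading / a.design_limit
        if utilization >= margin:
            warnings.append(f"{a.name}: at {utilization:.0%} of design limit - investigate")
    return warnings

# Example: the kinds of signals the failures above never surfaced in time.
twin = [
    Asset("Walkway hanger connection (load, kN)", design_limit=90.0, current_reading=85.0),
    Asset("Levee section 12 (water level, m)", design_limit=4.3, current_reading=3.1),
]
for w in early_warnings(twin):
    print(w)
```

Trivial as it is, this is exactly the kind of continuous check a connected levee, dam, or walkway model could have been running all along.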
The Four Robotic Laws:
AI, Digital Twins, & Human Collaboration in Facility Management
At the NFMT Conference, we brought these ideas to life through storytelling—with a live demonstration of how AI, machine learning, and robotics can be used in facility management. We staged a scenario where 43 monkeys (robots) escaped from a biotech lab and scattered across the digital twin of a real facility. Attendees had to locate and respond to them using AI-powered work orders, asset data, and building maps. This wasn’t just a gimmick—it showed that truly smart buildings should be talking to us. Not like the so-called “smart” infrastructure implicated in tragedies like the Eaton Fire, where data stayed locked in black boxes until it was too late.
The exercise highlighted how, when systems are open and interconnected, we can partner with AI to surface insights we’d never see on our own. And to keep things light, we even used attendee-submitted work orders—with first names, requests, and room numbers—to generate song lyrics for the video. It was a playful moment with a serious message: hallucinations aren’t just AI’s problem—they’ve always been ours. And now, we finally have the tools to do better.
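For a flavor of how an exercise like this can be wired together (the rooms, names, and fields below are invented, not the actual NFMT stack), the core move is a simple join between incoming work orders and the twin’s building map:

```python
# Toy version of the exercise: route work orders to locations in a digital
# twin. All data and field names here are hypothetical.
building_map = {
    "R101": {"floor": 1, "zone": "Biotech Lab"},
    "R204": {"floor": 2, "zone": "Mechanical"},
}

work_orders = [
    {"requester": "Dana", "room": "R101", "request": "Robot monkey sighted near the fume hood"},
    {"requester": "Lee",  "room": "R204", "request": "Unusual vibration on the air handler"},
    {"requester": "Sam",  "room": "R999", "request": "Lights flickering"},
]

def dispatch(orders, twin):
    """Attach twin location context to each work order so responders know where to go."""
    for order in orders:
        location = twin.get(order["room"])
        if location is None:
            # The gap itself is a finding: the work-order system and the twin disagree.
            print(f"UNMAPPED {order['room']}: work order and twin are out of sync")
            continue
        print(f"{order['requester']} -> floor {location['floor']}, "
              f"{location['zone']}: {order['request']}")

dispatch(work_orders, building_map)
```

Note the unmapped room: in a connected system, even a mismatch between the work-order database and the building model surfaces immediately instead of hiding in a black box.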
As a final takeaway, we promised the audience that this was just the beginning. The AI-generated song—built from real work orders submitted during the session—and the digital twin responses will be featured in an upcoming webinar hosted by the Asset Leadership Network on April 10 at 3 pm Eastern. Register at this link: https://us02web.zoom.us/webinar/register/WN_a-fcH1F3RLyo7ZP9Tg5XJA.
We’ll explain how it all came together: the tech stack, the digital twin logic, the real-time inputs, and how AI helped turn facility data into insights—and lyrics. This isn’t about flashy demos. It’s about showing what happens when buildings stop being black boxes and start talking back. Join us as we unpack the behind-the-scenes story and show how open, connected systems can turn hallucinations into informed action.
These examples remind us that humans have always faced challenges. It’s up to us to reduce AI’s hallucinations, guide its use, and truly understand this technology. Dismissing it or waiting for it to resolve itself isn’t an option. By embracing our role, we can harness AI’s potential responsibly, driving innovation and shaping a better future together. Open-source connectivity is crucial because it links our systems, humans, AI, and sensors.
This ensures transparency and reduces risks as we move toward a more interconnected future—where AI doesn’t just reflect our hallucinations but our wisdom, too.