Roger Smith’s Ghost and the “Orphans of Automation”
From Robot Arms to Large Language Models: The Danger of Scaling Incoherence to the Nth Power
In the 1980s, GM Chairman Roger Smith had a vision. He was going to leapfrog the Japanese by “automating” the competition into oblivion. He spent roughly $90 billion—more than the market cap of Toyota and Honda combined—on high-tech robotics and the acquisition of EDS (Electronic Data Systems) and Hughes Aircraft.
He didn’t just want robots; he wanted to buy a “technological brain” for GM. It was the most expensive “Placebo Integration” in history.
We are witnessing the rise of what William Langewiesche calls the “orphans of automation.” From the cockpit to the assembly line, the human element is being sidelined by systems we no longer fully command. As Sandy Munro often reminds engineers, the goal should be to “simplify, then automate”—because an automated system built on a flawed human process is destined for failure.
Today, we are building a new generation of orphans. We call them AI.
The “Lulu” of a Deal
Roger Smith famously called the purchase of Ross Perot’s EDS a “lulu of a deal.” He believed that grafting a world-class software firm onto a legacy car manufacturer would magically unify GM’s fragmented data.
But instead of integration, he got an organ rejection. The “Information Systems” guys at EDS spoke a different language than the “Manufacturing” guys on the floor. Because the Information Design hadn’t been fixed, the technology was just an expensive layer of paint over a crumbling structure.
The robots were “Orphans” because they had no Context. A robot arm would perfectly weld a door frame, even if the frame was misaligned by two inches. It performed its “Function” flawlessly while destroying the “Value Stream.”
The 2020s Glitch: The AI Orphan
We are currently repeating this at the Nth Power.
Companies are dropping “AI Orphans” into their departments:
The Finance AI that can generate a report in seconds but has no context for why the customer is a churn risk.
The Support AI that summarizes a call perfectly but can’t see the engineering defect that caused the problem in the first place.
Just like Roger Smith’s robots, these AI tools are isolated islands of efficiency. We are asking an AI to “optimize” a silo, forgetting that the silo itself is the design flaw.
Information Design is the Parent
An “Orphan” is what happens when you have Tooling without Architecture. If GM had used the logic of Lean or Theory of Constraints correctly, they would have redesigned the flow of the factory before buying the robots. They would have ensured the “Information” (the measurements, the quality cues) was integrated into the system’s DNA.
In the modern corporation, the only way to prevent “AI Orphans” is to treat Information as a Design Problem.
Stop asking: “How can AI do this task?”
Start asking: “How do we redesign the work so the AI has the context it needs to actually be useful?”
The In-Context-able Conclusion
Roger Smith proved that you can’t buy your way out of a bad design. You can’t automate incoherence. Whether it’s a robot arm in 1985 or a Large Language Model in 2026, if the information is siloed, the tool is an orphan.
We don’t need faster orphans. We need a better architecture.