
While We Prepared to Fight Skynet, Excel Quietly Took Over


Author:

Arnoud Engelfriet - Data Protection and AI/ML Lawyer & Computer Scientist

Author of The Annotated AI Act

Chief Knowledge Officer at ICTRecht


Abstract


When Hollywood imagined artificial intelligence (AI), it did so vividly. We braced ourselves for Skynet from The Terminator, debated morality with HAL from 2001: A Space Odyssey, and joked along with the intelligent car KITT from Knight Rider. Pop culture convinced us that when AI arrived, we would unmistakably recognize it: reasoning, interacting, morally ambiguous or benevolent, and always undeniably intelligent.


Yet, while we scanned the horizon for dramatic signs of artificial consciousness, something very different quietly reshaped society. It wasn’t a conscious AI rebellion or friendly companion bots. Rather, Excel spreadsheets, credit-scoring algorithms, and predictive analytics quietly assumed roles once reserved for human judgment, subtly and irrevocably shaping our daily lives. We misread AI’s arrival because we were looking in the wrong direction, guided by pop culture's anthropomorphic illusions.


Seasons of AI: Summers, Winters, and Rebrandings


The history of AI is a pendulum swinging between overpromising and disappointment.

AI Summers bring optimism, funding, and the belief that machines will soon rival human reasoning. AI Winters follow when the promises prove hollow. Importantly, in past Winters, practitioners stopped calling their work "AI" altogether. They rebranded as decision-support systems, expert systems, or predictive modelling. It was a rhetorical retreat, but also a dose of honesty.


We are once again in a Summer.

Generative models like ChatGPT dominate headlines and boardrooms. They appear, at first glance, to deliver the AI we were promised. Conversational, versatile, and disarmingly fluent, they resonate with the cultural conditioning laid down by decades of films and television. But this is also precisely why we must be cautious.

The pattern is familiar: inflated expectations set up inevitable disillusionment.


The Mirage of Pop Culture


Entertainment media has been the single most influential force shaping public expectations of AI. Decades of stories, from HAL’s eerie calm to the Cylons’ existential debates, from the soulful intimacy of Her to the dystopian futures of Black Mirror, conditioned us to believe AI would think, reason, and even feel. The lesson absorbed was not technical but anthropomorphic: we came to expect AI as a partner, antagonist, or companion.


Clippy, Microsoft’s infamous paperclip assistant, revealed the danger of this expectation. Clippy was designed to feel human-like: smiling, conversant, always ready with a suggestion. But users quickly grew frustrated. It failed at basic tasks, offered irrelevant advice, and stubbornly misunderstood intent. Clippy’s failure was not technological incompetence; it was the betrayal of an interface that implied intelligence it did not have. Clippy was an early example of the ELIZA effect: we project understanding onto pattern-based outputs. And when the illusion breaks, disappointment turns to disdain.
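
How little machinery the ELIZA effect needs is easy to show. Below is a minimal sketch in Python of an ELIZA-style responder: a handful of regular-expression rules that reflect the user’s words back in canned templates. The rules are illustrative inventions, not Weizenbaum’s original script, but the principle is the same: there is no model of meaning anywhere, yet the replies can feel attentive.

```python
import re

# A few reflection rules in the spirit of Weizenbaum's ELIZA (1966).
# Each pattern captures part of the user's sentence and echoes it back
# inside a canned template. There is no model of meaning anywhere.
RULES = [
    (re.compile(r"\bi am (.+)", re.I),   "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bi want (.+)", re.I), "What would it mean to you to get {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned reply by pattern-matching the input text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about the numbers"))
# -> Why do you say you are worried about the numbers?
```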


Large Language Models repeat this at scale. Their chat interfaces, fluent prose, and first-person voices make them feel intelligent. But like Clippy, they are predictive systems, not reasoning entities. They disappoint when pushed beyond pattern recognition into genuine comprehension or judgment. The frustration is familiar, only magnified by scope and scale.


Excel, Not KITT: Contrasts That Matter


To grasp the mismatch between cultural expectation and reality, it helps to set explicit comparisons:


  • KITT vs Google Maps: KITT conversed, inferred goals, and expressed loyalty. Google Maps optimizes cost functions (a minimal routing sketch follows this list). It reroutes efficiently but does not understand why you are traveling or whether the trip matters.

  • HAL vs Excel: HAL debated the ethics of mission secrecy. Excel implements formulas. JPMorgan’s 2012 “London Whale” disaster showed how flawed spreadsheets can silently drive billion-dollar risks without awareness or intent.

  • Her vs Chatbots: Her imagined an AI that empathized, remembered, and grew with its human partner. Chatbots mimic empathy through phrasing, but lack memory of self or continuity of relationship. Their “I” is empty.

  • Black Mirror vs Today’s UX: Episodes like Be Right Back explore AI as grief companion, raising moral questions about identity. Today’s chatbots simulate companionship with autocomplete. The resemblance is skin deep.

  • Terminator vs killer drone: The T‑800 is imagined as an autonomous hunter-killer with intent, goals, and relentless pursuit. An autonomous weapons system (AWS) drone executes pre-programmed instructions and remote commands, with no comprehension of mission purpose or ethical weight. One embodies cinematic agency; the other is logistics, automated.
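
To make “optimizes cost functions” concrete, here is a minimal sketch in Python of what a router actually does: Dijkstra’s shortest-path search over a toy road graph with invented place names and travel times. This is not Google Maps’ actual algorithm or data, but it captures the principle: minimize a number, with no notion of why the trip is being made.

```python
import heapq

# A toy road network: node -> list of (neighbour, travel_minutes).
# Place names and times are invented; real routers use far richer graphs,
# but the principle is identical: minimize a numeric cost.
ROADS = {
    "home":       [("ring_road", 10), ("old_bridge", 25)],
    "ring_road":  [("office", 15)],
    "old_bridge": [("office", 5)],
    "office":     [],
}

def cheapest_route(graph, start, goal):
    """Dijkstra's shortest path: return (total_minutes, path) with minimal cost."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, minutes in graph[node]:
            if neighbour not in visited:
                heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
    return float("inf"), []

print(cheapest_route(ROADS, "home", "office"))
# (25, ['home', 'ring_road', 'office']) -- the router neither knows nor cares
# whether the trip is a commute, a hospital visit, or a last goodbye.
```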


Each contrast reinforces the same point: what we have are predictive engines, not minds. Excel, not KITT.


Why LLMs Succeed: Meeting the Cultural Script


If LLMs are so limited, why their explosive success? Because they finally deliver the performance of intelligence that culture taught us to expect. Decades of films and shows primed us to equate conversation with cognition. When ChatGPT responds smoothly to questions, it feels like HAL without the menace, or like Samantha from Her: an accessible version of the long-promised companion AI.


This resonance is their marketing power. But it is also a risk.

Public trust, corporate adoption, and regulatory excitement are being built not on capabilities, but on illusions. Studies confirm that when people perceive media portrayals of AI as realistic, they are more inclined to accept AI as autonomous, reasoning, or emotionally aware. We are watching that dynamic play out in real time with generative AI: the pop culture promise finally feels delivered, even though technically it has not been.


Research by Denia highlights a critical insight: public perception of AI varies significantly with familiarity and proximity to actual AI technology. General audiences typically lean towards apocalyptic narratives, driven by fears of losing control. In contrast, groups more familiar with or directly engaged in AI adopt nuanced, practical perspectives, recognizing both opportunities and challenges but rarely embracing the apocalyptic extremes portrayed in popular culture. This suggests an essential disconnect: the further removed individuals are from real AI applications, the more susceptible they are to sensationalized narratives.


The anthropomorphic language used by media and corporations exacerbates this misunderstanding.


As entertainment media frequently depict AI as having human-like consciousness or intentions, audiences unconsciously internalize these attributes, expecting emotional interaction or logical reasoning from systems designed solely for statistical pattern recognition. Hence, anthropomorphic terminology such as “thinking machines” or “AI making decisions” misleadingly implies conscious agency, deepening public misunderstanding (Nader et al., 2024).


Naming the Illusion: AI as False Advertising


This brings us to my core claim: calling these systems “Artificial Intelligence” is false advertising. They are statistical models trained on data. They generate predictions, not reasons. Their conversational surface conceals their mechanical core. Labeling them as intelligent misleads users, inflates trust, and sets up inevitable disappointment.
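
For readers who want “predictions, not reasons” made concrete, here is a minimal sketch in Python of the mechanic behind generative text: count which word tends to follow which in a corpus, then sample a continuation. Real LLMs replace the count table with a vast neural network and use tokens rather than whole words, but the output is produced the same way: by continuing a pattern, not by holding a reason.

```python
import random
from collections import Counter, defaultdict

# Train a toy bigram model: count which word follows which in a tiny corpus.
# Real LLMs replace this count table with a huge neural network, but the
# generation loop is the same idea: predict the next token, append, repeat.
corpus = "the model predicts the next word the model has no reasons".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Continue a sequence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        next_word = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the model predicts the next word the"
```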


Historically, AI Winters corrected this through silence. Practitioners quietly abandoned the term AI until a new wave of optimism returned. This time, we should not wait for disillusionment. We should enforce it: declare a deliberate AI Winter, not of research, but of rhetoric. Rebrand the tools honestly: predictive modeling, generative text, statistical patterning. If a system cannot justify its outputs with reasons, it has no claim to the label “intelligent.”


To make this practical, we should adopt enforceable norms. First, ban first-person pronouns and anthropomorphic claims in system voices and advertising unless the claimed capability can be substantiated. No “I think” or “I understand” when the system does neither. Second, require clear Capability & Limits Statements at deployment: training data classes, typical failure modes, and domains where the system is unreliable. Third, no autonomous actions without explicit opt-in and clear scope. And fourth, regulators should treat unsubstantiated claims of “intelligence” as misleading advertising under consumer protection law.
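
These norms can be made concrete in tooling. Below is a minimal sketch in Python of two of them: a structured Capability & Limits Statement to be filled in before deployment, and a crude lint that flags first-person anthropomorphic phrasing in system or advertising copy. The field names and the phrase list are illustrative assumptions, not an existing standard.

```python
import re
from dataclasses import dataclass

@dataclass
class CapabilityLimitsStatement:
    """Hypothetical deployment record; the fields mirror the norms above."""
    system_name: str
    training_data_classes: list[str]          # e.g. ["public web text"]
    typical_failure_modes: list[str]          # e.g. ["fabricated citations"]
    unreliable_domains: list[str]             # e.g. ["medical advice"]
    autonomous_actions_allowed: bool = False  # no autonomy without explicit opt-in

# Crude lint: flag first-person, anthropomorphic claims in product copy.
ANTHROPOMORPHIC = re.compile(
    r"\b(i think|i understand|i believe|i feel|i know)\b", re.IGNORECASE
)

def lint_copy(text: str) -> list[str]:
    """Return the anthropomorphic phrases found in system or advertising copy."""
    return ANTHROPOMORPHIC.findall(text.lower())

statement = CapabilityLimitsStatement(
    system_name="ExampleChat",
    training_data_classes=["public web text"],
    typical_failure_modes=["fabricated citations", "confident errors"],
    unreliable_domains=["medical advice", "legal advice"],
)

print(lint_copy("I understand your problem and I think the answer is 42."))
# ['i understand', 'i think'] -- two claims this system cannot substantiate
```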


Critics may object that stricter language will stifle innovation. History shows the opposite. AI Winters redirected focus from overhyped goals to tools that actually delivered value. Honesty stabilizes adoption and prevents the boom-bust cycle of hype and backlash. A deliberate Winter is not retreat, but maturity.


Conclusion: Name It, Frame It, Enforce It


We did not miss the rise of intelligent machines. We misnamed the software we built. What sits behind today’s chat interfaces is not a mind. It is a statistical engine producing plausible words in a human key. That can be useful. But it is not intelligence in the sense the public hears.


The gap between what people infer and what the systems are is not semantic quibbling. It is the driver of recurring Summers and Winters, of misplaced trust and eventual backlash. Pop culture gave us companions and antagonists that reasoned, remembered, and cared. Industry gave us Clippy with better autocomplete. LLMs succeed today because they finally deliver the performance of pop culture’s promise, not the substance.


The remedy is clarity. Ban anthropomorphic claims. Mandate transparency. Require tools to tell the truth about themselves. If research wants to use the shorthand AI, fine. But in products and law, we must insist: predictive model, generative text, statistical system. Excel, not KITT.


References


Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S., 2021. "On the dangers of stochastic parrots: Can language models be too big?" Proceedings of FAccT ’21.


Epley, N., Waytz, A., & Cacioppo, J. T., 2007. "On seeing human: A three-factor theory of anthropomorphism." Psychological Review, 114(4), 864–886.


Denia, E., 2025. "AI narratives model: Social perception of artificial intelligence." Technovation, 146, Article 103266.


Nader, K., Toprac, P., Scott, S., & Baker, S., 2024. "Public understanding of artificial intelligence through entertainment media." AI & Society, 39(4), 713–726.


Weizenbaum, J., 1966. "ELIZA—a computer program for the study of natural language communication between man and machine." Communications of the ACM, 9(1), 36–45.


Biography of the Guest Expert


Arnoud Engelfriet is a Dutch IT lawyer and computer scientist with over 30 years of experience exploring the complex intersection of law, data, and emerging technologies. Renowned for his ability to bridge complex legal frameworks with technical insight, Arnoud has been a leading voice on issues related to software law, AI, and data governance since the early 1990s.


With an academic background in both computer science and law, Arnoud has cultivated a rare dual perspective that informs his work on legal and technical challenges in digital environments. His legal qualifications include specialization in intellectual property and patent law, further enhanced by his certification as a Dutch and European patent attorney, credentials that laid the foundation for his career in high-stakes tech law and regulation.


Arnoud currently serves as Chief Knowledge Officer at ICTRecht, where he leads the firm’s Academy and designs training programs tailored to legal and business professionals navigating digital compliance. He is the creator of the CAICO® course for AI Compliance Officers and frequently lectures on IT and law at Vrije Universiteit Amsterdam. In addition to his academic and consulting roles, he is a prolific public educator, publishing a daily blog on IT law and emerging technologies.


His published works include ICT en Recht and AI and Algorithms, both of which explore the legal dimensions of AI transformation. He is the author of The Annotated AI Act, one of the leading commentaries on the EU AI Act. Earlier in his career, Arnoud spent a decade as IP Counsel at Royal Philips, where he advised on software licensing and intellectual property matters, cementing his reputation as a pioneering legal mind in the tech sector.

