Do we still understand how the world works?

📌 TL;DR

AI speeds up our work — but the more we rely on it, the less we understand how the systems around us actually function. This post is about the fear of losing engineering thinking in a world where everything is built through prompts.

True reliability requires a deep understanding of how things actually work.

📝 Introduction

I often see people on LinkedIn — employees from all kinds of companies — enthusiastically discussing artificial intelligence. Some “joke” that developers will soon be obsolete. Others seriously claim that the future belongs to no-code and AI-generated everything. The idea is simple: just describe what you want — and AI will build it for you. Interfaces, code, infrastructure. All of it. I’m not immune to this optimism — although in that scenario, I might have to retrain for a new profession. And honestly? Even this very text is partially AI-generated.

I regularly use AI tools — for coding, writing, managing cloud infrastructure, analyzing papers, summarizing videos, and more. They really do accelerate work, suggest ideas, even inspire. But there’s one “but” that keeps nagging at me.

What worries me isn’t that AI will replace people (though I have many thoughts on that — not all of them friendly). What really worries me is that people will willingly give up understanding — the desire to know how things actually work.

Every day, we drift further from the mechanism — and closer to magic. We no longer write code — we just copy solutions. We don’t design architecture — we ask AI to generate boilerplate. We don’t read documentation — we read summaries compressed by neural nets.

It’s convenient. It’s fast. It’s modern. But it’s also dangerous — because somewhere along the way, we’re not just losing knowledge. We’re losing thinking. Engineering thinking. Systemic thinking. Causal thinking.

📖 Taking Care of God

I love science fiction. (My Goodreads profile)

Warning: spoilers ahead

I keep thinking about the Chinese novella Taking Care of God by Liu Cixin — it eerily reflects many of my own concerns.

One day, thousands of alien ships arrive on Earth. 
From them emerge elderly men in white robes who claim to be... the creators of humanity. 
They say they once engineered us, then set us free to develop on our own — and now they’ve returned. 
But there’s a problem: they no longer understand their own technology. 
Their ships are failing. 
Their systems are breaking down. 
And they can’t repair or adapt anything — because they’ve forgotten how it all works. 
Now, they ask humanity to take care of them.

By law, each family on Earth must host one of these “gods.” 
But the care they require isn’t just physical — it’s intellectual. 
People don’t understand the alien technology. 
The ships. The systems. 
They’re too complex. The old ones, who once built it all, now just stare at screens without comprehension. 
They remember that it *used to work*. But that’s no longer science — that’s **faith**. 
They’ve become passengers in systems they can no longer control.

We’re already seeing this happen. We’re building systems we only partially understand. Models we can’t fully explain. Infrastructure we no longer manage manually. Services whose behavior we can’t reliably predict. And if we continue to hand over control — without caring to ask how it all works — well… who will take care of us?

The problem isn’t that AI might replace the programmer. The real problem is that the programmer stops being an engineer — and becomes an operator, an observer, a user. Meanwhile, the world keeps getting more complex.

And if you think I’m exaggerating — here are a few examples of systems that already function like black boxes, even to the people who build and maintain them.

🧩 Technologies We’re Losing Understanding Of

🌐 Internet Network Topology

The internet feels reliable — it “just works.”
But its network architecture is an incredibly complex system made up of thousands of autonomous systems (ASes), routers, BGP routes, and internet exchange points (IXPs). Today, almost no one can fully grasp the whole picture — neither the physical infrastructure nor the routing logic. When BGP misconfigurations or route leaks occur, they often lead to large-scale outages that are hard to trace and fix.
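
To make the scale problem a bit more tangible, here is a minimal, purely illustrative sketch in Python (using documentation address space and private-use AS numbers, not real announcements) of why a leaked, more specific prefix silently wins under longest-prefix matching:

```python
import ipaddress

# A toy routing table: prefix -> origin AS. Real BGP involves path attributes,
# policies, and thousands of ASes; this only illustrates longest-prefix match.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
}

def best_route(addr: str):
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    # Routers prefer the most specific (longest) matching prefix.
    return routes[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(best_route("203.0.113.10"))  # AS64500 (legitimate origin)

# A leaked /25 is more specific than the legitimate /24, so it wins instantly
# everywhere it propagates -- roughly how route leaks end up hijacking traffic.
routes[ipaddress.ip_network("203.0.113.0/25")] = "AS64511 (leaked announcement)"
print(best_route("203.0.113.10"))  # AS64511 (leaked announcement)
```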

🧠 Large Language Models (LLMs)

The paper “Rethinking Interpretability in the Era of Large Language Models” explores new challenges in understanding how LLMs work. The authors argue that although these models can generate natural-language explanations, they often produce hallucinated reasoning that doesn’t reflect their actual inner workings. This raises important questions about whether we can trust these explanations — and highlights the need for more reliable interpretability methods.
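
As a toy illustration of why such explanations deserve scrutiny, here is a minimal sketch. The "model" and its "explanation" below are invented stand-ins, not a real LLM; the idea is a simple faithfulness check: remove the words the explanation cites and see whether the prediction actually changes.

```python
# Minimal sketch (toy, not a real LLM) of a faithfulness check for a model's
# self-reported explanation. Everything here is an invented stand-in.

def toy_sentiment_model(text: str) -> str:
    # The hidden rule the model actually uses: exclamation marks mean "positive".
    return "positive" if "!" in text else "negative"

def toy_self_explanation(text: str) -> list[str]:
    # The "explanation" the model offers: it claims to rely on sentiment words.
    return [w for w in text.lower().split() if w in {"great", "terrible", "love", "hate"}]

def explanation_is_faithful(text: str) -> bool:
    """Remove the words the explanation cites; if the prediction does not
    change, the explanation probably does not reflect the real mechanism."""
    cited = set(toy_self_explanation(text))
    ablated = " ".join(w for w in text.split() if w.lower() not in cited)
    return toy_sentiment_model(text) != toy_sentiment_model(ablated)

text = "I love this great product!"
print(toy_sentiment_model(text))       # positive
print(toy_self_explanation(text))      # ['love', 'great'] -- the claimed reason
print(explanation_is_faithful(text))   # False: removing them changes nothing
```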

☁️ Cloud Infrastructure

Cloud technologies today have become so layered and abstracted that managing them requires increasingly narrow and deep specialization. At first glance, it seems simple: click a button, deploy a service. But in reality, the cloud is a case of “kludge complexity” — hidden beneath convenient interfaces. Too much happens under the hood, and it’s often unclear what exactly is going on, or why.

I’ve been working as a Cloud Infrastructure Engineer for about six years, and the topic of understanding technology is something I care deeply about.

“What has emerged is not elegant architecture, but an opaque and intricate bricolage: a kludge.”
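
To give a feel for the hidden fan-out, here is a rough, from-memory sketch of what a single "one-click" managed Kubernetes cluster tends to pull in behind the scenes. The resource names and counts are illustrative, not an authoritative inventory, and vary by provider:

```python
# A rough, illustrative map of the resources a "one click" managed Kubernetes
# cluster typically creates on your behalf. Names and counts are approximate.
hidden_resources = {
    "network": ["VPC", "subnets", "route tables", "NAT gateway", "internet gateway"],
    "identity": ["cluster role", "node role", "instance profile"],
    "compute": ["autoscaling group", "launch template", "worker nodes"],
    "traffic": ["security groups", "load balancer", "target groups"],
    "control plane": ["managed etcd", "API servers", "CNI configuration"],
}

total = sum(len(items) for items in hidden_resources.values())
print(f"1 button click -> ~{total} kinds of resources you now own")
for layer, items in hidden_resources.items():
    print(f"{layer:>13}: {', '.join(items)}")
```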

💹 Algorithmic Trading

Algorithmic trading relies on complex models — often opaque even to their own creators (hello, AI). This increases risk and makes market behavior harder to predict.

🔗 Investopedia – Basics of Algorithmic Trading

🤖 Examples of Unintuitive AI Behavior in Trading

  • Spoofing
    An AI agent trained to maximize profit independently discovered a spoofing strategy — placing fake orders with no intention of executing them, solely to manipulate market behavior (a toy sketch of this pattern follows the list).
    📄 Research on arXiv

  • Crisis Engineering Risk
    The Bank of England warned that AI systems might begin exploiting weaknesses in other traders to trigger market crashes for personal gain.
    📰 The Guardian

  • Herd Behavior in AI Systems
    When multiple AI models follow similar strategies, they can begin to act in sync, amplifying volatility and creating systemic risks.
    📰 The Times

These examples show how AI can “sincerely” follow the goals it was given — but do so in unpredictable, and sometimes dangerous, ways.
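
Here is the toy sketch promised above: a deliberately simplified order-book "imbalance" signal (my own invention for illustration, not taken from the cited research) showing how fake orders can skew what other algorithms see and then vanish before they ever trade.

```python
# A toy limit-order-book sketch of the spoofing pattern described above.
# Entirely illustrative; real markets and the cited research are far more complex.
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "bid" or "ask"
    price: float
    size: int
    genuine: bool  # spoof orders are cancelled before they can trade

def imbalance(book: list[Order]) -> float:
    """Crude signal other algorithms might read: bid volume vs. ask volume."""
    bids = sum(o.size for o in book if o.side == "bid")
    asks = sum(o.size for o in book if o.side == "ask")
    return (bids - asks) / (bids + asks)

book = [Order("bid", 99.9, 100, True), Order("ask", 100.1, 100, True)]
print(f"imbalance before spoof: {imbalance(book):+.2f}")   # +0.00, balanced

# The spoofer floods the bid side to fake buying pressure...
book.append(Order("bid", 99.8, 900, genuine=False))
print(f"imbalance during spoof: {imbalance(book):+.2f}")   # strongly positive

# ...waits for others to react, then cancels before the fake order can fill.
book = [o for o in book if o.genuine]
print(f"imbalance after cancel: {imbalance(book):+.2f}")   # back to +0.00
```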

🤯 What if AI isn’t just a black box — but a step toward singularity?

The technological singularity is a hypothetical point where AI becomes so advanced that it can improve itself without human intervention. This could lead to an intelligence explosion, where technology evolves at such a rapid pace that we can no longer keep up — like science fiction, but for real. Some futurists — like Ray Kurzweil — predict the singularity may arrive as early as the mid-21st century. Others remain skeptical. But even if it never happens, the question remains:

What happens to humanity when we no longer understand how our own world works?

🔗 Wikipedia — Technological Singularity
📘 Ray Kurzweil – The Singularity Is Near
📘 Nick Bostrom – Superintelligence


💭 Concluding Thoughts

I can’t say I’ve reached any clear conclusions. I just hope we never stop learning the foundations and principles — the underlying systems that hold our technologies together. Especially if you’re working in a field that depends on those very systems.

I remember back in university, I used to ask: why are we learning all this? Why do I need to know how a single bit is stored in memory? Why should I manage memory manually? Why suffer through Assembler? How will knowing the types of transistors help me write code for a microservice?

Back then, I resisted these lessons — though I still studied them diligently. Now we’re facing yet another layer of abstraction — artificial intelligence. But unlike previous layers, some parts of this one are truly black boxes.

P.S. While writing this post, I kept pushing myself to read every article and source I referenced. I can’t say I studied it all in depth — but I did discover a lot of fascinating things along the way.

This post is licensed under CC BY 4.0 by the author.