Artificial (Not-So) Intelligence: A Rant on AI, Human Nature, and Modern Religion

Yesterday, I needed to replace my car’s rear blinker lights, so I’m going to rant about AI, human nature, and religion in the modern world. I’m an “engineer,” so this is the kind of repair I typically do myself since it’s much cheaper. I googled for the lightbulb model but couldn’t find which one my car needed without disassembling the light panels (my fault, I was in a hurry because the stores were about to close). So I decided to ask ChatGPT for the exact bulb for my exact car model.
Very quickly and very confidently, ChatGPT spat out: “bau15s 16w 12V,” which sounded convincing enough. I went back to Google to check whether the answer made sense, and okay, this was indeed a blinker light bulb model. So I hurried to the car parts store. There I found the bau15s model, which, according to ChatGPT, was the right rear blinker replacement, priced at €7.95. I thought, “that’s expensive,” but checked out anyway.
I got home, ripped the car apart, and discovered the lightbulb model was wrong. Thanks, ChatGPT! Luckily, the manufacturer had left some spare bulbs inside the light panel, so I could swap out the failing one. Now I need to return the bulbs I bought and, hopefully, remember not to trust ChatGPT for this kind of thing again. This experience sparked an idea.
If the latest AI models fail at something this “trivial,” how many failures have we already swallowed, and how many more will we encounter as we deploy “agents” in industrial processes, generate code for software development, or plan logistical operations? Large Language Models (LLMs) are a great piece of technology, no doubt. But LLMs are not sentient, not conscious, and of course not aware of anything. They are just messy, word-spilling black boxes, tuned to output whatever seems “reasonable” to the user.
Check out the “Chinese Room argument,” a thought experiment John Searle proposed more than 40 years ago that challenges the idea that computers can truly “understand or think.” It wasn’t designed with the current LLM fever in mind, but it’s still relevant. The experiment suggests that a computer program, even one that produces human-like language, does not necessarily possess genuine understanding.
The experiment involves a person who doesn’t understand Chinese but can follow a set of English instructions to manipulate Chinese symbols, effectively “responding” to input in Chinese as if they “understood” the language. This leads to the eternal question: where does “genuine understanding” come from in humans? If the answer lies purely in our brain’s reasoning and cognition processes, then the Chinese Room argument loses its force against machines, since on that view we are just symbol-shuffling machinery too.
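To make the point concrete, here’s a toy sketch of the room as code. This is not Searle’s formulation, just an assumption of how the “rulebook” could be mocked up: a lookup table (with invented entries) maps incoming Chinese symbols to prewritten Chinese replies, so the “operator” produces fluent-looking output while understanding nothing.

```python
# Toy Chinese Room: the "operator" only matches incoming symbols
# against a rulebook and copies out the prescribed reply.
# The rulebook entries below are invented for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's nice today."
}

def chinese_room(symbols: str) -> str:
    """Return the rulebook's prescribed reply; no meaning is ever involved."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # To an outside observer the reply looks competent,
    # yet the program "understands" nothing about Chinese.
    print(chinese_room("你好吗？"))
```

The replies are indistinguishable from those of someone who speaks Chinese, which is exactly Searle’s point: fluent output alone doesn’t prove understanding.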
The AI and machine-learning communities seem to avoid this topic and don’t do much to promote the debate. Maybe they’re not interested because it confronts basic questions about human nature and undermines their marketing strategy. If what makes us human is purely material cognition machinery, then we are just “bio-machines.” In that case, there is no higher basis for morals or hierarchies. Chaos and anarchy follow, and nihilism reigns supreme.