Rohit Patel is the founder and CEO of QuickAI and a director at Meta Superintelligence Labs. He works on reinforcement learning, agents and evaluations.
As the AI community focuses on reasoning models (models that generate "reasoning text" before answering), I have started to wonder: How did we get to a point where logic is the weak link? What have we built? And what separates us from the AI of today and of the future?
For decades, popular culture painted a clear picture of AI: machines would be masters of logic but utterly baffled by the things we thought were uniquely human. Data from Star Trek, an android with a supercomputer for a brain, could process endless facts but could not grasp art, emotion or creativity. The T-800 from Terminator 2: Judgment Day, a learning machine, had to be taught the meaning of a smile and why people cry. The pivotal moment of I, Robot comes when the robot, Sonny, begins to draw, because that act of creation proves he is different from other robots.
From these stories we built a powerful narrative about ourselves. We came to believe that what makes us human is our intuition, our artistry, our grasp of nuance: the things machines couldn't do.
The two systems of thought
To understand the intelligence we are creating, we must first understand our own. The work of psychologist Daniel Kahneman, made famous by his book Thinking, Fast and Slow, provides a useful framework: our minds work in two distinct modes.
• System 1 is our intuition. It’s fast, automatic and effortless. It’s the gut feeling that allows you to instantly recognize a friend’s face in a crowd, finish the phrase “salt and…” or feel a twinge of fear at a disturbing image. It works by matching patterns and making quick connections.
• System 2 is our reasoning. It is slow, deliberate and requires conscious effort. This is the system you use to solve a math problem like 17 times 24, carefully park a car in a tight space, or follow a complex line of reasoning. It is the logical, analytical voice that we identify as our “thinking” self.
True human intelligence is not one or the other; it is a constant interplay between the two. Our fast, intuitive System 1 makes suggestions. Our slow, rational System 2 steps in to analyze, question and correct them when necessary.
From pure reason to flawed intuition
Early AI research from the 1950s and 1960s, now called "Good Old-Fashioned AI" (GOFAI), was a direct attempt to build a pure System 2 machine. It was based on formal logic and huge databases of if-then rules. This approach works well in closed domains with clear rules, such as chess or algebra, but it is brittle: it can't handle the real world, because you can't write a rule for every possible situation.
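To make that brittleness concrete, here is a minimal sketch in the GOFAI style. The rules and facts are invented for illustration; real expert systems held thousands of hand-written rules, but the shape of the problem is the same.

```python
# A toy GOFAI-style system: hand-written if-then rules plus forward chaining.
# Every rule below is invented for illustration.

RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_feathers", "lays_eggs", "can_fly"})))
# ['can_fly', 'can_migrate', 'has_feathers', 'is_bird', 'lays_eggs']
# The brittleness: a penguin (no "can_fly"), or any situation outside the
# rule base, simply falls through. Nobody can write a rule for everything.
```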
The deep learning revolution that led to today's large language models (LLMs) followed a different path. Instead of programming rules, researchers built massive artificial neural networks inspired by the brain and trained them on internet-sized datasets. In doing so, they inadvertently created something that works less like a perfect reasoner (System 2) and more like a super-powerful but flawed intuition engine (System 1).
An LLM works by extending text one word (token) at a time. It excels at continuing patterns, making plausible connections and generating intuitive responses based on the vast amount of data it has seen. That is why it can dash off a poem or conjure an image with ease. Its hallucinations come from the same engine that drives its creativity. An LLM completes patterns in ways that are common in its data: sometimes the generated pattern is present in the training data (a fact, or regurgitation), and sometimes it is a statistically plausible pattern that has not occurred in the world but could (a hallucination, or creativity). This is also why LLMs can make fundamental logical errors.
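To show what "extending text one token at a time" means, here is a minimal sketch. A toy, hand-invented probability table stands in for the neural network a real LLM uses; the contexts, tokens and numbers are all assumptions made for illustration.

```python
import random

# A toy next-token model: for each context, a probability distribution over
# continuations. These numbers are invented; a real LLM computes such
# distributions with a neural network over a vocabulary of many thousands.
NEXT_TOKEN_PROBS = {
    "salt and": {"pepper": 0.90, "vinegar": 0.08, "sorrow": 0.02},
    "the capital of France is": {"Paris": 0.95, "Lyon": 0.03, "Rome": 0.02},
}

def sample_continuation(context):
    """Sample the next token in proportion to its probability."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_continuation("salt and"))  # usually "pepper": pattern completion
# The same mechanism that usually yields the fact ("Paris") occasionally
# yields a plausible-but-wrong continuation ("Rome"). Regurgitation and
# hallucination are two outcomes of one sampling process, not two engines.
```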
The core limitations of today’s LLMs are the weaknesses of a pure System 1 operating without a System 2 to monitor its work.
A new definition of what makes us human
This reversal came as a shock. We built machines that mastered what we thought was uniquely human before they mastered what we thought would be simple for a machine. It turns out that the fuzzy, intuitive pattern matching of System 1 was more achievable with data and computing power than the crisp, logical reasoning of System 2. Machines still excel at logical tasks in closed environments, but they struggle to reason in the messy real world.
This development holds up a mirror to us. If a machine can create art and write beautifully, what does it really mean to be human? Perhaps it is not our intuition or creativity alone that defines us, but the seamless dance between our intuitive and reasoning minds. As we continue to develop these new forms of intelligence, we may discover that the journey is not just about building better machines, but about better understanding ourselves.
Where does that leave us when it comes to applying AI in business? My recommendation to fellow leaders is simple: we will have more success automating tasks than automating jobs. Companies that automate specific tasks for their employees will see real gains in efficiency. Fully automating jobs, on the other hand, requires more sophisticated AI that can move seamlessly between System 1 and System 2 thinking.
