Take a step back from today’s AI hype to ask a more fundamental question: do we truly understand what intelligence is? While large language models dominate headlines and investment, this opening essay argues that many core scientific questions about intelligence, computation, and understanding remain unresolved. Drawing on perspectives from computation, cognition, and the philosophy of science, the series explores the limits of current AI, what is missing from today’s models, and what future paths toward deeper intelligence might look like.

AI has arguably been the most transformative technology of the past few years. The remarkable success of large language models (LLMs) has triggered excitement, imagination, and anticipation on an unprecedented scale, reshaping the global technological and socio-economic landscape. Research output on AI grows daily, and competition among tech companies racing toward AGI has never been more intense. Terms like scaling laws, agents, inference efficiency, GPUs, and data have become catchphrases across the research, industry, and investment communities. Everywhere you look, AI dominates the conversation: headlines promise revolutions, and investors pour billions into computation and data platforms. Yet amid the frenzy and noise, one simple question often goes unanswered: do we actually understand what intelligence is?
Most discussions remain anchored in the current LLM paradigm. But despite their undeniable success, fundamental questions about AI remain open: How far are we from AGI, and is it achievable at all? Will scaling laws continue to hold? What might future AI technologies look like? To make progress, we need to step back from the fever pitch and examine the modeling of intelligence from a broader scientific perspective.
This series of posts is my attempt to do just that. I will share thoughts on the nature of AI, its scientific roots, its promises, its limits, and its possible futures. Rather than staying confined to the LLM framework, I want to step out and view intelligence modeling through the lenses of computation, cognition, and the philosophy of science. Here are some of the key topics I will be exploring:
- **The boundary of computability is the limit of AI.** AI, as a computational model, cannot exceed the boundaries of computation itself (a toy sketch of this argument follows the list).
- **The core mechanisms of intelligence.** Abstraction, association, and analogy as the foundations of thought.
- **Why modeling intelligence is so difficult.** A mind observing itself requires frameworks that go beyond formal logic systems.
- **How current AI models really work.** Imitation enabled by statistical evidence rather than genuine understanding.
- **What’s missing in today’s AI.** Genuine abstraction and representation of the world.
- **The future of AI.** Why symbolic methods will still play a significant role.
- **Creation and civilization.** How we might approximate the highest levels of intelligence.
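
To make the computability point concrete, here is a minimal Python sketch of Turing’s classical diagonal argument. To be clear, this is my illustration, not a formal proof script: the function `halts` is a hypothetical decider introduced only to be refuted, and `diagonal` is the program that defeats it.

```python
def halts(prog, arg):
    """Hypothetical halting decider, assumed only for contradiction:
    returns True iff prog(arg) eventually halts.
    No total, always-correct implementation can exist."""
    raise NotImplementedError  # placeholder: cannot actually be written

def diagonal(prog):
    """Do the opposite of whatever `halts` predicts about prog(prog)."""
    if halts(prog, prog):
        while True:   # predicted to halt -> loop forever instead
            pass
    return            # predicted to loop -> halt immediately

# Now ask: does diagonal(diagonal) halt?
#   If halts(diagonal, diagonal) returns True,  diagonal(diagonal) loops.
#   If halts(diagonal, diagonal) returns False, diagonal(diagonal) halts.
# Either way `halts` is wrong about some input, so it cannot exist.
```

Whatever architecture an AI system uses, it is still a computational process, so it inherits this boundary: there are well-defined questions no amount of scaling can decide.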