On Intelligence
The ephemeral nature of intellect

May 16, 2024

In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, sparking a global conversation about what truly constitutes intelligence.

Now, nearly three decades later, we find ourselves still grappling with the same existential questions – but our artificial counterparts have grown far more sophisticated. I am certainly not a graduate-level researcher ready to offer a novel perspective on “what AGI means” or “when AGI will be achieved.” But I am very interested in the nature of intelligence.

What separates our carbon-based cognition from the silicon synapses firing in data centers across the globe? How can we as humans measure the unmeasurable – the spark of awareness, or the depth of understanding?

Definitions of intelligence range from simple data processing to economic output, from self-awareness to emotional experiences. As Joshua Tenenbaum suggests, our minds are essentially "collections of general-purpose neural networks with few initial constraints." We're not so different from machines in that respect – we often rely on mental shortcuts, or heuristics, to make decisions.

Take navigation, for example. We think in terms of landmarks and angles, much like how Google Maps calculates the best route. But machines can crunch numbers faster than we can blink. This raises the question: what kinds of intelligence actually matter for success? One could argue that traits like drive and resourcefulness are as important as computational ability in predicting outcomes, if not more so.
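
To make that parallel concrete, here is a minimal sketch of the kind of search a routing engine performs – A* over a toy road network, with a straight-line "as the crow flies" heuristic standing in for our landmark intuition. The graph, distances, and place names are invented for illustration; this is not Google Maps' actual data or algorithm.

```python
import heapq

# Toy road network: node -> [(neighbor, road_distance_km)] (made-up values)
GRAPH = {
    "home":   [("park", 2.0), ("mall", 4.5)],
    "park":   [("mall", 1.5), ("office", 5.0)],
    "mall":   [("office", 2.0)],
    "office": [],
}

# Straight-line distance to the goal: the machine's version of
# "which landmark looks closer?" (also made-up, chosen to never overestimate)
CROW_FLIES = {"home": 5.5, "park": 3.0, "mall": 2.0, "office": 0.0}

def a_star(start: str, goal: str) -> list[str]:
    """Find a shortest route, guided by the straight-line heuristic."""
    # Frontier entries: (estimated_total, cost_so_far, node, path)
    frontier = [(CROW_FLIES[start], 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in GRAPH[node]:
            estimate = cost + dist + CROW_FLIES[neighbor]
            heapq.heappush(frontier, (estimate, cost + dist, neighbor, path + [neighbor]))
    return []

print(a_star("home", "office"))  # ['home', 'park', 'mall', 'office']
```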

Alan Turing's famous "imitation game" proposed a simple test: if a machine can convincingly pretend to be human in conversation, it's exhibiting intelligence. But passing this test doesn't actually prove intelligence, just the ability to simulate it. John Searle's "Chinese Room" thought experiment drives the point home, illustrating how a computer might pass the Turing test without truly comprehending language – not understanding, just simulating it.

But then, how do we know we aren't just "simulating understanding" ourselves? Our use of language is, in a sense, also just pattern-matching over a stored vocabulary and grammar. The true difference between a GPT tool call to a dictionary API and our neuronal equivalent is unclear.
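
As a toy illustration (the rule book and phrases here are invented), Searle's room fits in a few lines of code – symbols in, symbols out, with no understanding anywhere in between:

```python
# Searle's Chinese Room as code: the "room" maps input symbols to output
# symbols with zero comprehension of either. (Entries invented for illustration.)
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def room(symbols: str) -> str:
    """Return the rule book's response; understanding never enters into it."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # 我很好，谢谢。
```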

When it comes to consciousness and sentience, things become even murkier. Turing himself acknowledged that proving machine consciousness is as impossible as proving human consciousness – we can only truly know our own minds. Ask Alexa if she's conscious, and she'll confidently say, "I know who I am." But that's just clever programming, not true self-awareness.

The conscious-versus-not-conscious debate ultimately boils down to the same “man of science, man of faith” conflict that powers conversations about religion. Just because you can’t prove something is true doesn’t mean it isn’t – but that doesn’t mean you should believe it is, either.

There's a bittersweet irony in our quest to recreate our own intelligence. We yearn to birth a new form of consciousness, yet in doing so we may render our own obsolete.

In Kazuo Ishiguro’s “Klara and the Sun”, there is a new class of enhanced humans: “lifted” individuals, essentially designer babies genetically altered at birth to become gifted students. Narrated from the perspective of an AI robot, the novel explores how intelligence is often captured in the context of adaptability and experience – how we as humans benefit from thousands of years of evolutionary pre-training preparing us for new tasks, and how the future of artificial intelligence is not limited to synthetic chips and servers.

If trends continue, computers will eventually surpass human intelligence in every way. They're already leagues ahead in mechanical intelligence, crushing us at games like chess and Go. But beating humans at games with defined rules is not representative of true utility. In the "wicked world," as David Epstein describes in Range, "the rules of the game are often unclear or incomplete, there may or may not be obvious, repetitive patterns, and feedback is often delayed, inaccurate, or both."

A friend of mine enjoys discussing the concept of “economic AGI” – that is, artificial general intelligence evaluated by its ability to equal or surpass human capabilities at everyday white-collar economic tasks (such as those of an entry-level analyst or consultant). If the main benchmark we’re measuring for human intelligence is economic output, then we are incredibly close to a world in which this form of AGI has been achieved. But does this truly capture the essence of intelligence?

We often consider well-read, knowledgeable people “smart”, but that is the technological equivalent of having a large database backend and a simple retrieval pipeline. A larger database means more knowledge, yes, but not a higher intelligence quotient or greater reasoning ability.
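
That analogy is nearly literal. Here is a hypothetical sketch of such a "well-read" system – a store of facts plus naive keyword matching, with no reasoning anywhere in the pipeline (the facts and query are just examples):

```python
# A "well-read" system: a large store of facts plus naive keyword retrieval.
# More rows mean more knowledge, but the reasoning ability stays at zero.
KNOWLEDGE_BASE = [
    "Deep Blue defeated Garry Kasparov in 1997.",
    "The Turing test is also called the imitation game.",
    "ARC-AGI measures skill-acquisition efficiency.",
]

def retrieve(query: str) -> str:
    """Return the stored fact sharing the most words with the query."""
    words = set(query.lower().split())
    return max(KNOWLEDGE_BASE,
               key=lambda fact: len(words & set(fact.lower().split())))

print(retrieve("who defeated kasparov"))  # Deep Blue defeated Garry Kasparov in 1997.
```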

Perhaps we're not so different from the simulacra. Yet there remains that tantalizing hint of something more, a spark of creativity and intuition that has birthed art and music and literature throughout human history.

Will artificial minds one day dream of electric sheep, penning poetry about the beauty of a sunset rendered in pixels? Or will consciousness forever elude our silicon children, leaving them as hollow reflections?

Despite performing well on traditional metrics, all existing large language models falter on the ARC-AGI task, which aims to be a better benchmark of human-like general fluid intelligence – measuring skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience.

And when considering the concept of a 10X engineer (a person who can produce ten times the output of an average developer), AI is both novice and expert. We consider a person who can solve a complex math problem in seconds a genius; computers are very well equipped for this, but that doesn’t automatically make them geniuses.

It can take AI systems thousands or hundreds of thousands of attempts to accomplish something reliably, so in pure trial count the learning process is far slower than a human’s. But machines are constrained by energy and compute, not wall-clock time – they can run those trials in parallel and at silicon speed, which makes them brilliantly fast in practice.
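
A back-of-the-envelope comparison, using made-up but plausible rates, shows just how sharply trial count and wall-clock time diverge:

```python
# Hypothetical rates: a human manages one attempt every 30 seconds;
# a simulator runs 1,000 environments in parallel at 10 attempts/second each.
HUMAN_RATE = 1 / 30          # attempts per second
MACHINE_RATE = 1_000 * 10    # attempts per second

trials = 100_000             # "hundreds of thousands of attempts"
print(f"human:   {trials / HUMAN_RATE / 3600:,.0f} hours")  # ~833 hours
print(f"machine: {trials / MACHINE_RATE:,.0f} seconds")     # 10 seconds
```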

Does this matter? Or are our conventional ideas about what “intelligence” means bad proxies for machine intelligence?

With the advent of the transformer, AI is often no longer even attempting to mimic human biology. Why do we assume that the artificial brains we are creating have to function the same as human ones, and that we must evaluate them by the same benchmarks?

At the same time, if machines can exceed our capabilities for the tasks we consider measures of human intellect, then perhaps it does not matter how this is being accomplished under the hood. We understand that ultimately, machine learning isn’t magic – it’s math.

As we stand on the precipice of profound change, we must rapidly adjust to a world where human intellect is no longer on top. General intelligence is just the start; superintelligence is next. Given this, will our human intellect be but a spark in time?

One could read this post as simply the synthesis of dozens of conversations this summer in San Francisco – an amalgamation of ideas and questions that don’t ultimately lead to anything. We know we can’t prove many of these ideas, and we have difficulty even defining them, so the conversations run in circles. But that is also the point.

I have no final answers, only curiosity and a growing sense of wonder. I can't help but marvel at what has been accomplished in the last few years. If scaling laws hold true, new forms of consciousness will continue to emerge and reshape our understanding of what it means to think and to be.