Types of Intelligence

Posted on May 21, 2025

A logical flaw I see in common discourse is a set of mistaken assumptions: (1) that intelligence is a single attribute, (2) that all intelligence is the same, and (3) that two instances of intelligence can be directly and quantitatively compared such that intelligence(1) is greater or less than intelligence(2). Part of the logical error stems from our notion of IQ (intelligence quotient) as a quantitative measure, which falsely implies that IQ is intelligence.

Another way to make this implicit assumption explicit: “Intelligence is the attribute that makes humans more capable than other animals; more intelligence looks like a more capable human, and less intelligence means lesser cognition that resembles more primitive animals. As we envision the possibility of something more intelligent than a human (say, a super-AI), it will continue along this linear extrapolation, resembling a primitive animal less and looking more like a human - but better than a human, in whatever way humans are better.” (This assumes super-AI will be more intelligent than humans, which is a conversation for another time.)

This is an easy extrapolation. But it is overly simplistic, it is wrong, and following that line of logic leads to erroneous thinking.

Let’s explore different types of intelligence. Some will quibble and say “[fill in the blank] is not intelligence”, which is exactly the point of writing this: many people implicitly believe it to be intelligence, and even those who don’t think they believe it at least presume it to be intelligence in their logic. As with all my dailies, this post is not intended to provide an exhaustive list nor to be 100% correct. I intend only to move a line of thought forward.

Knowledge

The corpus of knowledge is vast, and subsuming a larger part of that corpus in an entity makes it more capable of mental cognition. (Again, whether more knowledge is actually more intelligence or not - I don’t think it is personally - is not my point. My point is that I’d bet the median reader implicitly assumes more knowledge to equate to more intelligence. I’ve made this point enough times now and will stop repeating myself.)

Along this vector of cognition the machines are already significantly more capable than the most knowledgeable human on the planet, having subsumed nearly the entirety of our corpus of knowledge into their training data.

Reasoning

Breaking down a problem or goal into multiple steps and constructing a logical train of thought. For a period it was trendy in AI circles to refer to this as the difference between Type 1 and Type 2 thinking.

Humans do this both implicitly, when tracing our conclusions back to their derived parts, and explicitly, when choosing to think through a challenging problem. Most animals have either no ability to reason or only a very primitive one.

One distinction between the way humans reason and the way machines reason is, I suspect, the direction of reasoning. Humans reason intuitively and reason forward, but we also trace backwards and look for evidence of errors in the chain of thought. Machines, generating tokens sequentially, only reason forwards. I suppose a model might be trained via RLHF to look for errors in its chain of thought, and alternatively you could derive variations as an approximation of backwards review (you could argue the O-class models are solving for this limitation), but the inability of a machine to intuitively arrive at the thought “hold on, I’ve found myself at a place that doesn’t make sense, I need to go back and figure out where I went wrong” seems to me like a major limitation versus the way humans reason.
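To make that distinction concrete, here is a toy sketch of what layering a review pass on top of forward-only generation might look like. Everything in it is hypothetical: `propose` and `verify` stand in for a model’s next-step generator and a step checker, neither of which exists in this form.

```python
# Toy sketch, not a real model API: forward-only step generation plus an
# explicit review pass that backtracks to the first step that doesn't hold up.

def reason(problem, propose, verify, max_steps=8, max_reviews=3):
    """Reason forwards, then review the chain for a step that went wrong.

    On finding one, remember it, truncate the chain just before it, and
    reason forwards again -- a crude stand-in for the human "hold on, I need
    to go back and figure out where I went wrong" reflex.
    """
    chain = []
    avoid = set()                                    # steps already found to be wrong
    for _ in range(max_reviews):
        while len(chain) < max_steps:
            step = propose(problem, chain, avoid)    # forward-only generation
            if step is None:                         # generator thinks it's done
                break
            chain.append(step)
        bad = next((i for i, s in enumerate(chain)
                    if not verify(problem, chain[:i], s)), None)
        if bad is None:
            return chain                             # every step checks out
        avoid.add(chain[bad])                        # note what went wrong
        chain = chain[:bad]                          # backtrack, then go forwards again
    return chain


# Toy usage: "reason" toward a target sum, where the generator's first instinct
# (adding 5) is deliberately wrong so the review pass has something to catch.
def propose(target, chain, avoid):
    if sum(chain) >= target:
        return None
    for candidate in (5, 3):                         # prefers the wrong step first
        if candidate not in avoid:
            return candidate
    return None

def verify(target, prefix, step):
    return step == 3                                 # in this toy, only +3 steps are valid

print(reason(9, propose, verify))                    # -> [3, 3, 3]
```

The interesting part, of course, is that a human doesn’t need the explicit review pass at all; the “something is off” signal arrives intuitively.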

Speaking of…

Intuition

If reasoning is Type 2 thinking then intuition is Type 1 thinking. It’s rapid, effortless and is how intelligence arrives at a conclusion without expending unnecessary energy. Maybe one way to frame intuition is that it’s unconscious intelligence.

In some ways, next-token prediction could be construed as transformer-based machines’ equivalent of intuitive intelligence.

I’d argue this is somewhat different from…

Projection

Both human and animal minds are continuously projecting, in time period N, what their environment will look like in the next time period N+1. This is a subconscious process that occurs without deliberate effort for as long as the animal is conscious.

Apply pressure with your hand to a doorknob -> the knob turns, the latch releases, and you are able to open the door

A bird is in the sky overhead -> the bird moves forward in the direction it’s facing

Step in a deep puddle -> your socks get wet

Call your spouse stupid -> an argument begins

When there is a mismatch between the projection our brain makes and the reality we experience, we immediately interrupt and reflect. What was wrong about our assumption? We switch to Reasoning-type intelligence to understand where our projection went wrong.

A machine intelligence does not project. If the latest user input in a message history is nonsensical, the machine doesn’t think “hold on, this is not the response I expected, what went wrong here.” The machine proceeds with next-token inference (intuitive intelligence) without using the unexpected input as a signal to re-evaluate.

If I were an AI researcher, a topic I would find very exciting to explore would be using Projection-based intelligence, running in parallel, to trigger reasoning and intuition subprocesses.
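Purely as a thought experiment, here is a rough sketch of what such a loop might look like. None of these components exist today; `observe`, `project`, `intuit`, `reason`, and `surprise` are hypothetical placeholders for the subprocesses described above.

```python
# Hypothetical sketch: a projection loop that runs alongside intuition and
# escalates to reasoning whenever reality diverges from what was projected.

def run_agent(observe, project, intuit, reason, surprise, threshold):
    """At each step, compare the actual observation against the projection made
    at the previous step; on a large mismatch, hand off to the reasoning path."""
    expected = None
    while True:
        actual = observe()
        if actual is None:                          # no more input
            break
        if expected is not None and surprise(expected, actual) > threshold:
            # "Hold on, this is not what I expected -- what went wrong here?"
            yield reason(expected, actual)          # slow, deliberate path
        else:
            yield intuit(actual)                    # fast, next-token-style path
        expected = project(actual)                  # projection for time step N+1


# Toy usage: a numeric stream where one value breaks the projected pattern.
stream = iter([1, 2, 3, 10, 11])
outputs = run_agent(
    observe=lambda: next(stream, None),
    project=lambda x: x + 1,                        # naive projection: "next is x+1"
    intuit=lambda x: f"intuit: {x}",
    reason=lambda exp, act: f"reason: expected {exp}, got {act}",
    surprise=lambda exp, act: abs(exp - act),
    threshold=2,
)
for out in outputs:
    print(out)
```

The sketch glosses over everything hard - what “surprise” should mean, how to project over token sequences rather than numbers - but it captures the shape of the idea: projection runs continuously and decides which subprocess gets the next turn.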

Boredom + Frustration

Now we get to the types of intelligence that I believe make humans very special - although I suspect many would argue these are not forms of intelligence.

The first type of uniquely human intelligence is a form of emotional intelligence. To distinguish it from the emotional intelligence quotient (EQ), I will simply call out the specific emotions that I believe are particularly valuable to human mental cognition; the list is not exhaustive.

Boredom. The animal brain is a dopamine-seeking machine. Dopamine can be derived from material outcomes like food and sex but also from mental outcomes - understanding a problem, accomplishing a goal. A lack of dopamine leads to angst and restlessness, and it drives the mind to derive novel mental processes and behaviors.

Dopamine might be construed as analogous to the reward function built into a machine intelligence. This would be a mistake. A machine accepts an input and produces an output in response according to its reward function. No input means no output. An animal, and specifically a human, will respond to no input by seeking out a reward. The mind will conjure an input to solve in order to relieve the pain it experiences in the absence of a reward.

Boredom may lead to behavior like creating novel problems or tasks to work on, or remixing existing knowledge, intuitions, or concepts from seemingly disparate domains into novel outputs. Remixing is an idea I’ve recently been toying with and may write about in the future.

(If I might interject with a personal tangent here, I believe boredom to be one of the most powerful attributes humans have been endowed with. My boredom is a state that I try to actively manage and, indeed, during some periods actively cultivate within myself.)

Frustration. I typically focus on frustration from a negative lens - frustration shuts down prefrontal cortex activity and overrides complex reasoning with base-level intuitive thoughts and/or action patterns. These can lead to emotional outbursts, violence, and destructive actions.

This isn’t quite the same thing as switching from Type 2 to Type 1 thinking. It’s the mind agonizing over its own thought processes to force different outcomes. Many of these outcomes will be wasteful, but some of them will lead to breakthroughs and novel avenues of possibility.

As far as I know, machines have neither the capability to give up on an existing thread nor the capability to initiate new threads. This seems to me like a major limitation.

Will

I’m really going out on a limb here, but I will make the argument that Will is a major component of human and animal intelligence.

Desire, purpose, motivation, greed. All are flavors of Will. Will drives us toward outcomes, and, cultivated within the realm of Boredom and Frustration, it initiates the cognitive processes that lead to those outcomes.

Synthesis

By Synthesis I am not referring to intuitive Type 1 thinking, where an intelligent entity might linearly combine ideas to reach conclusions.

I’m referring to the passive background process by which an intelligence combines (remixes?) an idea that is presently on its mind, or hovering in the background, onto an existing cognitive thread. These moments of synthesis are serendipitous and, as far as I know, the process of synthesis is not well understood.

Synthesis is the well from which we derive breakthroughs and make leaps of progress.

From synthesis, Archimedes watches the bath water overflow when he steps in and derives Archimedes’ Principle, which allows him to solve the problem he’s been thinking about: determining whether an object is made of pure gold.

Newton, while idling, watches an apple fall from a tree and wonders whether it has relevance to the celestial objects he’s been thinking about, which unlocks the reasoning thread from which he derives his law of universal gravitation.

Synthesis is likely unique to humans, a result of our naturally limited capacity for knowledge and of the concurrency of threads that our cognitive minds maintain at any given time as a function of our individual Will. You might imagine a machine intelligence designed to brute-force synthesis, but at first glance, without having given it much thought, this seems to me horribly inefficient, and I’m not sure it would even lead to novel breakthroughs, or recognize the novel breakthroughs if it stumbled upon them.

Diversity

One major strength that we have as humans is that we exist as numerous instances, and that those instances are diverse and individual.

The specific parameters of our Wills, our specific tolerances for Boredom and Frustration, the limited Knowledge each of us has the capacity for (leading to differences in the Knowledge we can remix and apply), not to mention the fact that we may not all Intuit and Reason in the same way as one another…

These are a major strength of our collective human intelligence. They foster our capacity for accomplishment and advancement as a species, especially as we have also developed a (biologically) unique capacity for cooperation and for transmitting information across space and time.

One argument I have been hearing recently in AI Thought Leadership Eliterati circles is that one reason to believe Super Intelligence(tm) will lead to a brighter and more capable economic future is the ability to replicate multiple copies of the same intelligence.

That a reason to believe ASI will get us to colonize Mars is that we will be able to have a million copies of the Elon Musk of AI, and that those million copies of the Elon Musk AI will solve the problem better and faster.

Will it? I’m not saying that view isn’t directionally correct, or that having more copies of Elon won’t be somewhat helpful to the effort, but it is at minimum worth challenging the idea that the result will primarily be a result of these millions of Elons.

It’s not clear to me that the blockers to knowledge breakthroughs, to the “unlock moments” in the timeline of humanity, come down to more “horsepower” of the same intelligence.

I, for one, am much more bullish on a very broad long tail of diverse intelligences - some of them human, some of them machine, and some of them others yet to be devised, all with varying parameters of intelligence levels and types - collaborating and competing with one another as well as working and progressing individually, in an entropic, collective sphere of intelligence, than I am on one instance of intelligence, regardless of how maxed out its stats are on one or several or even all of the dimensions of intelligence.