I think what makes human intelligence so confusing is that we have two distinct mental faculties, intuition and... reason, for lack of a better word (the Greek word was logos, but we've long forgotten its original meaning). The two combine in different proportions, and the development of both is highly influenced by the individual's experience.
The net result is what looks like multiple kinds of intelligence -- even though under the hood, again, it's just System 1 and System 2.
I completely agree. In fact, this was the topic of my doctoral dissertation (I came up with a "dual-process theory of human intelligence").
Fantastic! I wonder, is there any way to read it? I am particularly interested in System 2.
As you have noted in "Transcend", the brain is a prediction machine. System 1 predicts by making statistical inferences. Essentially, it's an AI -- or, rather, AI is an artificial replication of System 1 in humans. But -- and I could be wrong -- no one has offered a model or theory of how System 2 makes its predictions. Yes, "logic & reason", but that says little about what it does under the hood. As I see it, it's not even verbal/symbolic; it's visual in nature. And it's not just me: "We know that Plato, like the Greeks in general, understands genuine knowledge as seeing." (Heidegger)
Thank you for this nuanced and historically grounded exploration of MI theory. Your article does an excellent job of tracing the journey from Gardner's original, complex hypothesis to the simplified, often distorted version that permeates education today. This distinction is everything. By presenting Gardner's own caveats and the theory's subsequent bastardization, you provide a much fairer assessment than the typical outright dismissal.
This history supports the idea of a "Science Spectrum Theory." On one end, we have well-established, predictive science. On the other, we have demonstrable pseudoscience. The popular version of MI—with its eight neat categories and direct implications for lesson planning—firmly sits near the pseudoscience end. However, Gardner's original work resides in a more ambiguous middle ground. It was a heuristic, a philosophical proposition, and a taxonomic challenge to the psychological establishment. It was never a proven neurological model, but as you note, it wasn't intended to be one in the way it was later sold. Its scientific weakness lies in its lack of falsifiability and predictive power, but its cultural and intellectual influence is undeniable.
This is where the analogy to intelligence testing is useful. Just as IQ tests can be criticized for largely measuring how good one is at taking IQ tests (a specific, culturally loaded skill set), MI theory can be seen as a framework for describing how good people are at different, culturally recognized pursuits. Neither gives a complete picture of human cognition. The g-factor is a powerful statistical reality, but it exists on a spectrum itself; it doesn't capture the full range of human talent and creativity that MI attempted to catalog. Viewing them as competing, black-and-white theories is less productive than seeing them as different lenses on the complex spectrum of human ability.
In conclusion, the greatest value of your article is in restoring this complexity. Labeling MI a pure "neuromyth" risks throwing the baby out with the bathwater—discarding the valuable insight that human potential is multifaceted along with the unscientific pedagogical practices it inspired. Its true legacy may not be in the science of learning, but in the philosophy of human potential, forcing a long-overdue conversation about the narrowness of our traditional metrics. Acknowledging this allows us to appreciate its historical role while confidently moving past its misapplications in modern pedagogy.
Isn't it the case that his hypothesis has been shown not to really hold up?
At least, everything I have read and heard from intelligence researchers (the public-facing ones) seems to suggest so.