In the last post, we discussed traditional conceptions of intelligence and then introduced the recent observations and definition offered by Dr. Michael Levin, a developmental biologist and expert in regenerative medicine. As insightful and useful as this definition is, we have to ask: Is Levin’s definition of cognition as “goal-directedness” enough to define intelligence, especially human intelligence?
Taking a step back and looking at intelligence through the lens of philosophy places Levin’s definition and observations into a wider context and gives other disciplines a chance to elucidate the characteristics of human cognition.
In an unusual coincidence, the results of one cognitive neuroscientist’s research corresponded with Aristotle’s 4th-century BCE description of human intellectual capacities. Recall the definition of intelligence introduced in Part 1: Aristotle described a “passive” and an “active” intellect. Not only do we actively perceive sensible objects; the mind also acts on those perceptions, both subconsciously and consciously.
While searching for a mathematical method to discern the brain’s hierarchical organization, Dr. Michael Ferguson and his team identified three fundamental neural networks in the human brain. According to Ferguson:
[The] typical brain function in humans is composed of (in order) the following: primary sensorimotor networks, the so-called default mode network (responsible for mental simulation), and an array of networks that are correlated with fluid intelligence. . . .
Reading Aristotle’s De Anima sometime later, Ferguson was astonished.
Aristotle identifies sensory faculties in the human soul giving rise to memory and imagination, which give rise to intellective faculties, which in turn exercise top-down control on appetitive faculties.
Although the specific details are beyond the scope of this post, Ferguson states that this is an almost one-to-one correspondence with the three neural networks identified in his team’s research.
“Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom.” —Clifford Stoll
Where does artificial or engineered intelligence fit in the traditional scheme of intelligence? Computers have outpaced human capabilities for decades, whether in processing speed or sheer computation. If intelligence is simply information processing, it is easy to imagine that engineered intelligence or superintelligence is a new kind of intelligent being that is like us but better than us. Some developers claim that the layers of complexity built into these text generators will allow them to “evolve” without further human input. If this is literally true, there is justifiable concern about the apparent increase in agency in machine learning systems. The claim is that they are capable of becoming more like us.
Perhaps this is where we can recall Premack’s significant question: How is AI different from us? Many technology experts make the crucial observation that current AI systems have narrowly defined competencies. Like many animals, they are nowhere near being “like us.” Many of those who believe they are creating a new intelligent agent like us may be engaged in wishful thinking.
It might be helpful to recall that the computational, information-processing theory of mind is a metaphor. When we apply the word “intelligence” to AI systems, are we using it in the same sense as when we describe human intelligence?
Heated debates about the existential threat that AI poses can be found everywhere. The differences in opinion stem precisely from the different meanings given to intelligence. From Jobst Landgrebe and Barry Smith in Why Machines Will Never Rule the World: Artificial Intelligence Without Fear, we learn that modeling the neural architecture required to engineer genuinely human intelligence is mathematically impossible. What we currently know about mental phenomena is that each can be realized by multiple neural pathways, and those same circuits also take part in realizing other mental phenomena. No model can account for all of these uncertainties.
Given the evolutionary nature of biological intelligence, some experts propose a similar trajectory for AI. Nick Bostrom, founder of Oxford University’s Future of Humanity Institute, has expressed his belief that computational brains may one day become the dominant life form on Earth, both in his 2014 book Superintelligence: Paths, Dangers, Strategies and in a 2015 TED talk. His colleague, Carl Shulman, recently outlined specific pathways by which AI might “take control” of many aspects of human government, military operations, and society. Whether or not these algorithm-based entities can become conscious, free agents “like us,” it certainly would be disastrous if we lost control of their use.
Setting aside the definition of intelligence for a moment, we should examine our attitudes towards this technological extension of ourselves. Gary Smith, Senior Fellow at the Walter Bradley Center for Natural and Artificial Intelligence, reminds us that we should not rely on these “discriminatory black-box algorithms.”
The real danger today is not that computers are smarter than us but that we think computers are smarter than us and consequently trust them to make decisions they should not be trusted to make.
These unknowns (the kind of intelligence being created, our ability to control it, and our expectations of what it can accomplish) fuel the debates and contribute to the sense of unease regarding AI.
In the next and last post, we will stand at the intersection of biology, philosophy, and technology to see what definitions of intelligence tell us about ourselves.