If you set out to build a computer that could think like a person, where would you start? What would you want to know? How would you test your ideas?
When we build a computer simulation, we often start by studying the thing we want to simulate. We then build a theory or a mathematical model of how it works. And then we test the model against cases where we already know the answer and use the results to update the model. Lather, rinse, and repeat until you get something useful or you determine that your model just doesn’t work.
“All models are wrong, but some models are useful.” —George E. P. Box
We can’t start to understand the mind from the perspective of trying to build one without first studying the brain. Lots of animals have brains, so what is it that is different about the mammalian brains from those of the lower animals? And how do human brains differ from other mammals? Can we isolate key functions and put together a good working theory of operation?
Ray Kurzweil, in How to Create a Mind, takes this approach. Kurzweil is an inventor, an engineer, and an entrepreneur; he is not a neuroscientist. He quite clearly intends to see this work carried to its conclusion, when an electronic mind at or beyond human intelligence becomes reality.
After talking with people who have worked in the field of Artificial Intelligence (AI), it seems appropriate to make a few remarks before continuing. First, the term "Artificial Intelligence" makes some people shudder. This is partly because the field didn't advance as quickly in the 20th century as everyone believed it would, but also because modern "smart" computers, even those using algorithms like the ones Kurzweil proposes, remain unable to perform anything even close to "common sense." The objection runs like this: since the brute-force capabilities of, say, Deep Blue or Watson are so vast, their "smarts" come simply from immense computational capability, not from any particularly smart algorithms. In essence, since Watson and Siri don't "understand" you the way other humans do, they never will. End of story.
There is some truth here. Even advanced modern computers can’t make logical inferences like a human can. I just asked Siri, “Tell me something I should know.” She responds with, “That may be beyond my abilities at the moment.” But I am not convinced that is the end of the story. Nate Silver, in The Signal and the Noise, talks a lot about forecasts by experts, and much of what he says suggests we shouldn’t give them too much weight because of just how often their predictions are terrible.
I'm very persuaded that Kurzweil's Law of Accelerating Returns will enable things in the not-too-distant future that we can't imagine as possible today. There is simply too much evidence in support of it to ignore. The capabilities of today's computers would shock the engineers who built the ENIAC. In 1949, Popular Mechanics suggested that computers might someday weigh less than 1.5 tons. Ken Olsen, founder of Digital Equipment Corporation, famously said in 1977, "There is no reason anyone would want a computer in their home." These dates aren't that far in the past, so it is clear that very bright people can suffer from a remarkable lack of vision, particularly in an industry where technological capability doubles in a span of less than two years. So I think it is reasonable to expect that the continuing growth of information processing capability will give us some pretty amazing things in the years to come. Exactly what they'll be is less certain.
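To see why a two-year doubling span compounds so dramatically, here is a minimal arithmetic sketch. The function and its name are my own illustration, not a formula from the book, and the two-year period is a rough simplification of the trend described above.

```python
# A minimal sketch of compounding growth under an assumed
# two-year doubling period (an illustration, not Kurzweil's exact model).

def capability_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the growth multiple after `years`, doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over three decades, fifteen doublings compound to a ~32,768x increase.
print(capability_factor(30))  # 32768.0
```

Fifteen doublings in thirty years is why extrapolations from today's hardware tend to undershoot so badly, just as the 1949 and 1977 predictions did.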
“If a…scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.” —Arthur C. Clarke
Kurzweil proposes the neocortex as the key differentiating element in advanced thinking, and pattern recognition as the key neocortical function, in what he calls his Pattern Recognition Theory of Mind. American neuroscientist Vernon Mountcastle discovered the columnar organization of the neocortex, its fundamental building block, in 1957. This organization, the cortical column, exists pretty much throughout the entire neocortex, regardless of whether it is processing speech, vision, hearing, etc. Kurzweil proffers that this single processing unit, fed different inputs, can execute largely the same algorithm (pattern recognition) to achieve the necessary results, regardless of whether it is working on vision, speech, or hearing. We know that one area of the brain can do the work of others when necessary, an effect known as plasticity. This is well documented and gives key support to the idea of a common algorithm being used throughout the neocortex, though not specifically to it being pattern recognition.
But the approach is very effective. Kurzweil long ago started a company to create natural language processing software. You know it today as Nuance, the folks who make Apple's Siri assistant work. In the effort to make natural language processing work, lots of different approaches were tried; it was the pattern recognition approach, implemented using a hidden Markov model, that was by far the most successful. Kurzweil argues that Siri, when processing your request, performs an algorithm very similar to the one your brain must use to process language, and that this should be thought of as a form of intelligence. I find his arguments somewhat persuasive, but I have a colleague who argues quite strongly against that interpretation and supports his position well. It is certainly food for thought while there are no objective answers.
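To make the hidden Markov model idea concrete, here is a toy Viterbi decoder, the standard algorithm for recovering the most likely hidden-state sequence from observations. The states, probabilities, and the speech framing are invented for illustration; real speech recognizers use the same principle at vastly larger scale.

```python
# Toy Viterbi decoding for a hidden Markov model. All states and
# probabilities below are hypothetical, chosen only to show the mechanics.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations."""
    # V[t][s] = (probability of best path ending in state s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            # Extend the best predecessor path into state s.
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states
            )
            V[t][s] = (prob, path)
    return max(V[-1].values())[1]

# Hypothetical two-state model: think "phoneme-like" hidden states
# inferred from "acoustic-like" observations.
states = ("S1", "S2")
start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"a": 0.5, "b": 0.5}, "S2": {"a": 0.1, "b": 0.9}}

print(viterbi(["a", "b", "b"], states, start_p, trans_p, emit_p))
# ['S1', 'S2', 'S2']
```

The decoder never observes the hidden states directly; it infers them from probabilities, which is the "pattern recognition" flavor the text describes, whether or not one accepts it as intelligence.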
In spite of the fact that the author is not a neuroscientist, there is a lot of neuroscience in these pages. Here you’ll read about the exciting work of Benjamin Libet, V. S. Ramachandran, Michael Gazzaniga, and others, and dig into the concepts of free will versus determinism, decision making, and consciousness. What it all comes down to, in Kurzweil’s view, is that human brains execute a complex algorithm, that modern technology isn’t yet capable of this level of complexity, but that it will someday. Given that, how willing will we be to accept the consciousness of intelligent machines? What will a machine need to do to convince a human it is conscious? Is the Turing test enough? You’ll have to come to your own conclusions here. Given the way my children interact with Siri, I suspect that Kurzweil’s assumption of ready adoption by the next generation (though perhaps not older ones) is probably correct.
This is relevant because Kurzweil predicts that the "someday" when truly intelligent machines arrive will begin in 2029. If you're familiar with any of his previous work, you know his Law of Accelerating Returns pervades his thought: technological progress increases at an exponential rate, and by about 2029, he predicts, technology will be sufficiently mature to support strong AI. This is from the perspective of raw processing capabilities, not from extrapolating successful demonstrations of any sort of machine intelligence. Mind you, a machine has just passed the King's Wise Men self-awareness test. Kurzweil might be right.
But is brute force processing enough for the emergence of a conscious mind? Kurzweil certainly thinks so. But I don't believe that USC neuroscientist Antonio Damasio would agree with him. In his own writings, Damasio argues that consciousness grew out of a concept of self, which in turn is a function of biological value. As individual biological cells organized into increasingly complex systems, their evolutionary survival came to depend on a higher level of cooperative activity. Each cell's natural inclination toward survival is the driving force in his view, and the connections that cells make to the brain through the nervous system amplify this survival instinct. Damasio sees feelings and emotions as part of the mapping the brain does of the body, a feedback mechanism for understanding how it is doing, and holds that consciousness is built up in this way and for this reason. It is a wildly different view from Kurzweil's in the sense that the driving force is not related to computational complexity; instead, it is a hierarchical, evolution-driven survival behavior. This raises the question: can a machine, without a biological survival instinct, develop a concept of the self and ultimately consciousness? I expect more time will be spent pondering this question in the near future, not less.
“When we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.” —Samuel Butler, 1871
This book was hard to put down. Kurzweil very thoroughly researches and documents his material, and whether you find him to be a genius or perhaps slightly insane, he always makes a strong case for his position. It isn’t easy to go on the record with the sorts of predictions that Kurzweil has come to be known for, and few people do it. But he’s smart, he’s gutsy, and he’s right far more than he’s wrong. Spending a few hundred pages in the brilliance of Kurzweil is time well spent.