Book Review: How To Create A Mind — Ray Kurzweil

If you set out to build a computer that could think like a person, where would you start?  What would you want to know?  How would you test your ideas?

When we build a computer simulation, we often start by studying the thing we want to simulate.  We then build a theory or a mathematical model of how it works.  And then we test the model against cases where we already know the answer and use the results to update the model.  Lather, rinse, and repeat until you get something useful or you determine that your model just doesn’t work.

“All models are wrong, but some models are useful.” —George E. P. Box

We can’t start to understand the mind from the perspective of trying to build one without first studying the brain.  Lots of animals have brains, so what is it that is different about the mammalian brains from those of the lower animals?  And how do human brains differ from other mammals?  Can we isolate key functions and put together a good working theory of operation?

Ray Kurzweil, in How To Create A Mind takes this approach.  Kurzweil is an inventor, an engineer, an entrepreneur; he is not a neuroscientist.  He quite clearly intends to see this work carried to its conclusion when an electronic mind of or beyond human intelligence becomes reality.

After talking with people who have worked in the field of Artificial Intelligence (AI), it seems appropriate to make a few remarks before continuing.  First, the term “Artificial Intelligence” makes some people shudder.  This seems to be partly because the field didn’t advance as quickly as everyone believed it would in the 20th century, and partly because modern “smart” computers remain unable to perform anything even close to “common sense,” even those that use algorithms like the ones Kurzweil proposes.  The skeptics’ argument goes something like this: the brute force capabilities of, say, Deep Blue or Watson are so vast that their “smarts” come simply from immense computational capability, not from any particularly smart algorithms.  In essence, since Watson or Siri doesn’t “understand” you the way other humans do, they never will.  End of story.

There is some truth here.  Even advanced modern computers can’t make logical inferences like a human can.  I just asked Siri, “Tell me something I should know.”  She responded, “That may be beyond my abilities at the moment.”  But I am not convinced that is the end of the story.  Nate Silver, in The Signal and the Noise, talks a lot about expert forecasts, and much of what he says suggests we shouldn’t give them too much weight, simply because of how often their predictions are terrible.

I’m very persuaded that Kurzweil’s Law of Accelerating Returns will enable things in the not-too-distant future that we can’t imagine as possible today.  There is simply too much evidence in support of it to ignore.  The capabilities of today’s computers would shock the engineers who built the ENIAC.  In 1949, Popular Mechanics suggested that computers might someday weigh less than 1.5 tons.  Ken Olsen, founder of Digital Equipment Corporation, famously said in 1977, “There is no reason anyone would want a computer in their home.”  These dates aren’t that far in the past, so it is clear that very bright people can suffer from a remarkable lack of vision, particularly in an industry where technological capability doubles in less than two years.  So I think it is reasonable to expect that the continuing growth of information processing capability will give us some pretty amazing things in the years to come.  Exactly what they’ll be is less certain.
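To get a feel for what that doubling rate implies, here is the back-of-the-envelope arithmetic (my own illustration, using dates that appear in this review, not a calculation from the book):

```latex
% Exponential growth with a doubling period T_d (taken here as two years):
\[
  \mathrm{capability}(t) = \mathrm{capability}(t_0)\cdot 2^{(t - t_0)/T_d}
\]
% From Olsen's 1977 remark to Kurzweil's 2029 prediction:
\[
  2^{(2029 - 1977)/2} = 2^{26} \approx 6.7 \times 10^{7}
\]
% i.e., a roughly 67-million-fold increase in raw capability.
```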

“If a…scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.” —Arthur C. Clarke

Kurzweil proposes the neocortex as the key differentiating element in advanced thinking.  And he proposes pattern recognition as the key neocortical function as part of his Pattern Recognition Theory of Mind.  American neuroscientist Vernon Mountcastle discovered the columnar organization of the neocortex as its fundamental building block in 1957.  This organization, the cortical column, exists through pretty much the entire neocortex, regardless of whether it is processing speech, vision, hearing, etc.  Kurzweil proffers that this single processing unit (the cortical column), fed different inputs, can execute largely the same algorithm (pattern recognition) to achieve the necessary results, whether it is working on vision, speech, or hearing.  We know that one area of the brain can do the work of others when necessary — an effect known as plasticity.  This is well documented and gives key support to the idea of a common algorithm being used throughout the neocortex, though not specifically to it being pattern recognition.
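As a toy illustration of the “one algorithm, many inputs” idea, the sketch below reuses the same simple pattern-matching routine unchanged, whether its input vector happens to stand in for sound features or shape features.  This is my own sketch, not Kurzweil’s model; the templates and feature vectors are invented.

```python
# Toy "uniform pattern recognizer": one routine, different input modalities.
# The templates and feature vectors below are made up for illustration only.

def recognize(input_vec, stored_patterns):
    """Return the label of the stored template that best matches the input."""
    def similarity(a, b):
        # Inverse of summed absolute difference; larger means a closer match.
        return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))
    return max(stored_patterns, key=lambda label: similarity(input_vec, stored_patterns[label]))

# Hypothetical "auditory" templates and input.
sound_patterns = {"ah": [0.9, 0.1, 0.2], "ee": [0.1, 0.8, 0.3]}
print(recognize([0.8, 0.2, 0.1], sound_patterns))   # -> ah

# Hypothetical "visual" templates and input; the recognizer code is identical.
shape_patterns = {"edge": [1.0, 0.0, 1.0], "blob": [0.5, 0.5, 0.5]}
print(recognize([0.9, 0.1, 0.9], shape_patterns))   # -> edge
```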

But the approach is very effective.  Kurzweil long ago started a company to create software to do natural language processing.  You know it today as Nuance, the folks who make Apple’s Siri assistant work.  When Kurzweil’s team was trying to develop the algorithms to make natural language processing work, they tried lots of different approaches.  It was the pattern recognition approach, implemented using a hidden Markov model, that was by far the most successful.  Kurzweil argues that Siri, when processing your request, performs an algorithm very similar to the one your brain must use to process language, and that this should be thought of as a form of intelligence.  I find his arguments somewhat persuasive, but I have a colleague who argues quite strongly against that interpretation and supports his position well.  It is certainly food for thought, since for now there are no objective answers.
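For readers curious what a hidden Markov model looks like in practice, here is a minimal sketch of the Viterbi decoding algorithm in Python.  The states, observations, and probabilities are invented for illustration; a real speech recognizer uses thousands of states trained on large corpora, and this is in no way Nuance’s implementation.

```python
# Minimal Viterbi decoder for a toy hidden Markov model.
# All states, observations, and probabilities below are invented for illustration.

states = ("S1", "S2")                    # hypothetical hidden states
observations = ("low", "high", "high")   # hypothetical observed features

start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3},
           "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"low": 0.8, "high": 0.2},
          "S2": {"low": 0.3, "high": 0.7}}

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for the observations."""
    # best[t][s] = (probability of best path ending in state s at time t, previous state)
    best = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        best.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            best[t][s] = (prob, prev)
    # Trace the highest-probability path backwards.
    path = [max(best[-1], key=lambda s: best[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, best[t][path[0]][1])
    return path

print(viterbi(observations, states, start_p, trans_p, emit_p))  # e.g. ['S1', 'S2', 'S2']
```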

In spite of the fact that the author is not a neuroscientist, there is a lot of neuroscience in these pages.  Here you’ll read about the exciting work of Benjamin Libet, V. S. Ramachandran, Michael Gazzaniga, and others, and dig into the concepts of free will versus determinism, decision making, and consciousness.  What it all comes down to, in Kurzweil’s view, is that human brains execute a complex algorithm, that modern technology isn’t yet capable of this level of complexity, but that it will be someday.  Given that, how willing will we be to accept the consciousness of intelligent machines?  What will a machine need to do to convince a human it is conscious?  Is the Turing test enough?  You’ll have to come to your own conclusions here.  Given the way my children interact with Siri, I suspect that Kurzweil’s assumption of ready adoption by the next generation (though perhaps not older ones) is probably correct.

This is relevant because Kurzweil predicts that the “someday” when truly intelligent machines arrive will begin in 2029.  If you’re at all familiar with his previous work, you know his Law of Accelerating Returns is pervasive in his thought: technological progress increases at an exponential rate, and around 2029 is when he predicts technology will be sufficiently mature to support strong AI.  This is from the perspective of raw processing capabilities, not from extrapolating the successful demonstrations of any sort of machine intelligence.  Mind you, a machine has just passed the King’s Wise Men self-awareness test.  Kurzweil might be right.

But is brute force processing enough for the emergence of a conscious mind?  Kurzweil certainly thinks so.  But I don’t believe that USC neuroscientist Antonio Damasio would agree with him.  In his own writings, Damasio argues that consciousness grew out of a concept of self, which in turn is a function of biological value.  As individual biological cells organized into increasingly complex systems, their evolutionary survival came to depend on a higher level of cooperative activity.  Each cell’s natural inclination toward survival is the driving force in his view, and the connections that cells make to the brain through the nervous system amplify this survival instinct.  Damasio sees feelings and emotions as part of the mapping that the brain does of the body, a feedback mechanism to understand how it is doing, and he holds that consciousness is built up in this way and for this reason.  It is a wildly different view than Kurzweil’s in the sense that the driving force is not related to computational complexity.  Instead, it is a hierarchical, evolution-driven survival behavior.  This raises the question: can a machine, without a biological survival instinct, develop a concept of self and ultimately consciousness?  I expect more time will be spent pondering this question in the near future, not less.

“When we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things.  There was a time when fire was the end of all things: another when rocks and water were so.” —Samuel Butler, 1871

This book was hard to put down.  Kurzweil very thoroughly researches and documents his material, and whether you find him to be a genius or perhaps slightly insane, he always makes a strong case for his position.  It isn’t easy to go on the record with the sorts of predictions that Kurzweil has come to be known for, and few people do it.  But he’s smart, he’s gutsy, and he’s right far more than he’s wrong.  Spending a few hundred pages in the brilliance of Kurzweil is time well spent.

Book Review: Who’s In Charge — Michael Gazzaniga

Philosophers, theologians, and scientists have debated the concept of free will for centuries.  The very likable concept of free will is at odds with the very nature of our observations — things tend to have causes.  And so the question remains, how much decision-making freedom do we really have?  And what’s this conscious “we” concept, while we’re at it?

In the 1800s, a rather well-known mathematician made a bold statement with some far-reaching consequences.  He said that if we knew the position and momentum of every particle in the universe, then we could calculate forward in time and predict the future.  This built on the foundations of the physical laws that Isaac Newton observed and was the start of what became known as determinism.  Since the brain is subject to physical laws (in essence, its functionality is a complex set of chemical reactions), this leaves no room for free will, which pretty much everyone believes they have.

“We may regard the present state of the universe as the effect of its past and the cause of its future.  An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”

—Pierre Simon Laplace, A Philosophical Essay on Probabilities

As modern neuroscience has developed, our understanding of the brain and of the mind has progressed.  But as with all good research, every good answer leads us to more questions.  Michael Gazzaniga is a neuroscientist and a professor at one of my alma maters.  In Who’s In Charge he takes you on a journey through the concepts of emergence and consciousness, the distributed nature of the brain, the role of the Interpreter, and ultimately how these might change what we think about free will.

The conscious mind has considerably less control over the human body than it would like to believe.  This is the underlying theme of the book.  Gazzaniga’s own career working with split-brain patients (people whose brain hemispheres were surgically disconnected as a treatment for severe epilepsy), together with his review of modern neuroscience, is convincing on that point.  While it is nice to think that “we” call the shots, it becomes clear that “we” aren’t always in charge, and who this “we” is has some interesting properties.

“The brain has millions of local processors making important decisions.  It is a highly specialized system with critical networks distributed throughout the 1,300 grams of tissue.  There is no one boss in the brain.  You are certainly not the boss of the brain.  Have you ever succeeded in telling your brain to shut up already and go to sleep?”

What our conscious self thinks is largely the result of a process that takes place in the left hemisphere of our brain.  Gazzaniga calls it “the interpreter.”  This process’s job is to make sense of things, to paint a consistent story from the sensory information that enters the brain.  Faced with explaining things that it has no good data for, the interpreter makes things up, a process known as confabulation. There is a story of a young woman undergoing brain surgery (for which you are often awake).  When a certain part of her brain was stimulated, she laughed.  When asked why she laughed, she remarked, “You guys are just so funny standing there.”  This is confabulation.

“What was interesting was that the left hemisphere did not say, “I don’t know,” which truly was the correct answer.  It made up a post hoc answer that fit the situation.  It confabulated, taking cues from what it knew and putting them together in an answer that made sense.” 

But the brain is even stranger than that.  If you touch your finger to your nose, the sensory signals from the finger and from the nose take measurably different times to reach the brain.  Different enough that the brain receives the signal from the nose well before it receives the signal from your finger.  It is the interpreter that alters the times and tells you that the two events happened simultaneously.

This is where neuroscience’s contribution to the sensation of free will comes into play.  Gazzaniga says, “What is going on is the match between ever-present multiple mental states and the impinging contextual forces within which it functions.  Our interpreter then claims we freely made a choice.”  This is supported by Benjamin Libet’s experiments, which demonstrated that the brain is “aware” of events well before the conscious mind knows about them.  Libet even goes so far as to declare that consciousness is “out of the loop” in human decision making.  This is still hotly debated, but fascinating.

Gazzaniga argues that consciousness is an emergent property of the brain.  Emergence is, in essence, a property of a complex system that is not predictable from the properties of the parts alone.  It is a sort of cooperative phenomenon of complex systems.  Or, as Gazzaniga more cleverly puts it, “You’d never predict the tango if you only studied neurons.”  Emergence is a part of what’s known as complexity theory, which has increased in popularity in the last decade or so.  But at this point, designating something as an emergent property is still really just a way to say you don’t know why something happens.  And despite all the advances that have been made in neuroscience, we still fundamentally don’t understand consciousness.

Gazzaniga makes the case that the development of society likely had much to do with the development of our more advanced cognitive abilities.  That is, as animals developed more social behavior, increased cognitive skills became necessary, and this was probably the driving force in the evolution of the neocortex.

“Oxford University anthropologist Robin Dunbar has provided support for some type of social component driving the evolutionary expansion of the brain.  He has found that each primate species tends to have a typical social group size; that brain size correlates with social group size in primates and apes; that the bigger the neocortex, the larger the social group; and that the great apes require a bigger neocortex per given group size than do the other primates.”

There is some physiological evidence to support the relationship between society and neocortical function in the case of mirror neurons.  These were first discovered in macaque monkeys: when a monkey grabbed a grape, the same neuron fired in both the grape-grabbing monkey and in a monkey who watched him grab it.  Humans have mirror neurons too, though in much greater numbers.  They serve to create a sympathetic simulation of an event, which drives emotional responses.  That is, the way we understand the emotional states of others is by simulating their mental states.  So when Bill Clinton told Bob Rafsky that he felt his pain, perhaps he really did.

This is not Gazzaniga’s first book and it shows.  The work is well planned and executed.  He uses clear language to describe some of the wonderful discoveries of modern neuroscience and makes them accessible for laymen to learn and enjoy.  He discusses his own fascinating research, for which he is well known in his field, as well as other hot topics in neuroscience and their implications for modern society and for the free will debate.  He ends the book by discussing how modern neuroscience can and should be used with regard to the legal system, which caught me somewhat by surprise.  It is a fine chapter, but it doesn’t read like the rest of the book, feeling like a separate work that was added as an afterthought.

I enjoyed Who’s In Charge? immensely.  It is an excellent read and will undoubtedly challenge some of your thoughts and enlighten you about how we think about the mind and the brain today.