Book Review: How To Create A Mind — Ray Kurzweil

If you set out to build a computer that could think like a person, where would you start?  What would you want to know?  How would you test your ideas?

When we build a computer simulation, we often start by studying the thing we want to simulate.  We then build a theory or a mathematical model of how it works.  And then we test the model against cases where we already know the answer and use the results to update the model.  Lather, rinse, and repeat until you get something useful or you determine that your model just doesn’t work.

“All models are wrong, but some models are useful.” —George E. P. Box

We can’t start to understand the mind from the perspective of trying to build one without first studying the brain.  Lots of animals have brains, so what is different about mammalian brains compared with those of lower animals?  And how do human brains differ from those of other mammals?  Can we isolate key functions and put together a good working theory of operation?

Ray Kurzweil, in How To Create A Mind, takes this approach.  Kurzweil is an inventor, an engineer, and an entrepreneur; he is not a neuroscientist.  He quite clearly intends to see this work carried to its conclusion, when an electronic mind at or beyond human intelligence becomes reality.

After talking with people who have worked in the field of Artificial Intelligence (AI), it seems appropriate to make a few remarks before continuing.  First, the term “Artificial Intelligence” makes some people shudder.  This seems due in part to the fact that the field didn’t advance as quickly as everyone believed it would in the 20th century, but also in part to the fact that modern “smart” computers remain unable to perform anything even close to “common sense,” even those that use algorithms like the ones Kurzweil proposes.  The argument goes that since the brute-force capabilities of, say, Deep Blue or Watson are so vast, their “smarts” come simply from immense computational capability and not from the ability to implement particularly clever algorithms.  In essence, since Watson or Siri doesn’t “understand” you the way other humans do, they never will.  End of story.

There is some truth here.  Even advanced modern computers can’t make logical inferences like a human can.  I just asked Siri, “Tell me something I should know.”  She responds with, “That may be beyond my abilities at the moment.”  But I am not convinced that is the end of the story.  Nate Silver, in The Signal and the Noise, talks a lot about forecasts by experts, and much of what he says suggests we shouldn’t give them too much weight because of just how often their predictions are terrible.

I’m very persuaded by Kurzweil’s Law of Accelerating Returns enabling things in the not too distant future that we can’t imagine as possible today.  There is simply too much evidence in support of it to ignore.  The capabilities of today’s computers would shock the engineers who built the ENIAC.  In 1949, Popular Mechanics suggested that computers might weigh less than 1.5 tons someday.  Ken Olsen, founder of Digital Equipment Corporation, famously said in 1977 that, “There is no reason anyone would want a computer in their home.”  These dates aren’t that far in the past, so it is clear that very bright people can suffer from a remarkable lack of vision, particularly in an industry where technological capabilities double in a span of less than two years.  So I think it is reasonable to expect that the continuing growth of information processing capability will give us some pretty amazing things in the years to come.  Exactly what they’ll be is less certain.
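To get a feel for what that kind of compounding does, here’s a back-of-the-envelope sketch in Python.  The two-year doubling time is just the figure mentioned above, used purely to illustrate exponential growth, not as a precise claim about any particular technology.

```python
# Exponential growth: if capability doubles every `doubling_time`
# years, the cumulative multiplier over `years` is 2 ** (years / doubling_time).

def growth(years, doubling_time=2.0):
    return 2 ** (years / doubling_time)

print(growth(20))  # 1024.0, roughly a thousandfold in two decades
print(growth(40))  # 1048576.0, roughly a millionfold in four
```

That millionfold jump over a single career is exactly why extrapolating from today’s capabilities tends to fail so badly.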

“If a…scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.” —Arthur C. Clarke

Kurzweil proposes the neocortex as the key differentiating element in advanced thinking, and he proposes pattern recognition as the key neocortical function in his Pattern Recognition Theory of Mind.  The American neuroscientist Vernon Mountcastle discovered the columnar organization of the neocortex, its fundamental building block, in 1957.  This organization, the cortical column, exists through pretty much the entire neocortex, regardless of whether it is processing speech, vision, hearing, etc.  Kurzweil proffers that this single processing unit (the cortical column), given different inputs, can execute largely the same algorithm (pattern recognition) to achieve the necessary results, regardless of whether it is working on vision, speech, or hearing.  We know that one area of the brain can do the work of others when necessary — an effect known as plasticity.  This is well documented and gives key support to the idea of a common algorithm being used throughout the neocortex, though not specifically to it being pattern recognition.

But the approach is very effective.  Kurzweil long ago started a company to create software to do natural language processing.  You know it today as Nuance, the folks who make Apple’s Siri assistant work.  In developing algorithms to make natural language processing work, many different approaches were tried.  It was the pattern recognition approach, implemented using a hidden Markov model, that was by far the most successful.  Kurzweil argues that Siri, when processing your request, performs an algorithm very similar to the one your brain must use to process language, and that this should be thought of as a form of intelligence.  I find his arguments somewhat persuasive, but I have a colleague who argues quite strongly against that interpretation and supports his position well.  It is certainly food for thought while objective answers remain out of reach.
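To make “pattern recognition with a hidden Markov model” concrete, here is a toy sketch in Python.  Two invented hidden states stand in for phonemes, the probabilities are made up, and the Viterbi algorithm recovers the most likely hidden sequence behind a string of noisy observations.  This is a minimal illustration of the technique, not anything resembling Nuance’s actual system.

```python
# Toy hidden Markov model: recover the most likely hidden state
# sequence (think: phonemes) from noisy observations via Viterbi.
# States, probabilities, and observations are all invented for
# illustration; real speech systems are vastly larger.

states = ["s1", "s2"]
start = {"s1": 0.6, "s2": 0.4}
trans = {"s1": {"s1": 0.7, "s2": 0.3},
         "s2": {"s1": 0.4, "s2": 0.6}}
emit = {"s1": {"a": 0.5, "b": 0.5},
        "s2": {"a": 0.1, "b": 0.9}}

def viterbi(obs):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, p = max(((r, V[-1][r] * trans[r][s]) for r in states),
                          key=lambda x: x[1])
            col[s] = p * emit[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace the best path backwards from the most probable final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["a", "b", "b"]))  # ['s1', 's2', 's2']
```

A real recognizer chains thousands of such models together; the point here is only that “recognition” amounts to matching noisy input against learned patterns, which is precisely the operation Kurzweil attributes to the cortical column.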

In spite of the fact that the author is not a neuroscientist, there is a lot of neuroscience in these pages.  Here you’ll read about the exciting work of Benjamin Libet, V. S. Ramachandran, Michael Gazzaniga, and others, and dig into the concepts of free will versus determinism, decision making, and consciousness.  What it all comes down to, in Kurzweil’s view, is that human brains execute a complex algorithm, that modern technology isn’t yet capable of this level of complexity, but that it will be someday.  Given that, how willing will we be to accept the consciousness of intelligent machines?  What will a machine need to do to convince a human it is conscious?  Is the Turing test enough?  You’ll have to come to your own conclusions here.  Given the way my children interact with Siri, I suspect that Kurzweil’s assumption of ready adoption by the next generation (though perhaps not older ones) is probably correct.

This is relevant because Kurzweil predicts that the “someday” when truly intelligent machines arrive will begin in 2029.  If you’re familiar at all with any of his previous work, you know his Law of Accelerating Returns is pervasive in his thought: technological progress increases at an exponential rate, and around 2029 is when he predicts technology will be sufficiently mature to support strong AI.  This is from the perspective of raw processing capabilities, not from extrapolating the successful demonstrations of any sort of machine intelligence.  Mind you, a machine has just passed the King’s Wise Men self-awareness test.  Kurzweil might be right.

But is brute force processing enough for the emergence of a conscious mind?  Kurzweil certainly thinks so.  But I don’t believe that USC neuroscientist Antonio Damasio would agree with him.  In his own writings, Damasio argues that consciousness grew out of a concept of self, which in turn is a function of biological value: as individual biological cells organized into increasingly complex systems, their own evolutionary survival came to depend on a higher level of cooperative activity.  Each cell’s natural inclination toward survival is the driving force in his view, and the connections that cells make to the brain through the nervous system amplify this survival instinct.  Damasio sees feelings and emotions as a part of the mapping that the brain does of the body, a feedback mechanism to understand how it is doing, and he holds that consciousness is built up in this way and for this reason.  It is a wildly different view than Kurzweil’s in the sense that the driving force is not related to computational complexity.  Instead, it is a hierarchical, evolution-driven, survival behavior.  This raises the question: can a machine, without a biological survival instinct, develop a concept of the self and ultimately consciousness?  I expect more time will be spent pondering this question in the near future, not less.

“When we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things.  There was a time when fire was the end of all things: another when rocks and water were so.” —Samuel Butler, 1871

This book was hard to put down.  Kurzweil very thoroughly researches and documents his material, and whether you find him to be a genius or perhaps slightly insane, he always makes a strong case for his position.  It isn’t easy to go on the record with the sorts of predictions that Kurzweil has come to be known for, and few people do it.  But he’s smart, he’s gutsy, and he’s right far more than he’s wrong.  Spending a few hundred pages in the brilliance of Kurzweil is time well spent.


Book Review: Who’s In Charge — Michael Gazzaniga

Philosophers, theologians, and scientists have debated the concept of free will for centuries.  The very likable concept of free will is at odds with the very nature of our observations — things tend to have causes.  And so the question remains, how much decision-making freedom do we really have?  And what’s this conscious “we” concept, while we’re at it?

In the 1800s, a rather well known mathematician made a bold statement with some far-reaching consequences.  He said that if we knew the position and momentum of every particle in the universe, then we could calculate forward in time and predict the future.  This built on the foundations of the physical laws that Isaac Newton observed and was the start of what became known as determinism.  Since the brain is subject to physical laws (in essence, its functionality is a complex set of chemical reactions), this leaves no room for free will, which pretty much everyone believes they have.

“We may regard the present state of the universe as the effect of its past and the cause of its future.  An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”

—Pierre-Simon Laplace, A Philosophical Essay on Probabilities

As modern neuroscience has developed, our understanding of the brain and of the mind has progressed.  But as with all good research, every good answer leads us to more questions.  Michael Gazzaniga is a neuroscientist and a professor at one of my alma maters.  In Who’s In Charge he takes you on a journey through the concepts of emergence and consciousness, the distributed nature of the brain, the role of the Interpreter, and ultimately how these might change what we think about free will.

The conscious mind has considerably less control over the human body than it would like to believe.  This is the underlying theme of the book.  Gazzaniga’s personal career with split-brain patients (a treatment for severe epilepsy), and his review of modern neuroscience are convincing to that effect.  While it is nice to think that “we” call the shots, it becomes clear that “we” aren’t always in charge and who this “we” is has some interesting properties.

“The brain has millions of local processors making important decisions.  It is a highly specialized system with critical networks distributed throughout the 1,300 grams of tissue.  There is no one boss in the brain.  You are certainly not the boss of the brain.  Have you ever succeeded in telling your brain to shut up already and go to sleep?”

What our conscious self thinks is largely the result of a process that takes place in the left hemisphere of our brain.  Gazzaniga calls it “the interpreter.”  This process’s job is to make sense of things, to paint a consistent story from the sensory information that enters the brain.  Faced with explaining things that it has no good data for, the interpreter makes things up, a process known as confabulation. There is a story of a young woman undergoing brain surgery (for which you are often awake).  When a certain part of her brain was stimulated, she laughed.  When asked why she laughed, she remarked, “You guys are just so funny standing there.”  This is confabulation.

“What was interesting was that the left hemisphere did not say, ‘I don’t know,’ which truly was the correct answer.  It made up a post hoc answer that fit the situation.  It confabulated, taking cues from what it knew and putting them together in an answer that made sense.”

But the brain is even stranger than that.  If you touch your finger to your nose, the sensory signals from the finger and from the nose take measurably different times to reach the brain.  Different enough that the brain receives the signal from the nose well before it receives the signal from your finger.  It is the interpreter that alters the times and tells you that the two events happened simultaneously.

This is where neuroscience’s contribution to the sensation of free will comes into play.  Gazzaniga says, “What is going on is the match between ever-present multiple mental states and the impinging contextual forces within which it functions.  Our interpreter then claims we freely made a choice.”  This is supported by Benjamin Libet’s experiments, which demonstrated that the brain is “aware” of events well before the conscious mind knows about them.  Libet even goes so far as to declare that consciousness is “out of the loop” in human decision making.  This is still hotly debated, but fascinating.

Gazzaniga argues that consciousness is an emergent property of the brain.  Emergence is, in essence, a property of a complex system that is not predictable from the properties of the parts alone.  It is a sort of cooperative phenomenon of complex systems.  Or, as Gazzaniga more cleverly puts it, “You’d never predict the tango if you only studied neurons.”  Emergence is a part of what’s known as complexity theory, which has increased in popularity in the last decade or so.  But at this point, designating something as an emergent property is still really just a way to say you don’t know why something happens.  And despite all the advances that have been made in neuroscience, we still fundamentally don’t understand consciousness.

Gazzaniga makes the case that the development of society likely had much to do with the development of our more advanced cognitive abilities.  That is, as animals developed more social behavior, increased cognitive skills became necessary, and this was probably the driving force in the evolution of the neocortex.

“Oxford University anthropologist Robin Dunbar has provided support for some type of social component driving the evolutionary expansion of the brain.  He has found that each primate species tends to have a typical social group size; that brain size correlates with social group size in primates and apes; that the bigger the neocortex, the larger the social group; and that the great apes require a bigger neocortex per given group size than do the other primates.”

There is some physiological evidence to support the relationship between society and neocortical function in the case of mirror neurons.  These were first discovered in macaque monkeys: when a monkey grabbed a grape, the same neuron fired both in the grape-grabbing monkey and in another who watched him grab it.  Humans have mirror neurons too, though in much greater numbers.  They serve to create a sympathetic simulation of an event, which drives emotional responses.  That is, the way we understand the emotional states of others is by simulating their mental states.  So when Bill Clinton told Bob Rafsky that he felt his pain, perhaps he really did.

This is not Gazzaniga’s first book, and it shows.  The work is well planned and executed.  He uses clear language to describe some of the wonderful discoveries of modern neuroscience, and makes them available for laymen to learn and enjoy.  He discusses his own fascinating research, for which he is well known in his field, as well as other hot topics in neuroscience and their implications for modern society and for the free will debate.  He ends the book discussing how modern neuroscience can and should be used in regards to the legal system, which caught me somewhat by surprise.  It is a fine chapter, but it doesn’t read like the rest of the book, feeling like a separate work that was added in as an afterthought.

I enjoyed Who’s In Charge? immensely.  It is an excellent read and will undoubtedly challenge some of your thoughts and enlighten you about how we think about the mind and the brain today.

Book Review: Thinking, Fast and Slow — Daniel Kahneman

There are people who spend their lives peeling an onion.  If they are lucky, it is a sweet bulb, and offers up its layers without too many tears.  If they are very lucky, what the peeling reveals is interesting enough that others are interested and pay attention.  And if they are very, very lucky, well, then the Royal Swedish Academy of Sciences honors their efforts with the most prestigious award that onion peelers can receive.

There were two people peeling this particular onion, but one passed away before their work was fully recognized.  In telling the story of their work on the psychology of human decision making, Daniel Kahneman recognizes his longtime collaborator Amos Tversky while sharing the research that earned the 2002 Nobel Prize in Economics.

Before the 1970s, it was taken as settled that humans were rational decision makers, and that when we strayed from this rational behavior, it was driven by some strong emotion.  Anger, fear, hatred, love — these were the things that pushed us into irrationality.  This makes perfect sense.  This “Utility Theory” was well known and not really challenged because it was, well, obvious.  Tversky and Kahneman, however, challenged its depiction of rational human decision making in their 1979 paper, “Prospect Theory: An Analysis of Decision Under Risk.”  We, it turns out, don’t behave very rationally at times (the Ultimatum Game is a very good example of this).  But what made this paper special was that they went beyond documenting the failure of Utility Theory and pointed their fingers at the design of the machinery of cognition as the reason, rather than the corruption of thought by emotion.  They argued that heuristics and biases were the key players in judgement and decision making.  This was a revolutionary idea.  The first layer of the onion had been peeled back, and as you might expect, it revealed more questions.
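The asymmetry at the heart of Prospect Theory can be written down in a few lines.  The sketch below uses the value function from Kahneman and Tversky’s later (1992) cumulative version of the theory, with their published median parameter estimates; the numbers are illustrative, not a complete model of choice.

```python
# Prospect theory's value function: outcomes are valued relative to a
# reference point, gains are concave, and losses loom larger than
# gains (loss aversion).  Parameters are Tversky & Kahneman's 1992
# median estimates, used here purely for illustration.

ALPHA = 0.88   # curvature for gains and losses
LAMBDA = 2.25  # loss-aversion coefficient

def value(x):
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# A $100 loss hurts more than twice as much as a $100 gain pleases.
print(value(100))   # about 57.5
print(value(-100))  # about -129.5
```

That single kink at zero is enough to reproduce behavior Utility Theory can’t: people reject fair coin-flip bets, yet take risks to avoid sure losses.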

“Jumping to conclusions on the basis of limited evidence is so important to an understanding of intuitive thinking, and comes up so often in this book, that I will use a cumbersome abbreviation for it: WYSIATI, which stands for what you see is all there is.  System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.”

Kahneman describes the cognitive machine with a cast of two players.  These are, as he calls them, “System 1” and “System 2.”  It would be easy to call these the subconscious and the conscious; this would be a good first approximation, but it wouldn’t be entirely correct.  System 1 does operate quickly, automatically, and with no sense of voluntary control, as you would expect the subconscious to do.  But it is responsible for feeding System 2 with things like feelings, impressions, and intuitions.  System 2 generally takes what System 1 gives it without too much fuss.  But System 1 behaves in funny ways sometimes, producing some very interesting results.

Interestingly, if you give System 2 a specific task that taxes its cognitive resources, some interesting things happen.  For example, watch Simons and Chabris’s well-known selective attention video and count the number of times the players wearing white shirts pass the ball.

Did you get the correct number?  Did you see the gorilla?  About half of the people who watch the video simply do not see the gorilla.  System 2 is busy counting, while System 1 is supporting that task and not distracting it with extraneous information.  Sometimes, it would appear, we are quite blind to the obvious.  And blind to our own blindness.  This “tunnel vision” of sorts happens not only when we are cognitively busy, but also in times of stress or high adrenaline.

I experienced this personally a few years ago during an Emergency Response training exercise.  I was part of the two-man team entering a room where we had to assess the scene and respond accordingly.  The instructor running the exercise had taped a sign to the wall with information that would provide some answers to questions we would have in the room; things that would be obvious in a real situation.  Except it wasn’t obvious.  I didn’t see it.  At all.  A coworker was playing the part of an injured person and lying on the floor and safely removing him from the scene was all we could think about.  As we debriefed I was asked why I did not address the issues on the sign.  I had to go back to see the sign for myself to convince myself they were serious.  I was shocked.

After Prospect Theory, discerning the rules of the cognitive machine became a hot research area in the field of cognitive psychology.  And what researchers found is astounding.  A brief overview of the heuristics and biases can be found online, and these are discussed in detail in the text.  Some, like the affect heuristic, make perfect sense: emotion and belief or action are tied together, a significant influence on how you create your beliefs about the world.  But priming, on the other hand, is downright scary, because we have no conscious knowledge that it is taking place.

One of the interesting things that comes out of this research is that not only are humans not rational thinkers, we aren’t very good statistical thinkers either.  Kahneman and Tversky’s first paper together was “Belief in the Law of Small Numbers.”  The “law of small numbers” asserts, somewhat tongue in cheek, that the “law of large numbers” applies to small numbers as well.  There is much truth in this about how we build very lasting first impressions, quickly finding rules where random chance is a better explanation.  Kahneman’s insights into the day-to-day decisions and judgements that we make, with no conscious thought given to them, are priceless.
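A quick simulation shows why trusting small samples is so treacherous.  Below, a fair coin is flipped in samples of 10 and of 1,000, and we count how often a sample looks “biased” (70% or more heads).  The threshold and sample sizes are arbitrary choices for illustration.

```python
# "Law of small numbers": small samples produce extreme-looking
# results far more often than large ones, yet we read patterns into
# them anyway.  Simulate a fair coin and count how often a sample
# crosses an arbitrary "looks biased" threshold of 70% heads.

import random

def extreme_rate(sample_size, trials=10_000, threshold=0.7):
    random.seed(42)  # fixed seed for a reproducible illustration
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= threshold:
            extreme += 1
    return extreme / trials

print(extreme_rate(10))    # roughly 0.17: 1 in 6 small samples look "hot"
print(extreme_rate(1000))  # essentially 0.0
```

A perfectly fair coin looks “hot” in about one of every six ten-flip samples, and essentially never in a thousand flips.  First impressions are ten-flip samples.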

“The idea that large historical events are determined by luck is profoundly shocking, although it is demonstrably true…It is hard to think of the history of the twentieth century, including its large social movements, without bringing in the role of Hitler, Stalin, and Mao Zedong.  But there was a moment in time, just before an egg was fertilized, when there was a fifty-fifty chance that the embryo that became Hitler could have been a female.  Compounding the three events, there was a probability of one-eighth of a twentieth century without any of the three great villains and it is impossible to argue that history would have been roughly the same in their absence.  The fertilization of these three eggs had momentous consequences, and it makes a joke of the idea that long-term developments are predictable.”

It is pleasant to find an academic who can write a general interest book.  Too frequently the result of such an effort is a dense tome that is closer to a textbook.  Thinking, Fast and Slow is enjoyably readable.  But it is more than that.  It is a very complete book—a dissection of the machinery of the mind.  In 38 chapters, it pulls back the covers to reveal in plain language the mechanisms that operate our minds every day: the sorts of things that go on behind the scenes in every decision we make, but also the myriad ways that advertising professionals can and do manipulate us.

This is not a weekend quick read.  The paperback version weighs in at 512 pages.  That shouldn’t hold you back.  After all, this is a review of an entire lifetime of Nobel Prize-winning research, in clear language without jargon, told with its historical perspective.  There is gold on every page and I’m grateful for every one of them.

Book Review: A More Beautiful Question — Warren Berger

Questions play a significant role in how we learn.  The quest for knowledge and just plain human curiosity are natural drivers for the questions we ask.  But what is different about the most creative, the most innovative—the designers, the inventors, the engineers?  What sets them apart?  The author and journalist Warren Berger sought to find out and details his findings in A More Beautiful Question:  The Power of Inquiry to Spark Breakthrough Ideas.  What he found, in brief, was that these people differ in their exceptional ability to ask questions.  Specifically, they ask the types of questions that force a different perspective.  Something ambitious.  Something actionable.  He calls these “beautiful questions,” a la E.E. Cummings.

“Always the beautiful answer

Who asks a more beautiful question.”  — E.E. Cummings

So what is a beautiful question?  Berger gives us some examples.

  • When Intel co-founders Andrew Grove and Gordon Moore were contemplating the future of Intel, they faced a difficult decision.  Should they stick with making memory chips, or switch to the (perhaps) more promising world of microprocessors?  Their beautiful question: “If we were kicked out of the company, what do you think the new CEO would do?”  Without the emotional attachment to the company’s product history, the right decision was clear, and arguably vindicated by history.
  • When taking a photograph of his three-year-old daughter in 1943, Edwin Land was asked by her, “Why do we have to wait for the picture?”  Land, co-founder of Polaroid, went on to develop the instant photograph.
  • After losing a foot in a waterskiing accident, Van Phillips asked his beautiful question. “If they can put a man on the moon, why can’t they make a decent foot?”  His answer revolutionized the world of prosthetics.

“So then, one of the primary drivers of questioning is an awareness of what we don’t know—which is a form of higher awareness that separates not only man from monkey but also the smart and curious person from the dullard who doesn’t know or care.” — Warren Berger, A More Beautiful Question

Not satisfied simply to understand this type of questioning, Berger dives into related issues.  The discussion on the decline of question asking as we age was particularly enlightening (Chapter 2 – Why We Stop Questioning).  He cites a Newsweek article from 2010 that made a revealing, but mostly overlooked, observation:

“Preschool children, on average, ask their parents about 100 questions a day. By middle school, they’ve pretty much stopped asking.” —Po Bronson and Ashley Merryman, The Creativity Crisis, Newsweek, 7/10/2010

Not surprisingly, he asks why this happens in a well-researched and stimulating chapter that is mostly focused on the changes in education.  It is insightful and not too preachy.

Although he points out that there isn’t a specific roadmap to follow, Berger does boil down the fundamentals of the “beautiful” questioning process into its characteristic stages of Why, What If, and How.  I don’t believe too many will find that series of questions shocking, or even anything but expected.  As someone who does R&D for a living I found myself nodding in agreement but hoping for something less…obvious.  But Berger makes it real by including specifics from innovation leaders and how questioning is part of how they work.

Perhaps too much emphasis is placed on building an empire around solving the problem that prompted the question in the first place.  Early in the book, the key observation is that innovative questioners “give form to their ideas and make them real.”  But later, the emphasis shifts to good questions and how they are formed, and it stays there through the last few chapters.  I’m going to say he ended up in the right place.

Questioning in Business and Questioning for Life are the last two chapters and are full of questions you haven’t likely thought about or asked yourself.  I have a personal quest to think thoughts I haven’t had before, and these were stimulating chapters.  Questions such as “what should we stop doing?” or “what should we learn?” in business (instead of “what should we do?”).  And techniques like “thinking wrong” to “jiggle the synapses” are introduced.  It is a clever read that takes you beyond simply reading for pleasure and challenges the way you look at the world each day.

Jeff Weiner, the chief executive of LinkedIn, observed that he often asks prospective employees this reasonable and fairly straightforward question: “Looking back on your career, twenty or thirty years from now, what do you want to say you’ve accomplished?…You’d be amazed how many people I meet don’t have the answer to that question,” Weiner said.

Berger has put together a quality work.  The breadth of the content means you’re going to see something you haven’t seen before, and the things you have seen before will likely have a new perspective.  Cap that off with twenty-five pages of notes organized by chapter, and indices for both the questions and the questioners featured in the book, and this one is worth having in hardcover.

“We must embrace the notion that answers are in fact quite boring.  The Irish are especially good or infuriating in this respect.  We answer questions with questions.  But in my opinion that’s a good place to be.  A little perplexed by the perplexity of life.” —Colum McCann