Ray Kurzweil and Neil deGrasse Tyson Talk Future

In this interview, astrophysicist Neil deGrasse Tyson talks with futurist Ray Kurzweil about the exponential progression of computing and touches on some of Kurzweil’s key predictions.  If you’re familiar with Kurzweil’s public talks and interviews, then you know there are certain salient points he likes to make regarding the exponential nature of information technology (see Kurzweil’s Law of Accelerating Returns).  I liked this video because it is a good collection of such points, as well as a couple of insights I hadn’t heard him express previously.  In this video, Tyson acts primarily in the capacity of host.


Runtime: 20:42


This video can also be found here.

Video Info:

Published on May 16, 2017

Future of Earth Year 2030 in Dr Neil deGrasse Tyson & Dr Ray Kurzweil POV. Documentary 2018

Ray Kurzweil – How to Create a Mind

This is one of the longer presentations I’ve seen by Ray Kurzweil.  In the video, Kurzweil discusses some of the concepts behind his latest book, How to Create a Mind.  This talk covers a lot of ground: Kurzweil’s Law (the Law of Accelerating Returns), merging with technology, pattern-recognition technology, the effects of the economy on life expectancy, solar energy, medical technology, education…  Well, you get the picture.  Check it out.


Runtime: 1:01:00

This video can also be found at https://www.youtube.com/watch?v=iT2i9dGYjkg


Video info:

Published on Jun 17, 2014

 

 

Success.com – Ray Kurzweil: The Exponential Mind

Chris Raymond at success.com interviews Ray Kurzweil.  The article is called Ray Kurzweil: The Exponential Mind.  It follows the usual Kurzweilian interview parameters (a little background, explain exponential growth with examples, discuss where technology is taking us), but it also goes into some of the things his critics have to say and talks a bit about Kurzweil’s new role at Google.

Ray Kurzweil: The Exponential Mind

The inventor, scientist, author, futurist and director of engineering at Google aims to help mankind devise a better world by keeping tabs on technology, consumer behavior and more.

Chris Raymond

Ray Kurzweil is not big on small talk. At 3:30 on a glorious early summer afternoon, the kind that inspires idle daydreams, he strides into a glass-walled, fifth-floor conference room overlooking the leafy tech town of Waltham, Mass.

Lowering himself into a chair, he looks at his watch and says, “How much time do you need?”

It doesn’t quite qualify as rude. He’s got a plane to catch this evening, and he’s running nearly two hours behind schedule. But there is a hint of menace to the curtness, a subtle warning to keep things moving. And this is certainly in keeping with Kurzweil’s M.O.

“If you spend enough time with him, you’ll see that there’s very little waste in his day,” says director Barry Ptolemy, who tailed Kurzweil for more than two years while filming the documentary Transcendent Man. “His nose is always to the grindstone; he’s always applying himself to the next job, the next interview, the next book, the next little task.”

It would appear the 66-year-old maverick has operated this way since birth. He decided to become an inventor at age 5, combing his Queens, N.Y., neighborhood for discarded radios and bicycle parts to assemble his prototypes. In 1965, at age 17, he unveiled an early project, a computer capable of composing music, on the Steve Allen TV show I’ve Got a Secret. He made his first trip to the White House that same year, meeting with Lyndon Johnson, along with other young scientists uncovered in a Westinghouse talent search. As a sophomore at MIT, he launched a company that used a computer to help high school students find their ideal college. Then at 20, he sold the firm to a New York publisher for $100,000, plus royalties.

The man has been hustling since he learned how to tie his shoes.

Though he bears a slight resemblance to Woody Allen—beige slacks, open collar, reddish hair, glasses—he speaks with the baritone authority of Henry Kissinger. He brings an engineer’s sense of discipline to each new endeavor, pinpointing the problem, surveying the options, choosing the best course of action. “He’s very good at triage, very good at compartmentalizing,” says Ptolemy.

A bit ironically, Kurzweil describes his first great contribution to society—the technology that first gave computers an audible voice—as a solution he developed in the early 1970s for no problem in particular. After devising a program that allowed the machines to recognize letters in any font, he pursued market research to decide how his advancement could be useful. It wasn’t until he sat next to a blind man on an airplane that he realized his technology could shatter the inherent limitations of Braille; only a tiny sliver of books had been printed in Braille, and no topical sources—newspapers, magazines or office memos—were available in that format.

Kurzweil and a team that included engineers from the National Federation for the Blind built around his existing software to make text-to-speech reading machines a reality by 1976. “What really motivates an innovator is that leap from dry formulas on a blackboard to changes in people’s lives,” Kurzweil says. “It’s very gratifying for me when I get letters from blind people who say they were able to get a job or an education due to the reading technology that I helped create…. That’s really the thrill of being an innovator.”

The passion for helping humanity has pushed Kurzweil to establish a double-digit number of companies over the years, pursuing all sorts of technological advancements. Along the way, his sleepy eyes have become astute at seeing into the future.

In The Age of Intelligent Machines, first published in 1990, Kurzweil started sharing his visions with the public. At the time they sounded a lot like science fiction, but a startling number of his predictions came true. He correctly predicted that by 1998 a computer would win the world chess championship, that new modes of communication would bring about the downfall of the Soviet Union, and that millions of people worldwide would plug into a web of knowledge. Today, he is the author of five best-selling books, including The Singularity Is Near and How to Create a Mind.

This wasn’t his original aim. In 1981, when he started collecting data on how rapidly computer technology was evolving, it was for purely practical reasons.

“Invariably people create technologies and business plans as if the world is never going to change,” Kurzweil says. As a result, their companies routinely fail, even though they successfully build the products they promise to produce. Visionaries see the potential, but they don’t plot it out correctly. “The inventors whose names you recognize were in the right place with the right idea at the right time,” he explains, pointing to his friend Larry Page, who launched Google with Sergey Brin in 1998, right about the time the founders of legendary busts Pets.com and Kozmo.com discovered mankind wasn’t remotely ready for Internet commerce.

How do you master timing? You look ahead.

“My projects have to make sense not for the time I’m looking at, but the world that will exist when I finish,” Kurzweil says. “And that world is a very different place.”

In recent years, companies like Ford, Hallmark and Hershey’s have recognized the value in this way of thinking, hiring expert guides like Kurzweil to help them study the shifting sands and make sense of the road ahead. These so-called “futurists” keep a careful eye on scientific advances, consumer behavior, market trends and cultural leanings. According to Intel’s resident futurist, Brian David Johnson, the goal is not so much to predict the future as to invent it. “Too many people believe that the future is a fixed point that we’re powerless to change,” Johnson recently told Forbes. “But the reality is that the future is created every day by the actions of people.”

Kurzweil subscribes to this notion. He has boundless confidence in man’s ability to construct a better world. This isn’t some utopian dream. He has the data to back it up—and a team of 10 researchers who help him construct his mathematical models. They’ve been plotting the price and computing power of information technologies—processing speed, data storage, that sort of thing—for decades.

In his view, we are on the verge of a great leap forward, an age of unprecedented invention, the kinds of breakthroughs that can lead to peace and prosperity and make humans immortal. In other words, he has barely begun to bend time to his will.

Ray Kurzweil does not own a crystal ball. The secret to his forecasting success is “exponential thinking.”

Our minds are trained to see the world linearly. If you drive at this speed, you will reach your destination at this time. But technology evolves exponentially. Kurzweil calls this the Law of Accelerating Returns.

He leans back in his chair to retrieve his cellphone and holds it aloft between two fingers. “This is several billion times more powerful than the computer I used as an undergraduate,” he says, and goes on to point out that the device is also about 100,000 times smaller. Whereas computers once took up entire floors at university research halls, far more advanced models now fit in our pockets (and smaller spaces) and are becoming more minuscule all the time. This is a classic example of exponential change.

The Human Genome Project is another. Launched in 1990, it was billed from the start as an ambitious, 15-year venture. Estimated cost: $3 billion. When researchers neared the timeline’s halfway point with only 3 percent of the DNA sequencing finished, critics were quick to pounce. What they did not see was the annual doubling in output. Thanks to increases in computing power and efficiency, 3 percent became 6 percent and then 12 percent and so on. With a few more doublings, the project was completed a full two years ahead of schedule.

That is the power of exponential change.
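The genome anecdote is just doubling arithmetic. A minimal sketch, using the article’s illustrative numbers (3 percent finished at the halfway mark, output doubling each year), shows how few doublings it takes to clear the finish line:

```python
# Illustrative doubling arithmetic from the Human Genome Project anecdote:
# 3 percent of the sequencing done at the halfway point, output doubling yearly.
percent_done = 3.0
doublings = 0
while percent_done < 100:
    percent_done *= 2   # annual doubling in output
    doublings += 1

# 3 -> 6 -> 12 -> 24 -> 48 -> 96 -> 192: six doublings finish the job.
print(doublings)  # 6
```

Six doublings from 3 percent overshoot 100 percent, which is why a project that looked hopelessly behind at the midpoint could finish early.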

“If you take 30 steps linearly, you get to 30,” Kurzweil says. “If you take 30 steps exponentially, you’re at a billion.”
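That comparison is literal arithmetic, not metaphor: thirty additive steps of one reach 30, while thirty doublings reach 2^30, just over a billion. A quick check:

```python
# Thirty linear steps of size 1 versus thirty doublings starting from 1.
linear = 30 * 1            # 1 + 1 + ... + 1, thirty times
exponential = 2 ** 30      # doubling at every step

print(linear)       # 30
print(exponential)  # 1073741824 (about a billion)
```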

The fruits of these accelerating returns are all around us. It took more than 15 years to sequence HIV beginning in the 1980s. Thirty-one days to sequence SARS in 2003. And today we can map a virus in a single day.

While thinking about the not-too-distant future, when virtual reality and self-driving cars, 3-D printing and Google Glass are norms, Kurzweil dreams of the next steps. In his vision, we’re rapidly approaching the point where human power becomes infinite.

Holding the phone upright, he swipes a finger across the glass.

“When I do this, my fingers are connected to my brain,” Kurzweil says. “The phone is an extension of my brain. Today a kid in Africa with a smartphone has access to all of human knowledge. He has more knowledge at his fingertips than the president of the United States did 15 years ago.” Multiplying by exponents of progress, Kurzweil projects continued shrinkage in computer size and growth in power over the next 25 years. He hypothesizes microscopic nanobots—inexpensive machines the size of blood cells—that will augment our intelligence and immune systems. These tiny technologies “will go into our neocortex, our brain, noninvasively through our capillaries and basically put our neocortex on the cloud.”

Imagine having Wikipedia linked directly to your brain cells. Imagine digital neurons that reverse the effects of Parkinson’s disease. Maybe we can live forever.

He smiles, letting the sweep of his statements sink in. Without question, it is an impressive bit of theater. He loves telling stories, loves dazzling people with his visions. But his zeal for showmanship has been known to backfire.

The biologist P.Z. Myers has called him “one of the greatest hucksters of the age.” Other critics have labeled him crazy and called his ideas hot air. Kurzweil’s public pursuit of immortality doesn’t help matters. In an effort to prolong his life, Kurzweil takes 150 supplements a day, washing them down with cup after cup of green tea and alkaline water. He monitors the effects of these chemistry experiments with weekly blood tests. It’s one of a few eccentricities.

“He’s extremely honest and direct,” Ptolemy says of his friend’s prickly personality. “He talks to people and if he doesn’t like what you’re saying, he’ll just say it. There’s no B.S. If he doesn’t like what he’s hearing, he’ll just say, ‘No. Got anything else?’”

But it’s hard to argue with the results. Kurzweil claims 86 percent of his predictions for the year 2009 came true. Others insist the figure is actually much lower. But that’s just part of the game. Predicting is hard work.

“He was considered extremely radical 15 years ago,” Ptolemy says. “That’s less the case now. People are seeing these technologies catch up—the iPhone, Google’s self-driving cars, Watson [the IBM computer that bested Jeopardy genius Ken Jennings in 2011]. All these things start happening, and people are like, ‘Oh, OK. I see what’s going on.’”

Ray Kurzweil was born into a family of artists. His mother was a painter; his father, a conductor and musician. Both moved to New York from Austria in the late 1930s, fleeing the horrors of Hitler’s Nazi regime. When Ray was 7 years old, his maternal grandfather returned to the land of his birth, where he was given the chance to hold in his hands documents that once belonged to the great Leonardo da Vinci—painter, sculptor, inventor, thinker. “He described the experience with reverence,” Kurzweil writes, “as if he had touched the work of God himself.”

Ray’s parents raised their son and daughter in the Unitarian Church, encouraging them to study the teachings of various religions to arrive at the truth. Ray is agnostic, in part, he says, because religions tend to rationalize death; but like Da Vinci, he firmly believes in the power of ideas—the ability to overcome pain and peril, to transcend life’s challenges with reason and thought. “He wants to change the world—impact it as much as possible,” Ptolemy says. “That’s what drives him.”

Despite what his critics say, Kurzweil is not blind to the threats posed by modern science. If nanotechnology could bring healing agents into our bodies, nano-hackers or nano-terrorists could spread viruses—the literal, deadly kind. “Technology has been a double-edged sword ever since fire,” he says. “It kept us warm, cooked our food, but also burned down our villages.” That doesn’t mean you keep it under lock and key.

In January of 2013, Kurzweil entered the next chapter of his life, dividing his time between Waltham and San Francisco, where he works with Google engineers to deepen computers’ understanding of human language. “It’s my first job with a company I didn’t start myself,” he deadpans. The idea is to move the company beyond keyword search, to teach computers how to grasp the meaning and ideas in the billions of documents at their disposal, to move them one more step forward on the journey to becoming sentient virtual assistants—picture Joaquin Phoenix’s sweet-talking laptop in 2013’s Kurzweil-influenced movie Her, a Best Picture nominee.

Kurzweil had pitched the idea of breaking computers’ language barrier to Page while searching for investors. Page offered him a full-time salary and Google-scale resources instead, promising to give Kurzweil the independence he needs to complete the project. “It’s a courageous company,” Kurzweil says. “It has a biz model that supports very widespread distribution of these technologies. It’s the only place I could do this project. I would not have the resources, even if I raised all the money I wanted in my own company. I wouldn’t be able to run algorithms on a million computers.”

That’s not to say Page will sit idle while Kurzweil toils away. In the last year, the Google CEO has snapped up eight robotics companies, including industry frontrunner Boston Dynamics. He paid $3.2 billion for Nest Labs, maker of learning thermostats and smoke alarms. He scooped up the artificial intelligence startup DeepMind and lured Geoffrey Hinton, the world’s foremost expert on neural networks—computer systems that function like a brain—into the Google fold.

Kurzweil’s ties to Page run deep. Google (and NASA) provided early funding for Singularity University, the education hub/startup accelerator Kurzweil launched with the XPRIZE’s Peter Diamandis to train young leaders to use cutting-edge technology to make life better for billions of people on Earth.

Kurzweil’s faith in entrepreneurship is so strong that he believes it should be taught in elementary school.

Why?

Because that kid with the cellphone now has a chance to change the world. If that seems far-fetched, consider the college sophomore who started Facebook because he wanted to meet girls or the 15-year-old who recently invented a simple new test for pancreatic cancer. This is one source of his optimism. Another? The most remarkable thing about the mathematical models Kurzweil has assembled, the breathtaking arcs that demonstrate his thinking, is that they don’t halt their climb for any reason—not for world wars, not for the Great Depression.

Once again, that’s the power of exponential growth.

“Things that seemed impossible at one point are now possible,” Kurzweil says. “That’s the fundamental difference between me and my critics.” Despite the thousands of years of evolution hard-wired into his brain, he resists the urge to see the world in linear fashion. That’s why he’s bullish on solar power, artificial intelligence, nanobots and 3-D printing. That’s why he believes the 2020s will be studded with one huge medical breakthrough after another.

“There’s a lot of pessimism in the world,” he laments. “If I believed progress was linear, I’d be pessimistic, too. Because we would not be able to solve these problems. But I’m optimistic—more than optimistic: I believe we will solve these problems because of the scale of these technologies.”

He looks down at his watch yet again. Mickey Mouse peeks out from behind the timepiece’s sweeping hands. “Just a bit of whimsy,” he says.

Nearly an hour has passed. The world has changed. It’s time to get on with his day.

Post date:

Oct 9, 2014

This article can also be found at http://www.success.com/article/ray-kurzweil-the-exponential-mind

The Singularity Isn’t Near by Paul Allen

This is a piece written by Paul Allen in which he presents his reasons for thinking a singularity will not occur until after 2045.  While I humbly disagree with some of Paul Allen’s assertions in this article, I must say that I respect Allen for admitting that “we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion.”  I also think he makes a salient point (and I am extrapolating this notion based on this article) about needing to have a more complete understanding of cognition before we can really delve into the science of creating a mind from scratch, so to speak.  

Then again, Ray Kurzweil now has the inconceivable resources and support of Google at his fingertips in order to accelerate his own research.  

One other thing I would like to address: in this article, Allen’s main premise is that the exponential growth in technology we have witnessed in the past may not be as stable as many singularitarians would have you believe.  I can respect this view, but I would be remiss if I didn’t point out that Allen’s premise could work in the opposite direction just as easily.  Take the D-Wave quantum computer, for instance.  This computer represents a dramatic leap* forward in technological innovation which could actually compound the Law of Accelerating Returns beyond even its current exponential expansion.

*I refrain from using the obvious pun, quantum leap, when describing the D-Wave computer because by definition a quantum leap would actually be the smallest amount of progress one could conceivably make.  I heard that somewhere and thought it was amusing enough to repeat…

Credit: Technology Review

Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they’ll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It’s heady stuff.

While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn’t just a progression of steadily increasing capability, but is in fact exponentially accelerating—what Kurzweil calls the “Law of Accelerating Returns.” He writes that:

So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity … [1]

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.

Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these “laws” will work until they don’t. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different than the Moore’s Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil’s timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn’t behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reëvaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don’t support the overall Moore’s Law-style acceleration needed to get to the singularity on Kurzweil’s schedule.

The Complexity Brake

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end—the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain “bottom up” from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person’s living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren’t even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain’s neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing “start.”

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. “Brain duplication” strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn’t address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird’s ability to fly in various conditions, simply having a complete diagram of bird anatomy isn’t sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism’s behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM’s Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle—their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can’t leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson’s performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn’t happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can’t create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcan’s director for knowledge systems.

[1] Kurzweil, “The Law of Accelerating Returns,” March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM’s BlueGene/P that was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain’s neurons, though currently it happens many times more slowly than would happen in an actual brain.
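The footnote's claim can be sanity-checked with a rough back-of-envelope estimate. The neuron count, synapse count, and firing rate below are common order-of-magnitude textbook approximations, not figures from the article:

```python
# Rough order-of-magnitude estimate of the raw compute needed to simulate
# the firing patterns of every neuron in a human brain. All figures are
# common approximations, not values taken from the article.
NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synaptic connections each
FIRING_RATE_HZ = 100        # generous average firing rate per neuron
OPS_PER_EVENT = 1           # assume one floating-point op per synaptic event

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ * OPS_PER_EVENT
print(f"{ops_per_second:.0e} ops/s")  # ~1e17 ops/s, i.e. ~100 petaflops
```

Under these assumptions a petaflop-class machine (10^15 ops/s) would run roughly 100 times slower than real time, which is consistent with the footnote's remark that current simulations proceed many times more slowly than an actual brain, and with exaflop-class (10^18 ops/s) systems bringing real-time simulation within reach.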

UPDATE: Ray Kurzweil responds here.

This article can also be found at http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/