Even though I don’t agree with some of the stances Michio Kaku takes on emergent technologies (some of his predictions are a bit linear for my taste), I still consider myself a fan of his work and I always respect what he has to say.
The Technological Singularity and Merging With Machines
The term “singularity,” which is often heard today, comes originally from my field, theoretical physics. It denotes a point in space and time where the gravitational field becomes infinite. At the center of a black hole, for example, we might find a singularity. In mathematics, it likewise refers to a point where a function becomes infinite. But the type of singularity that you have probably been hearing about the most lately is called “The Technological Singularity,” and although it’s not a new concept, it’s definitely becoming more of a mainstream topic of conversation.
Countless books on the subject are being published, and Ray Kurzweil recently released his documentary, “Transcendent Man,” which shares his vision of a world in which humans merge with machines and is currently playing to sold-out audiences around the planet, as well as on web forums, blogs, and video sites.
Recently the singularity was the subject of a TIME Magazine cover story entitled “2045: The Year Man Becomes Immortal,” which includes a five-page narrative. Not to mention that there are a growing number of institutes, dozens of annual singularity conferences, and even the 2008 founding of Singularity University by X-Prize’s Peter Diamandis and Ray Kurzweil, based at the NASA Ames campus in Silicon Valley. Singularity University offers a variety of programs, including one called “The Exponential Technologies Executive Program,” whose stated goal is to “educate, inform, and prepare executives to recognize the opportunities and disruptive influences of exponentially growing technologies and understand how these fields affect their future, business, and industry.”
My television series Sci Fi Science, on The Science Channel, aired an episode entitled “A.I. Uprising” that focused on the coming technological singularity and on the fear that mankind will one day create a machine that could threaten our very existence. One cannot rule out a point in time when machine intelligence will eventually surpass human intelligence. These superintelligent machine creations will become self-aware, have their own agendas, and may one day even be able to create copies of themselves that are more intelligent than they are.
Common questions I’m often asked are:
- When will this tipping point transpire?
- What are the implications for the creation of a self-aware machine?
- What does it mean for the advancement of the human race, i.e., to what degree will humans merge with them?
- What happens when machine intelligence exponentially surpasses human intelligence?
But the road to the singularity is not going to be a smooth one. As I originally mentioned in my Big Think interview, “How to Stop Robots from Killing Us,” Moore’s law states that computing power doubles about every 18 months, and it’s a curve that has held sway for about 50 years. But chip manufacturing and transistor technology will eventually hit a wall: transistors will become so small and so densely packed that they generate far too much heat, resulting in a chip meltdown, and so tiny that electrons leak out of them due to the Heisenberg Uncertainty Principle.
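To get a feel for what an 18-month doubling period implies, here is a minimal sketch of the arithmetic. The doubling period and the 50-year horizon come from the text above; the function name and the starting baseline of 1 unit are illustrative assumptions.

```python
def moores_law_growth(years, doubling_period_months=18):
    """Return the multiplicative growth factor after `years` years,
    assuming one doubling every `doubling_period_months` months."""
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

# Over the roughly 50 years the curve has held sway:
factor = moores_law_growth(50)
print(f"{factor:.3g}")  # about 1e10 -- roughly a ten-billion-fold increase
```

In other words, 50 years at this rate is about 33 doublings, which is why exponential curves like this eventually collide with physical limits such as heat dissipation and quantum tunneling.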
Needless to say, it’s time to find a replacement for silicon, and it’s my belief that the eventual replacement will take things to the next level. Graphene is a potential candidate, far superior to silicon, but the technology for large-scale manufacturing of graphene (carbon nanotube sheets) is still up in the air. It’s not clear at all what will replace silicon, but a variety of technologies have been proposed, including molecular transistors, DNA computers, protein computers, quantum dot computers, and quantum computers. However, none of them is ready for prime time. Each has its own formidable technical problems which, at present, keep them on the drawing boards.
Because of all these uncertainties, no one knows exactly when this tipping point will happen, although there are many predictions of when computing power will finally meet and then eventually tower above human intelligence. For example, Ray Kurzweil, whom I’ve interviewed several times on my radio programs, stated in his Big Think interview that he feels by 2020 we’ll have computers powerful enough to simulate the human brain, but we won’t be finished with the reverse engineering of the brain until about the year 2029. He also estimates that by the year 2045, we’ll have expanded the intelligence of our human-machine civilization a billion fold.
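As a quick sanity check, the “billion fold” figure can be compared against the 18-month doubling rate quoted earlier; the numbers come from the article, and the check itself is just arithmetic.

```python
import math

# How many doublings does a billion-fold expansion require,
# and how long does that take at one doubling per 18 months?
doublings_needed = math.log2(1e9)      # roughly 30 doublings
years_needed = doublings_needed * 1.5  # 18 months = 1.5 years per doubling
print(round(years_needed))  # about 45 years
```

So a billion-fold gain over a few decades is at least of the same order as a simple extrapolation of the Moore’s law curve, which is presumably why such dates cluster around mid-century.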
But in all fairness, we should also point out that there are many different points of view on this question. The New York Times asked a variety of experts at the recent Asilomar Conference on AI in California when machines might become as powerful as humans. The answers were quite surprising, ranging from 20 years to 1,000 years. I once interviewed Marvin Minsky for my national science radio show and asked him the same question. He was very careful to say that he does not make predictions like that.
We should also point out that AI specialists have proposed a variety of measures for what to do about it. One simple proposal is to put a chip in the brains of our robots that automatically shuts them off if they get murderous thoughts. Right now, our most advanced robots have the intellectual capability of a cockroach (a mentally challenged cockroach, at that). But over the years, they will become as intelligent as a mouse, rabbit, fox, dog, cat, and eventually a monkey. When they become that smart, they will be able to set their own goals and agendas, and could be dangerous. We might also put a fail-safe device in them so that any human could shut them off with a simple verbal command. Or, we might create an elite corps of robot fighters, like in Blade Runner, who have superior powers and can track down and hunt errant robots.
But the proposal that is getting the most traction is merging with our creations. Perhaps one day in the future, we might find ourselves waking up with a superior body and intellect, living forever. For more, visit the Facebook Fanpage for my latest book, Physics of the Future.
This article can also be found at http://bigthink.com/dr-kakus-universe/the-technological-singularity-and-merging-with-machines
IBM’s Jon Iwata on the Intelligence of Watson
Published on Aug 5, 2014
Jon Iwata, Senior VP of Marketing and Communications at IBM, shares the origins and purpose of IBM’s supercomputer Watson.
Transcript: Some years ago the grand challenge in computer science, one of them, was to build a machine that could beat a chess grandmaster. Some may remember this. And we built machines that got better and better at it. But we finally built a machine back in the 90s called Deep Blue, and it played against Garry Kasparov and it beat Garry Kasparov, and I think he’s still quite upset about it. Why did we build that machine? Well it really wasn’t to play chess. It was to take a real challenge, chess, and it would force advances in computer science. And it worked quite well.
Well, that was chess and that was the nature of the grand challenge back then. But today this explosion of data, most of it unstructured data, natural language, Tweets, blog posts, medical images, things like that. Very difficult for traditional computers to understand. It could store it. It could process this data but it doesn’t know what the data really tells you because it’s unstructured. The research team some years ago said what’s a way for us to create a system that is ideal for the coming world of unstructured big data. Natural language. Making sense of a mountain of data. What could we do to force ourselves to solve those problems. And they hit upon the game show Jeopardy. Now I’ve got to tell you that when they came by to see me at IBM corporate headquarters, I don’t know, six years ago, seven years ago, maybe longer and they said we’ve identified the next big challenge similar to the chess machine that beat Kasparov.
I was thinking, you know, wow they’re going to go after some really sophisticated high minded, you know, game theory thing. And they came in and said it was going to be Jeopardy. Now I wasn’t really a Jeopardy watcher back then. I said you mean the TV quiz show? And they said yes. And I said well that seems to be – they remind me of this now – that doesn’t seem to be, you know, very sophisticated or challenging. And they went on to explain to me – and I, of course, have had to acknowledge many times to them since then – it’s really hard. It’s really hard to win on Jeopardy. And it’s hard for a human and it’s almost impossible for a machine. Because if you play Jeopardy or if you’re just kind of familiar with it, you have to understand puns and allegories, popular culture, rhymes, allusions, double entendres. These are things that computers are baffled by, even some humans. So they went after this and they struck a collaboration with the producers of Jeopardy and they built this system called Watson, and it played the two greatest human champions, Ken Jennings and Brad Rutter, some years ago.
I was there watching it do its thing live and it won. And the remarkable thing about Watson – that’s the name of the system – we believe it’s the first cognitive computer and what is that? It is a system that isn’t programmed. It is a system that learns. It is a system that improves itself by ingesting all the data it can and by being trained by humans. And this is a profound shift in computation because whether it’s a powerful supercomputer or it’s your iPad, all of those systems are programmed to do what they do. Your iPad can only do what a software engineer designed it to do. That is not the case with Watson. Watson improves itself through learning. And it is therefore incredibly important in this world of big data, most of it unstructured. We will need systems like Watson to make sense of all the data that’s being produced.
Watson triggers some very strong emotions in people when they learn about it or see it or interact with it. It talks, it answers questions with great confidence. If it doesn’t know the answer to the question it sometimes asks you another question to help it reason on the question. It generates hypotheses and tells you its level of confidence in its recommendations. And so we as humans – we use all kinds of words that we’re familiar with to try to understand what this thing is doing. We say “is it thinking? Is it sentient? Does it create?” Some people get very excited and optimistic because Watson seems to be the answer to a lot of problems. It never forgets. A doctor can’t read every piece of medical literature that’s created every day. Watson can. By the way, Watson’s at work at Memorial Sloan Kettering Cancer Research, at MD Anderson Cancer Research, at the Cleveland Clinic and at WellPoint learning medicine.
Directed/Produced by Jonathan Fowler, Victoria Brown, and Dillon Fitton