In the following video, futurist and inventor Ray Kurzweil responds to some reasonable concerns about artificial general intelligence. While I am an advocate of caution over fear, I’m not sure that Kurzweil’s comparisons to previous technologies set solid grounds for dismissing such concerns. We can reference past technologies and see a positive trend of such technologies being statistically more beneficial than destructive. For example, nuclear technology can be used for creating weaponry, but it’s far more commonly used for creating energy. However, all past technologies have one thing in common: they couldn’t outthink us. Kurzweil’s only solution to this is to “merge with the machines.” Kurzweil also goes on to say that this merging is already taking place, which, indeed, appears to be the case. My concern is that intelligence may not be the only factor in creating harmony within humanity. If empathy is solely a product of intelligence, then I say bring it on! However, I think more attention needs to be paid to the developmental roots of empathy as we continue down the path of creating artificial sentience. The goal shouldn’t be creating artificial general intelligence so much as creating artificial general empathy.
This video can also be found here. If you like the video, please make sure to stop by and give it a like.
Published on Mar 28, 2017
I interviewed Ray in his Google office in Mountain View, CA, February 15, 2017. Ray gave generously of his time, and his replies to my questions were very focused, full of excellent content.
At the end of 2013, I made a documentary (https://youtu.be/5igUX43gkiU) about Ray, which includes clips from another interview. My goal was to provide a short video introduction to the life & thoughts of Ray, for those who know nothing about him and who want to know more. I hope you will check it out.
This video explains Google’s DeepMind AI. The goal of the DeepMind project is to develop general purpose algorithms which, along with the development of artificial intelligence, will teach us more about the human brain. Part of the DeepMind research was a project called AlphaGo, which beat Go world champion Lee Sedol in 2016. Due to the intuitive nature of the game, it was not expected that an AI would succeed in beating a Go champion for at least another decade. DeepMind uses a type of machine learning called deep reinforcement learning. Machine learning is not to be confused with expert systems. Expert systems work within predefined (pre-programmed) parameters whereas machine learning relies upon pattern recognition algorithms.
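To make that distinction concrete, here is a minimal sketch of tabular Q-learning, the simplest form of reinforcement learning (without the “deep” neural-network part that DeepMind adds). The toy corridor environment, state count, and parameter values are all invented for illustration; the point is that nothing tells the agent which move is correct — unlike an expert system’s hand-written rules, the policy emerges from trial, error, and reward.

```python
import random

# Toy 1-D "corridor" environment (invented for illustration):
# states 0..4, start at 0, reward 1.0 only for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Tabular Q-learning: the agent learns action values from experience,
# rather than following pre-programmed rules as an expert system would.
def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # sometimes explore; break ties randomly.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                best = max(q[(state, a)] for a in ACTIONS)
                action = rng.choice([a for a in ACTIONS if q[(state, a)] == best])
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            # The Q-learning update rule
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# The learned policy should move right toward the goal from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy[:4])  # → [1, 1, 1, 1]
```

Deep reinforcement learning replaces the lookup table `q` with a neural network, which is what lets the same idea scale from a five-state corridor to the enormous state space of Go.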
This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence. Basically, Kurzweil’s stance is that “technology is a double-edged sword” and that it always has been, but that’s no reason to abandon the research. Kurzweil also states that, “Virtually everyone’s mental capabilities will be enhanced by it within a decade.” I hope it makes people smarter and not just more intelligent!
Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller “How to Create a Mind.”
Two great thinkers see danger in AI. Here’s how to make it safe.
Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.
If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.
The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.
We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—another development aided by AI.
There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.
Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.
There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.
Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.
AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.
Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller How to Create a Mind.
In this video, Ben Goertzel talks a little about how he got into AGI research and about the research itself. I first heard of Ben Goertzel about four years ago, right when I was first studying computer science and considering a career in AI programming. At the time, I was trying to imagine how you would build an emotionally intelligent machine. I really enjoyed hearing some of his ideas at the time and still do. Also at the time, I was listening to a lot of Tony Robbins, so as you can imagine, I came up with some pretty interesting theories on artificial intelligence and empathetic machines. Maybe if I get enough requests I’ll write a special post on some of those ideas. You just let me know if you’re interested.
This is an interview with Peter Voss of Optimal talking about artificial general intelligence. One of the things Voss talks about is the skepticism which is a common reaction when talking about creating strong AI, and why (as Tony Robbins always says) the past does not equal the future. He also talks about why he thinks that Ray Kurzweil’s prediction that AGI won’t be achieved for another 20 years is wrong – (and I gotta say, he makes a good point). If you are interested in artificial intelligence or ethics in technology then you’ll want to watch this one…
And don’t worry, the line drawing effect at the beginning of the video only lasts a minute.
Peter Voss is the founder and CEO of Adaptive A.I. Inc, an R&D company developing a high-level general intelligence (AGI) engine. He is also founder and CTO of Smart Action Company LLC, which builds and supplies AGI-based virtual contact-center agents — intelligent, automated phone operators.
Peter started his career as an entrepreneur, inventor, engineer and scientist at age 16. After several years of experience in electronics engineering, at age 25 he started a company to provide advanced custom hardware and software solutions. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.
After selling his interest in the company in 1993, he worked on a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving new breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., and last year founded Smart Action Company as its commercialization division.
Peter considers himself a free-minds-and-markets Extropian, and often writes and presents on philosophical topics including rational ethics, free will and artificial minds. He is also deeply involved with futurism and life-extension.
My main occupation is research in high-level, general (domain independent, autonomous) Artificial Intelligence — “Adaptive A.I. Inc.”
I believe that integrating insights from the following areas of cognitive science are crucial for rapid progress in this field:
Philosophy/epistemology – understanding the true nature of knowledge
Cognitive psychology (incl. developmental & psychometric) for analysis of cognition – and especially – general conceptual intelligence.
Computer science – self-modifying systems, combining new connectionist pattern manipulation techniques with ‘traditional’ AI engineering.
Anyone who shares my passion – and/ or concerns – for this field is welcome to contact me for brainstorming and possible collaboration.
My other big passion is for exploring what I call Optimal Living: Maximizing both the quantity & quality of life. I see personal responsibility and optimizing knowledge acquisition as key. Specific interests include:
Rationality, as a means for knowledge. I’m largely sympathetic to the philosophy of Objectivism, and have done quite a bit of work on developing a rational approach to (personal & social) ethics.
Health (quality): physical, financial, cognitive, and emotional (passions, meaningful relationships, appreciation of art, etc.). Psychology: IQ & EQ.
Longevity (quantity): general research, CRON (calorie restriction), cryonics
Environment: economic, social, political systems conducive to Optimal Living.
These interests logically lead to an interest in Futurism, in technology for improving life – overcoming limits to personal growth & improvement. The transhumanist philosophy of Extropianism best embodies this quest. Specific technologies that seem to hold most promise include AI, Nanotechnology, & various health & longevity approaches mentioned above.
I always enjoy meeting new people to explore ideas, and to have my views critiqued. To this end I am involved in a number of discussion groups and salons (e.g. ‘Kifune’ futurist dinner/discussion group). Along the way I’m trying to develop and learn the complex art of constructive dialog.
Dr Sean O hEigeartaigh
James Martin Academic Project Manager with the Oxford Martin Programme on the Impacts of Future Technology
Seán has a background in genetics, having recently finished his PhD in molecular evolution at Trinity College Dublin, where he focused on programmed ribosomal frameshifting and comparative genomic approaches to improve genome annotation. He is also the cofounder of a successful voluntary arts organisation in Ireland that now runs popular monthly events and an annual outdoor festival.
The Future of Humanity Institute is the leading research centre looking at big-picture questions for human civilization. The last few centuries have seen tremendous change, and this century might transform the human condition in even more fundamental ways. Using the tools of mathematics, philosophy, and science, we explore the risks and opportunities that will arise from technological change, weigh ethical dilemmas, and evaluate global priorities. Our goal is to clarify the choices that will shape humanity’s long-term future.
Why does neuroscience need neuroinformatics? Watch this 3 min video to find out!
The mission of the International Neuroinformatics Coordinating Facility (INCF) is to facilitate the work of neuroscientists around the world, and to catalyze and coordinate the global development of neuroinformatics.
Thank you to all featured community members for collaborating with our team in making this video!
Special thanks to:
– The Neuroscience department at the Karolinska Institute, Stockholm, Sweden.
– The PDC Center for High-Performance Computing at the KTH Royal Institute of Technology, Stockholm, Sweden.
– The Neurological X-ray Clinic at the Karolinska University Hospital, Stockholm, Sweden.
The International Neuroinformatics Coordinating Facility (INCF), together with its 17 member countries, coordinates collaborative informatics infrastructure for neuroscience data integration and manages scientific programs to develop standards for data sharing, analysis, modeling and simulation in order to catalyze insights into brain function in health and disease.
For press inquiries about INCF please contact: email@example.com
Neurorobotics engineers from the Human Brain Project (HBP) have recently taken the first steps towards building a “virtual mouse” by placing a simplified computer model of the mouse brain into a virtual mouse body. This new kind of tool will be made available to scientists, both HBP and worldwide. Read more: https://www.humanbrainproject.eu/-/a-…
This webpage (found at the Digital Agenda for Europe website) explains FET Flagships and two top flagship topics. They are multidisciplinary approaches to unlocking technologies which have the potential to radically change the future of humanity. The two flagship topics covered (in embedded videos) are Graphene and the Human Brain Project.
The Future & Emerging Technologies (“FET”) Flagships are visionary, large-scale, science-driven research initiatives which tackle scientific and technological challenges across scientific disciplines.
The Future and Emerging Technologies (FET) Flagships were developed over a two-and-a-half year preparatory phase. They will have a transformational impact on science, technology and society overall. They foster coordinated efforts between the EU and its Member States’ national and regional programmes. Highly ambitious, they rely on cooperation among a range of disciplines, communities and programmes, requiring sustained support up to 10 years.
Graphene investigates and exploits the unique properties of a revolutionary carbon-based material. It possesses an extraordinary combination of physical and technical properties: it is the thinnest material, it conducts electricity, it is stronger than steel and it has unique optical properties.
To better understand Graphene, check out the following:
New Graphene video: How Chalmers University manufactures scalable and high-performing solid Graphene samples, the raw material used by the over 100 research groups within the Graphene Flagship.
Understanding the human brain is one of the greatest challenges facing 21st century science. Using a unique simulation-based approach, the Human Brain Project aims to provide researchers worldwide with a tool to understand how the human brain really works. If we rise to the challenge, this initiative will revolutionise the future of neuroscience, medicine, and computing.
To better understand the HBP, several resources are available:
The Human Brain Project Youtube Video Channel – check out video guides on various aspects of the project: Neuromorphic Computing, Future Medicine, Future Neuroscience, Future Computing, Ethics & Society, Neuroinformatics, Medical Informatics Platforms, High Performance Computing, Brain Stimulation Platform, Neurorobotics, Mathematical and Theoretical Foundations of Brain Research;
The ERA-NET, called FLAG-ERA, gathers ministries and most funding organisations in Europe, participating either directly or as associated members, with the goal of supporting the FET Flagship initiatives ‘Graphene’ and ‘The Human Brain Project’ and more generally the FET Flagship concept.
FLAG-ERA offers a platform to coordinate a wide range of sources of funding towards the realization of the very ambitious research goals of the two Flagship initiatives. The funding organisations will coordinate their funding framework conditions, adapt their thematic programs and elaborate new joint support mechanisms according to the identified needs. In particular, they can launch transnational calls enabling researchers from different countries to propose joint contributions to the Flagships.
FLAG-ERA also offers support to the four non-selected “runner-up” Flagship pilots to progress towards their goals with adapted means.
FuturICT – understanding and managing complex, global, socially interactive systems, with a focus on sustainability and resilience.
Guardian Angels – technologies for extremely energy-efficient, smart, electronic personal companions that will assist humans from infancy to old age.
IT Future of Medicine – a data-driven, individualised medicine of the future, based on the molecular, physiological, and anatomical data from individual patients.
A call was published in July 2010, and six pilot projects were chosen for the so-called preparatory actions. At the end of 2012, 25 world-renowned experts evaluated the pilots’ work and two winning projects were announced by Vice-President Neelie Kroes on 28th January 2013.
“Graphene” will investigate and exploit the unique properties of a revolutionary carbon-based material. Graphene is an extraordinary combination of physical and chemical properties: it is the thinnest material, it conducts electricity much better than copper, it is 100-300 times stronger than steel and it has unique optical properties. The use of graphene was made possible by European scientists in 2004, and the substance is set to become the wonder material of the 21st century, as plastics were to the 20th century, including by replacing silicon in ICT products.
The “Human Brain Project” will create the world’s largest experimental facility for developing the most detailed model of the brain, for studying how the human brain works and ultimately to develop personalised treatment of neurological and related diseases. This research lays the scientific and technical foundations for medical progress that has the potential to dramatically improve the quality of life for millions of Europeans.
Written by: Peter Brietbart and Marco Vega
Animation & Design Lead: Many Artists Who Do One Thing (Mihai Badic)
Animation Script: Mihai Badic and Peter Brietbart
Narrated by: Holly Hagan-Walker
Music and SFX: Steven Gamble
Design Assistant: Melita Pupsaite
Additional Animation: Nicholas Temple
Other Contributors: Callum Round, Asifuzzaman Ahmed, Steffan Dafydd, Ben Kokolas, Cristopher Rosales
Special Thanks: David Pearce, Dino Kazamia, Ana Sandoiu, Dave Gamble, Tom Davis, Aidan Walker, Hani Abusamra, Keita Lynch