Ray Kurzweil and Neil deGrasse Tyson Talk Future

In this interview, Neil deGrasse Tyson (astrophysicist) talks with futurist Ray Kurzweil about the exponential progression of computing and touches on some of Kurzweil’s key predictions.  If you’re familiar with Kurzweil’s public talks and interviews, then you know there are certain salient points he likes to make regarding the exponential nature of information technology (see Kurzweil’s Law of Accelerating Returns).  I liked this video because it is a good collection of such points, along with a couple of insights I hadn’t heard him express previously.  In this video, Tyson acts primarily in the capacity of host.


Runtime: 20:42


This video can also be found here.

Video Info:

Published on May 16, 2017

Future of Earth Year 2030 in Dr Neil deGrasse Tyson & Dr Ray Kurzweil POV. Documentary 2018
https://tinyurl.com/AstrobumTV

Buy Billionaire Peter Thiel’s Zero to One Book here!
http://amzn.to/2x1J8BX

Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. BBC Documentary 2018 Non-profit, educational or personal use tips the balance in favor of fair use.

How Google’s Deep Dream Works

The following video is a general description of how Google’s Deep Dream computer vision (image recognition) system works.  Deep Dream uses a convolutional neural network to look for patterns in an image or video, based on the large sets of images it has previously analyzed for defining characteristics.  The beauty of Deep Dream is that it can modify an image, accentuating fragments within it so that they more closely resemble features of the images it has previously evaluated.  The effects of such augmentation can be quite striking, producing strange, almost psychedelic results.
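To make the mechanism a bit more concrete, here is a minimal sketch of the gradient-ascent trick Deep Dream is built on, written against a generic pretrained network in PyTorch.  The layer index, step size, and file name are illustrative assumptions rather than details taken from the video or from Google’s actual implementation.

```python
# Minimal Deep Dream-style sketch (illustrative, not Google's code):
# run gradient ascent on the input image so that one layer of a pretrained
# convolutional network responds more strongly, amplifying whatever patterns
# that layer has learned to detect.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(pretrained=True).features.eval()
layer_index = 20                     # hypothetical choice of layer to amplify

preprocess = T.Compose([T.Resize(224), T.ToTensor()])
img = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for step in range(20):
    activations = img
    for i, layer in enumerate(model):
        activations = layer(activations)
        if i == layer_index:
            break
    loss = activations.norm()        # how strongly does this layer respond?
    loss.backward()
    with torch.no_grad():
        # step the image itself in the direction that increases the response
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Saving `img` back out after the loop (clamped to the valid pixel range) yields the kind of pattern-amplified image the video shows.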


Runtime: 13:42


This video can also be found here.

Video Info:

Published on Aug 26, 2016

Surreal images created by Google’s Deep Dream code flooded the internet in 2015 but how does deep dream do it? Image analyst Dr Mike Pound.

Inside a Neural Network: https://youtu.be/BFdMrDOx_CM
Cookie Stealing: https://youtu.be/T1QEs3mdJoc
FPS & Digital Video: https://youtu.be/yniSnYtkrwQ
Password Cracking: https://youtu.be/7U-RbOKanYs
Gamer’s Paradise: https://youtu.be/HZzdXR0bV8o

Images seen/manipulated in this video: https://drive.google.com/open?id=0Bwd…

http://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: http://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com


AI Births New AI

This short video from producer Dagogo Altraide (found on the ColdFusion YouTube channel) discusses a Google Brain AI (artificial intelligence) that created its own AI offspring, which “performs better than anything else in its field” according to the video (this particular field is computer vision).  Google uses an approach called automated machine learning (or AutoML), in which a controller neural net recursively generates new candidate architectures and receives feedback on how well they perform.  The “parent” system is AutoML and the “child” neural net it produced is called NASNet, a real-time image recognition AI.  NASNet performed with more accuracy and efficiency than any AI of its kind created by humans.
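For a rough feel of how an architecture-search loop is organized, here is a deliberately simplified sketch.  The search space, the random controller, and the placeholder reward are assumptions made for brevity; Google’s actual controller is a recurrent network trained with reinforcement learning against real validation accuracy.

```python
# Highly simplified neural-architecture-search sketch (illustrative only):
# a controller proposes child architectures, each child is evaluated, and the
# resulting score is fed back so better-performing choices can be kept.
import random

SEARCH_SPACE = {
    "filters": [32, 64, 128],
    "kernel_size": [3, 5],
    "layers": [2, 4, 6],
}

def sample_architecture():
    """Controller step: pick one option per decision (random here; the real
    controller is a recurrent net trained with reinforcement learning)."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Stand-in for training the child network described by `arch` and
    measuring its validation accuracy."""
    return random.random()           # placeholder reward

best_arch, best_reward = None, -1.0
for trial in range(20):
    arch = sample_architecture()
    reward = train_and_evaluate(arch)    # feedback to the controller
    if reward > best_reward:
        best_arch, best_reward = arch, reward

print("best architecture found:", best_arch)
```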


Runtime: 5:52


This video can also be found here.

Video Info:

Published on Jan 15, 2018

Subscribe here: https://goo.gl/9FS8uF
Check out the previous episode: https://www.youtube.com/watch?v=cDxi6…

Become a Patron!: https://www.patreon.com/ColdFusion_TV
CF Bitcoin address: 13SjyCXPB9o3iN4LitYQ2wYKeqYTShPub8

In this video we talk about Auto ML by Google brain. Auto ML is one of the first successful automated AI projects.

Hi, welcome to ColdFusion (formerly known as ColdfusTion).
Experience the cutting edge of the world around us in a fun relaxed atmosphere.

Sources:

http://automl.info

http://www.ml4aad.org/automl/

https://futurism.com/google-artificia…

https://futurism.com/googles-new-ai-i…

https://research.googleblog.com/2017/…

https://www.youtube.com/watch?v=92-Do…

http://research.nvidia.com/publicatio…

//Soundtrack//

0:00 Tchami – After Life (Feat. Stacy Barthe)

0:40 Delectatio – Everything Is A Dream

1:45 Ricky Eat Acid – A Smoothie Robot For My Moon Mansion

3:37 Catching Flies – The Long Journey Home

4:40 Gryffin – Heading Home (feat. Josef Salvat)

5:18 Faux Tales – Weightless (feat. Luke Cusato)

» Google + | http://www.google.com/+coldfustion

» Facebook | https://www.facebook.com/ColdFusionTV

» My music | http://burnwater.bandcamp.com or
» http://www.soundcloud.com/burnwater
» https://www.patreon.com/ColdFusion_TV
» Collection of music used in videos: https://www.youtube.com/watch?v=YOrJJ…

Producer: Dagogo Altraide

» Twitter | @ColdFusion_TV

Google’s DeepMind AI

This video explains Google’s DeepMind AI.  The goal of the DeepMind project is to develop general-purpose algorithms which, along with advancing artificial intelligence, will teach us more about the human brain.  Part of DeepMind’s research was a project called AlphaGo, which beat Go world champion Lee Sedol in 2016.  Because the game is widely thought to rely on intuition, it was not expected that an AI would beat a Go champion for at least another decade.  DeepMind uses a type of machine learning called deep reinforcement learning.  Machine learning is not to be confused with expert systems: expert systems work within predefined (pre-programmed) rules, whereas machine learning relies on pattern recognition algorithms that improve from data and feedback.
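To illustrate learning from feedback (as opposed to an expert system’s fixed rules), here is a tiny tabular Q-learning example on a toy corridor task.  It is only a sketch of reinforcement learning in general; DeepMind’s deep reinforcement learning replaces the lookup table with a deep neural network and trains on far richer environments such as Atari games and Go.

```python
# Tabular Q-learning on a 1-D corridor: nothing about the "right" behaviour is
# pre-programmed; the policy emerges purely from reward feedback.
import random

N_STATES, GOAL = 5, 4            # corridor cells 0..4, reward at cell 4
ACTIONS = [-1, +1]               # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit learned values, occasionally explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # nudge the estimate toward reward plus discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# After training, the learned policy is simply "move right" in every cell.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```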


Runtime: 13:44


This video can also be found here.

Video Info:

Published on May 1, 2016

Subscribe here: https://goo.gl/9FS8uF
Become a Patreon!: https://www.patreon.com/ColdFusion_TV
Visual animal AI: https://www.youtube.com/watch?v=DgPaC…

Hi, welcome to ColdFusion (formerly known as ColdfusTion).
Experience the cutting edge of the world around us in a fun relaxed atmosphere.

Sources:

Why AlphaGo is NOT an “Expert System”: https://googleblog.blogspot.com.au/20…

“Inside DeepMind” Nature video:
https://www.youtube.com/watch?v=xN1d3…

“AlphaGo and the future of Artificial Intelligence” BBC Newsnight: https://www.youtube.com/watch?v=53YLZ…

http://www.nature.com/nature/journal/…

http://www.ft.com/cms/s/2/063c1176-d2…

http://www.nature.com/nature/journal/…

https://www.technologyreview.com/s/53…

https://medium.com/the-physics-arxiv-…

https://www.deepmind.com/

http://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/#5dc388ee4674

https://medium.com/the-physics-arxiv-…

http://www.theverge.com/2016/3/10/111…

https://en.wikipedia.org/wiki/Demis_H…

https://en.wikipedia.org/wiki/Google_…

//Soundtrack//

Disclosure – You & Me (Ft. Eliza Doolittle) (Bicep Remix)

Stumbleine – Glacier

Sundra – Drifting in the Sea of Dreams (Chapter 2)

Dakent – Noon (Mindthings Rework)

Hnrk – fjarlæg

Dr Meaker – Don’t Think It’s Love (Real Connoisseur Remix)

Sweetheart of Kairi – Last Summer Song (ft. CoMa)

Hiatus – Nimbus

KOAN Sound & Asa – This Time Around (feat. Koo)

Burn Water – Hide

» Google + | http://www.google.com/+coldfustion

» Facebook | https://www.facebook.com/ColdFusionTV

» My music | t.guarva.com.au/BurnWater http://burnwater.bandcamp.com or
» http://www.soundcloud.com/burnwater
» https://www.patreon.com/ColdFusion_TV
» Collection of music used in videos: https://www.youtube.com/watch?v=YOrJJ…

Producer: Dagogo Altraide

Editing website: http://www.cfnstudios.com

Coldfusion Android Launcher: https://play.google.com/store/apps/de…

» Twitter | @ColdFusion_TV

Nell Watson Discusses Quantum Mechanical Processes

In this video, Nell Watson, co-founder (along with Alexander Vandevelde and Wim Devos) of QuantaCorp (previously Poikos), discusses “quantum mechanical processes” and how the study of natural biological processes can lead to better computational algorithms.  In the video, Watson refers to “reservoir computing,” the idea that “you can turn different physical properties of materials into complex computation.”
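As a concrete (if highly simplified) illustration of the reservoir-computing idea, here is a small echo state network in NumPy: a fixed random “reservoir” supplies the complex dynamics, and only a simple linear readout is trained.  The sizes, scaling constants, and the sine-wave prediction task are assumptions chosen for brevity, not anything from Watson’s talk.

```python
# Tiny echo state network: the reservoir is never trained, only "driven";
# all learning happens in the linear readout layer.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, n_steps = 200, 1000

# Fixed random weights, scaled so the reservoir dynamics neither die nor explode
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, 1))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius ~0.9

u = np.sin(0.2 * np.arange(n_steps))           # input signal
target = np.roll(u, -1)                        # task: predict the next value

# Drive the reservoir and record its states
x = np.zeros(n_reservoir)
states = np.zeros((n_steps, n_reservoir))
for t in range(n_steps):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)
    states[t] = x

# Train only the linear readout (ridge regression)
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_reservoir),
                        states.T @ target)
print("training MSE:", np.mean((states @ W_out - target) ** 2))
```

The point of the analogy is that the reservoir could just as well be a physical medium (an optical system, a bucket of water, perhaps even a plant) as long as its response to input is rich and repeatable; only the cheap readout layer needs to be trained.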


Runtime: 14:59


This video can also be found here.

Video Info:

Published on Nov 16, 2017

Recent experiments in Optoelectronic reservoir computing show that computation can be performed within everyday physical media. This suggests intriguing possibilities with regards to the future of programmable matter and ubiquitous computing. It also raises the question of whether such computational phenomena may be found within nature, contributing to the seemingly-intelligent responses of plants, for example, or assisting the manifestation of certain complex biochemical processes.
Nell Watson has a longstanding interest in the philosophy of technology, and how extensions of human capacity drive emerging social trends. Watson lectures globally on a broad spectrum of AI-related topics. In 2010, Nell founded Poikos, a machine learning-driven AI for body measurement. She is also Co-Founder of OpenEth.org. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Intel’s 49-qubit Quantum Chip and Mobileye’s Self Driving Car, Presented at CES 2018

This video is a look at Intel’s new 49-qubit quantum chip and Mobileye’s self-driving/driver-assist technology, presented at CES 2018.  Loihi is the name of Intel’s prototype neuromorphic chip (a brain-inspired research chip, distinct from the quantum processor).

Mike Davies, Jim Held, and Jon Tse appear in the presentation’s video expo to discuss the neuromorphic chip’s inspiration, goals, and possible applications.

Amnon Shashua, Senior Vice President of Intel and CEO/CTO of Mobileye, also presents self-driving and driver-assist technology.


Runtime: 20:20


This video can also be found here.

Video Info:

Published on Jan 9, 2018

Intel’s CES 2018 keynote focused on its 49-qubit quantum computing chip, VR applications for content, its AI self-learning chip, and an autonomous vehicles platform.

 

Don’t Fear Artificial Intelligence by Ray Kurzweil

This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence.  Kurzweil’s stance is that “technology is a double-edged sword,” and always has been, but that this is no reason to abandon the research.  He also states that “Virtually everyone’s mental capabilities will be enhanced by it within a decade.”  I hope it makes people smarter and not just more intelligent!


Don’t Fear Artificial Intelligence


Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller “How to Create a Mind.”

Two great thinkers see danger in AI. Here’s how to make it safe.

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.



 

This article can also be found here.
 

 

Hugo de Garis – Singularity Skepticism (Produced by Adam Ford)

This is Hugo de Garis talking about why people tend to react to predictions of a technological singularity with a great deal of skepticism.  To address the skeptics, de Garis explains Moore’s Law and goes into its many implications.  Toward the end, he remarks that people will begin to come around when they see their household electronics getting smarter and smarter.
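As a back-of-the-envelope illustration of the exponential growth de Garis leans on, the snippet below projects transistor counts under a simple “doubling every two years” reading of Moore’s Law; the starting figure and doubling period are rough assumptions, not exact industry data.

```python
# Rough Moore's Law projection: doubling roughly every two years turns
# ~1 billion transistors into ~1 trillion in about 20 years.
start_year, end_year = 2012, 2032
transistors = 1.0e9                  # assumed ~1e9 transistors per chip in 2012
doubling_period_years = 2

for year in range(start_year, end_year + 1, doubling_period_years):
    print(f"{year}: ~{transistors:.1e} transistors")
    transistors *= 2
```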


Runtime: 12:31


This video can also be found here and here.

Video Info:

Published on Jul 31, 2012

Hugo de Garis speaks about why people are skeptical about the possibility of machine intelligence, and also reasons for believing machine intelligence is possible, and quite probably will be an issue that we will need to face in the coming decades.

If the brain guys can copy how the brain functions closely enough…we will arrive at a machine based on neuroscience ideas and that machine will be intelligent and conscious

 

 

Ben Goertzel – Beginnings [on Artificial Intelligence – Thanks to Adam A. Ford for this video.]

In this video, Ben Goertzel talks a little about how he got into AGI research and about the research itself.  I first heard of Ben Goertzel about four years ago, right when I was first studying computer science and considering a career in AI programming.  At the time, I was trying to imagine how you would build an emotionally intelligent machine.  I really enjoyed hearing some of his ideas then and still do.  I was also listening to a lot of Tony Robbins at the time, so, as you can imagine, I came up with some pretty interesting theories on artificial intelligence and empathetic machines.  Maybe if I get enough requests I’ll write a special post on some of those ideas.  Just let me know if you’re interested.


Runtime: 10:33


This video can also be found here and here.

Video Info:

Published on Jul 27, 2012

Ben Goertzel talks about his early stages in thinking about AI, and two books : The Hidden Pattern, and Building Better Minds.

The interview was done in Melbourne Australia while Ben was down to speak at the Singularity Summit Australia 2011.

http://2011.singularitysummit.com.au

Interviewed, Filmed & Edited by Adam A. Ford
http://goertzel.org

 

Peter Voss Interview on Artificial General Intelligence

This is an interview with Peter Voss of Optimal talking about artificial general intelligence.  One of the things Voss discusses is the skepticism that is a common reaction to talk of creating strong AI, and why (as Tony Robbins always says) the past does not equal the future.  He also explains why he thinks Ray Kurzweil’s prediction that AGI won’t be achieved for another 20 years is wrong (and I gotta say, he makes a good point).  If you are interested in artificial intelligence or ethics in technology then you’ll want to watch this one.

And don’t worry, the line drawing effect at the beginning of the video only lasts a minute.


Runtime: 39:55


This video can also be found at https://www.youtube.com/watch?v=4W_vtlSjNk0

Video Info:

Published on Jan 8, 2013

Peter Voss is the founder and CEO of Adaptive A.I. Inc, an R&D company developing a high-level general intelligence (AGI) engine. He is also founder and CTO of Smart Action Company LLC, which builds and supplies AGI-based virtual contact-center agents — intelligent, automated phone operators.

Peter started his career as an entrepreneur, inventor, engineer and scientist at age 16. After several years of experience in electronics engineering, at age 25 he started a company to provide advanced custom hardware and software solutions. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked on a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving new breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., and last year founded Smart Action Company as its commercialization division.

Peter considers himself a free-minds-and-markets Extropian, and often writes and presents on philosophical topics including rational ethics, freewill and artificial minds. He is also deeply involved with futurism and life-extension.


http://www.optimal.org/peter/peter.htm

My main occupation is research in high-level, general (domain independent, autonomous) Artificial Intelligence — “Adaptive A.I. Inc.”

I believe that integrating insights from the following areas of cognitive science are crucial for rapid progress in this field:

Philosophy/ epistemology – understanding the true nature of knowledge
Cognitive psychology (incl. developmental & psychometric) for analysis of cognition – and especially – general conceptual intelligence.
Computer science – self-modifying systems, combining new connectionist pattern manipulation techniques with ‘traditional’ AI engineering.
Anyone who shares my passion – and/ or concerns – for this field is welcome to contact me for brainstorming and possible collaboration.

My other big passion is for exploring what I call Optimal Living: Maximizing both the quantity & quality of life. I see personal responsibility and optimizing knowledge acquisition as key. Specific interests include:

Rationality, as a means for knowledge. I’m largely sympathetic to the philosophy of Objectivism, and have done quite a bit of work on developing a rational approach to (personal & social) ethics.
Health (quality): physical, financial, cognitive, and emotional (passions, meaningful relationships, appreciation of art, etc.). Psychology: IQ & EQ.
Longevity (quantity): general research, CRON (calorie restriction), cryonics
Environment: economic, social, political systems conducive to Optimal Living.
These interests logically lead to an interest in Futurism , in technology for improving life – overcoming limits to personal growth & improvement. The transhumanist philosophy of Extropianism best embodies this quest. Specific technologies that seem to hold most promise include AI, Nanotechnology, & various health & longevity approaches mentioned above.

I always enjoy meeting new people to explore ideas, and to have my views critiqued. To this end I am involved in a number of discussion groups and salons (e.g. ‘Kifune’ futurist dinner/discussion group). Along the way I’m trying to develop and learn the complex art of constructive dialog.

Interview done at SENS party LA 20th Dec 2012.