Future Tech… From 2017

Self-driving cars, low-cost genome sequencing, robotic surgery, free energy… ok, near-free energy…  These are some of the topics discussed in this video compilation by Jonas Bjerg.  The video features clips from Peter Diamandis and Elon Musk and focuses on technologies in development today, along with near-future plans to push them into ubiquitous use with the promise of individual abundance.  I’ll leave it to you to consider the ramifications of a transition to a post-scarcity economy, but I would like to note that many of these technologies, while potentially beneficial to humanity, are disruptive.  There are currently test programs involving universal basic income (UBI, also mentioned in the video), and it appears we are on the threshold (within the next couple of decades) of needing some form of UBI to support all but the most creative individuals… then again, I’ve recently seen computer programs that might call even the need for creative individuals into question.  I imagine a time in the not-so-distant future when humans will need to “merge with the machines,” as Ray Kurzweil says, or resign ourselves to primitivism or even obsolescence (to avoid the worst-case Hollywood-esque scenario) while the posthumans do all the work above our pay grade (of course, one hopes pay is not a requirement for a prosperous life in the future).  This may sound like a far-future concept, but a look at the technology being implemented right now implies change is coming sooner than we might think.

 


Runtime: 14:52


This video can also be found here.  If you like this video, make sure to stop by and give them a like.

Video Info:

Published on Dec 3, 2017

This is a summary of everything future discussed at the Singularity University Summit 2017, TED Talks with Elon Musk 2017, the World Government Summit 2017, and Peter Diamandis’ views on HOW ABUNDANCE WILL CHANGE THE WORLD as we know it.


Links:
https://www.youtube.com/watch?v=3cXPW…
(Singularity university summit – Peter Diamandis)
The demonetization of living
Maslow’s pyramid of needs is trending toward zero cost
Abundance and nanotechnology (nanobots)
Raw material cost + energy cost + Information = COST 3:47
In the back of abundance 4:00
Most people will get devices for free so that companies can sell to them and collect data 7:00
Data is the new gold 7:30
Free content: 1B hours of free content per day
8000X more energy hitting the surface of the planet than we consume, and the poorest have the most sun 8:00
Solar roads 9:30
2.9 cents per kWh
Gigafactory in Reno 10:00
100% switch to cars from 1904 to 1917 11:00
by 2025 car ownership will be dead: 12:20
autonomy will demonetize housing 13:20
house being 3d printed 14:26
literacy to basic reading writing in 18 months 16:03
demonetization of healthcare, deep learning protocols 17:00
Watson diagnosed rare leukaemia 18:00
Cost of genome sequencing falling on a Moore’s-law curve 19:05
Sequenced at birth: stop what you will get sick from before you get sick
Surgery 20:00
Rapidly demonetizing trends everywhere 21:30
Not scared of AI terminator 23:00
Job loss 24:00
Demonetize the cost of living, (education, entertainment, food etc)
Psychological impact to losing job 24:30
AI software shells within ten years 27:30
Universal basic income (UBI)
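The 8,000x solar claim in the notes above can be sanity-checked with back-of-envelope arithmetic. The figures below are my own rough assumptions (commonly cited ballpark values, not from the video): on the order of 170,000 TW of sunlight reaches Earth, against roughly 18 TW of average global primary energy consumption.

```python
# Back-of-envelope check of the "8,000x more energy hitting the surface
# than we consume" claim. Both inputs are ballpark assumptions.
incoming_solar_tw = 170_000   # approx. solar power reaching Earth, terawatts
human_consumption_tw = 18     # approx. average global primary energy use, terawatts

ratio = incoming_solar_tw / human_consumption_tw
print(f"Solar input is roughly {ratio:,.0f}x current consumption")
```

Depending on whether you count top-of-atmosphere or surface-level flux, the ratio lands in the thousands, so the claim is at least the right order of magnitude.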

https://www.youtube.com/watch?v=zIwLW…
(Elon Musk Building)
10-fold improvement in cost of digging per mile
no sound
The boring company
Autonomy brings more cars on the roads
Every big car company has announced electric cars within 10 years 12:00
By the end of 2017 self-driving coast to coast 15:00
Free self-driving cars 19:30
Self-driving trucks: Tesla Semi out-torques any diesel semi uphill 20:30
Solar panels 22:00
Most houses have enough roof area to power all the needs of the house 25:30
100 Gigafactories 27:00
100 GWh per week
1 built, announcing another 5 this year 29:00
Rocket reusability 33:00

https://www.youtube.com/watch?v=rCoFK…
(Elon Musk World Government Summit 2017)
A multi-planetary species is life insurance for life collectively 2:30
Ten years from now, only fully autonomous cars will be built 8:30
Elon building tunnel under Washington 14:20
3D buildings, 2D road networks 16:00
12-15% of the workforce drives as a job 18:20
Over 2B vehicles in the world 19:00
100M/year total new-vehicle production capacity 19:00
Life of a car: 20-25 years 19:00
Prepare government for the future 20:00
AI regulation
Transport 21:00 electric over 30-40 years
Demand for electricity will increase rapidly. Total energy usage is roughly 1/3 electricity, 1/3 transport, 1/3 heating; over time that will be predominantly all electricity 21:30
Universal basic income, no choice 22:50
Fewer and fewer jobs that a robot cannot do better
The output of goods and services will be extremely high with automation, so goods will become abundant and really cheap.
The harder challenge is, how do people then have meaning?
To some degree we are already a cyborg 25:21
Reusable rockets, cost to fly close to plane 29:00
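The vehicle figures in the notes above (over 2B vehicles, ~100M built per year, 20-25 year car lifespan) imply a fleet-turnover time you can check directly; a minimal sketch using only the numbers from the talk:

```python
# Fleet-turnover arithmetic from the talk's own figures:
# ~2 billion vehicles on the road, ~100 million new vehicles built per year.
fleet_size = 2_000_000_000
annual_production = 100_000_000

turnover_years = fleet_size / annual_production
print(f"Replacing the whole fleet takes about {turnover_years:.0f} years")
```

That works out to about 20 years, consistent with the stated 20-25 year car lifespan: even if every new car built from today were electric and autonomous, the transition of the whole fleet would still take roughly two decades.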


Don’t Fear Artificial Intelligence by Ray Kurzweil

This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence.  Basically, Kurzweil’s stance is that “technology is a double-edged sword” and that it always has been, but that’s no reason to abandon the research.  Kurzweil also states that, “Virtually every­one’s mental capabilities will be enhanced by it within a decade.”  I hope it makes people smarter and not just more intelligent! 


Don’t Fear Artificial Intelligence


Kurzweil is the author of five books on artificial ­intelligence, including the recent New York Times best seller “How to Create a Mind.”

Two great thinkers see danger in AI. Here’s how to make it safe.

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it sur­passes human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-­defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually every­one’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—­another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar ­Conference on Recombinant DNA was organized in 1975 to ­assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major ad­vances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-­level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-­machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-­quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.



 

This article can also be found here.