Quantum Computing History and Development

This video not only includes a brief history of quantum computing, but also takes a look at the advances in physics that make quantum computers possible.  The video touches on everything from string theory to Moore’s law and the physical limits of transistors.  I’ve noticed that explanations of how quantum computers operate range all the way from the mundane to the almost esoteric.  I find the subject of quantum computing highly fascinating, and we will continue to refine our understanding of how these machines work here at Dawn of Giants.  Keep following, and don’t forget to subscribe if you want to learn more about this incredible subject.


Runtime: 14:24


This video can also be found here.  If you liked this video, make sure to stop by and give them a like.

Video Info:

Published on Nov 3, 2017

A short documentary covering the past, present & future of Quantum Computers. Quantum Computing will one day create the possibilities we all fantasise about in science fiction movies, and will push human evolution to the next level.

COPYRIGHT NOTICE: This video is for ‘non-profit’ educational purposes covered by the ‘fair use’ policy under ‘Section 107’ of the U.S. Copyright Act. For any content/collaboration related queries please contact us via private message.

SEE OUR RELATED VIDEOS….
Mind Over Matter Is Real: Experiments Reveal!
https://youtu.be/uSWY6WhHl_M
Quantum Gravity: A New Theory Of Everything
https://youtu.be/_v9eTvlLi-s
Are We In a Simulated Reality?
https://youtu.be/iK4tPDjXch8

PLEASE SUBSCRIBE & THANKS FOR WATCHING.

Intel’s 49-qubit Quantum Chip and Mobileye’s Self Driving Car, Presented at CES 2018

This video is a look at Intel’s new 49-qubit quantum chip and Mobileye’s self-driving/driver-assist technology.  The presentation took place at CES 2018.  Loihi is the name of Intel’s prototype neuromorphic chip.

Mike Davies, Jim Held, and Jon Tse appear in the presentation’s video expo to discuss the neuromorphic chip’s inspiration, goals, and possible applications.

Amnon Shashua, Senior Vice President of Intel and CEO/CTO of Mobileye, also presents self-driving and driver-assist technology.


Runtime: 20:20


This video can also be found here.

Video Info:

Published on Jan 9, 2018

Intel’s CES 2018 keynote focused on its 49-qubit quantum computing chip, VR applications for content, its AI self-learning chip, and an autonomous vehicles platform.

 

Humans 2.0 with Jason Silva

This is one of the Shots of Awe videos created by Jason Silva.  It’s called HUMAN 2.0.  I don’t think a description is in order here since all the Shots of Awe videos are short and sweet.

Runtime: 2:15

Video Info:

Published on Dec 2, 2014

“Your trivial-seeming self tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process.” – Ethan Zuckerman paraphrasing Kevin Kelly

Steven Johnson
“Chance favors the connected mind.”
http://www.ted.com/talks/steven_johns…

Additional footage courtesy of Monstro Design and http://nats.aero

For more information on Norton security, please go here: http://us.norton.com/boldlygo

Join Jason Silva every week as he freestyles his way into the complex systems of society, technology and human existence and discusses the truth and beauty of science in a form of existential jazz. New episodes every Tuesday.

Watch More Shots of Awe on TestTube http://testtube.com/shotsofawe

Subscribe now! http://www.youtube.com/subscription_c…

Jason Silva on Twitter http://twitter.com/jasonsilva

Jason Silva on Facebook http://facebook.com/jasonlsilva

Jason Silva on Google+ http://plus.google.com/10290664595165…

This video can also be found at https://www.youtube.com/watch?v=fXB5-iwNah0

The Singularity Isn’t Near by Paul Allen

This is a piece written by Paul Allen in which he presents his reasons for thinking a singularity will not occur until after 2045.  While I humbly disagree with some of Paul Allen’s assertions in this article, I must say that I respect Allen for admitting that “we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion.”  I also think he makes a salient point (and I am extrapolating this notion based on this article) about needing to have a more complete understanding of cognition before we can really delve into the science of creating a mind from scratch, so to speak.  

Then again, Ray Kurzweil now has the inconceivable resources and support of Google at his fingertips in order to accelerate his own research.  

One other thing I would like to address: in this article, Allen’s main premise is that the exponential growth in technology, which we have witnessed in the past, may not be as stable as many singularitarians would have you believe.  I can respect this view, but I would be remiss if I didn’t point out that Allen’s premise could work in the opposite direction just as easily.  Take the D-Wave quantum computer, for instance.  This computer represents a dramatic leap* forward in technological innovation which could actually compound the Law of Accelerating Returns beyond even its current exponential expansion.

*I refrain from using the obvious pun, quantum leap, when describing the D-Wave computer because by definition a quantum leap would actually be the smallest amount of progress one could conceivably make.  I heard that somewhere and thought it was amusing enough to repeat…

Credit: Technology Review

 

Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they’ll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It’s heady stuff.

While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn’t just a progression of steadily increasing capability, but is in fact exponentially accelerating—what Kurzweil calls the “Law of Accelerating Returns.” He writes that:

So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity … [1]

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.

Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these “laws” will work until they don’t. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different than the Moore’s Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil’s timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn’t behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reëvaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don’t support the overall Moore’s Law-style acceleration needed to get to the singularity on Kurzweil’s schedule.

The Complexity Brake

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end—the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain “bottom up” from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person’s living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren’t even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain’s neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing “start.”

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. “Brain duplication” strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn’t address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird’s ability to fly in various conditions, simply having a complete diagram of bird anatomy isn’t sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism’s behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM’s Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle—their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can’t leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson’s performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn’t happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can’t create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcan’s director for knowledge systems.

[1] Kurzweil, “The Law of Accelerating Returns,” March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM’s BlueGene/P that was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain’s neurons, though currently it happens many times more slowly than would happen in an actual brain.
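
For a rough sense of the scale this footnote describes, here is a back-of-envelope estimate using commonly cited ballpark figures; every number below is an order-of-magnitude assumption on my part, not a figure from the article.

```python
# Back-of-envelope brain-simulation estimate (all numbers are rough assumptions).
neurons = 1e11                  # ~100 billion neurons
synapses_per_neuron = 1e4       # ~10,000 synapses per neuron
avg_firing_rate_hz = 10         # ~10 spikes per second on average
ops_per_synaptic_event = 10     # arithmetic operations per synaptic update

ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz * ops_per_synaptic_event
print(f"~{ops_per_second:.0e} ops/s needed for real time")    # ~1e+17

petaflop = 1e15
print(f"slowdown on a 1-petaflop machine: ~{ops_per_second / petaflop:.0f}x")  # ~100x
```

On those assumptions, a petaflop-class machine would run such a simulation roughly a hundred times slower than real time, which lines up with the footnote’s “many times more slowly.”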

UPDATE: Ray Kurzweil responds here.

This article can also be found at http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/

The Future Of Quantum Computing

This is a video featuring D-Wave CEO Vern Brownell presenting at the Compute Midwest conference; it was posted by Tech Midwest on YouTube.  The video’s full title is “The Future Of Quantum Computing: Vern Brownell, D-Wave CEO @ Compute Midwest.”  The video can also be found on the Tech Midwest website, and I have posted the article from Tech Midwest below the video.

Here is the video from its YouTube source https://www.youtube.com/watch?v=Zdd88aC0VwA

Runtime: 48:57

From the Tech Midwest website:

How will quantum computing change the world?
Learn about a radical new machine that could solve humanity’s most complex problems.

In this video, Vern Brownell (CEO, D-Wave) shared insights on “The Future Of Quantum Computing” at our Compute Midwest conference.

D-Wave made history by building the world’s first commercial quantum computer.

At a cost of over 10 million dollars, Google, NASA and Lockheed Martin were first to buy this machine.

Imagine the future with a technology so powerful, it could help find cures for cancer, new pharmaceutical drug R&D and much more.

Vern Brownell, CEO of D-Wave, Speaks at Compute Midwest 2014. (Photo Credit: Westside Studio)

 

Vern Brownell

CEO, D-Wave

(Photo Credit: Westside Studio)

Vern Brownell is the CEO of D-Wave, a company that built the world’s first quantum computer.

He joined D-Wave from Egenera, a company he founded and at which he held various executive roles including CEO. Egenera was a pioneer of infrastructure virtualization and today is a global leader in converged infrastructure and cloud management software.

Prior to his tenure at Egenera, Mr. Brownell served as the Chief Technology Officer at Goldman Sachs.

He holds an MBA degree from Anna Maria College and a BEng. degree in Electrical Engineering from Stevens Institute of Technology.

 

About Compute Midwest

Vern Brownell, CEO of D-Wave, Speaks at Compute Midwest 2014. (Photo Credit: Westside Studio)

As seen in some of the world’s top tech publications like Forbes, Fast Company and The Next Web, Compute Midwest is a 2-day convergence of tech: new people, new ideas and new frontiers in Kansas City.

You don’t want to miss Compute Midwest 2015! Sign up & get special access to ticket discounts, news & more. Join us this fall to imagine the future & hear the stories of innovators who have built ideas that changed the world.

Video Info (YouTube):

Published on Dec 4, 2014

In this video, Vern Brownell (CEO, D-Wave) shared insights on “The Future Of Quantum Computing” at our Compute Midwest conference.

Learn more about Compute Midwest (http://www.computemidwest.com)

Learn more about D-Wave (http://www.dwavesys.com)

 

Google and NASA’s Quantum Artificial Intelligence Lab

This is a video I posted previously along with an article from Wired (on quantum computing with D-Wave), but it’s a good stand-alone video and I thought it deserved to have its own place at Dawn of Giants.  The video is presented by Jason Silva, the host of Brain Games (great show!).

The video (“Google and NASA’s Quantum Artificial Intelligence Lab”) can be found on YouTube at https://www.youtube.com/watch?v=CMdHDHEuOUE

Runtime: 6:28

Video sources:

Published on Oct 11, 2013

A peek at the early days of the Quantum AI Lab: a partnership between NASA, Google, USRA, and a 512-qubit D-Wave Two quantum computer. Learn more at http://google.com/+QuantumAILab

Wired Magazine Reviews D-Wave Quantum Computer

Hello all!  This is an article I found on the Wired website at http://www.wired.com/2014/05/quantum-computing

The article is called “The Revolutionary Quantum Computer That May Not Be Quantum at All” and it is another review of the work being done at D-Wave, which claims on its website, “The Quantum Computing Era Has Begun.”

The Revolutionary Quantum Computer That May Not Be Quantum at All

Google owns a lot of computers—perhaps a million servers stitched together into the fastest, most powerful artificial intelligence on the planet. But last August, Google teamed up with NASA to acquire what may be the search giant’s most powerful piece of hardware yet. It’s certainly the strangest.

Located at NASA Ames Research Center in Mountain View, California, a couple of miles from the Googleplex, the machine is literally a black box, 10 feet high. It’s mostly a freezer, and it contains a single, remarkable computer chip—based not on the usual silicon but on tiny loops of niobium wire, cooled to a temperature 150 times colder than deep space. The name of the box, and also the company that built it, is written in big, science-fiction-y letters on one side: D-WAVE. Executives from the company that built it say that the black box is the world’s first practical quantum computer, a device that uses radical new physics to crunch numbers faster than any comparable machine on earth. If they’re right, it’s a profound breakthrough. The question is: Are they?

Hartmut Neven, a computer scientist at Google, persuaded his bosses to go in with NASA on the D-Wave. His lab is now partly dedicated to pounding on the machine, throwing problems at it to see what it can do. An animated, academic-tongued German, Neven founded one of the first successful image-recognition firms; Google bought it in 2006 to do computer-vision work for projects ranging from Picasa to Google Glass. He works on a category of computational problems called optimization—finding the solution to mathematical conundrums with lots of constraints, like the best path among many possible routes to a destination, the right place to drill for oil, and efficient moves for a manufacturing robot. Optimization is a key part of Google’s seemingly magical facility with data, and Neven says the techniques the company uses are starting to peak. “They’re about as fast as they’ll ever be,” he says.

That leaves Google—and all of computer science, really—just two choices: Build ever bigger, more power-hungry silicon-based computers. Or find a new way out, a radical new approach to computation that can do in an instant what all those other million traditional machines, working together, could never pull off, even if they worked for years.

That, Neven hopes, is a quantum computer. A typical laptop and the hangars full of servers that power Google—what quantum scientists charmingly call “classical machines”—do math with “bits” that flip between 1 and 0, representing a single number in a calculation. But quantum computers use quantum bits, qubits, which can exist as 1s and 0s at the same time. They can operate on many numbers simultaneously. It’s a mind-bending, late-night-in-the-dorm-room concept that lets a quantum computer calculate at ridiculously fast speeds.
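
To make the “1s and 0s at the same time” idea slightly more concrete, here is a minimal statevector sketch in plain Python/NumPy. It is purely illustrative (and nothing like how the D-Wave is actually programmed): an n-qubit register is described by 2^n complex amplitudes, so putting every qubit into superposition yields a single state that spans all 2^n values at once.

```python
# Minimal statevector picture of superposition (illustrative only).
import numpy as np

def hadamard_all(n_qubits):
    """State of n qubits after a Hadamard gate on each one:
    an equal superposition of all 2**n basis states."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.array([1.0])                      # start in |00...0>
    for _ in range(n_qubits):
        state = np.kron(state, H @ np.array([1.0, 0.0]))
    return state

state = hadamard_all(3)
print(state)                                     # 8 equal amplitudes of 1/sqrt(8)
print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))  # probabilities sum to 1
```

The catch, of course, is that measuring the register collapses it to a single value, which is why useful quantum algorithms need more cleverness than “try everything in parallel.”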

Unless it’s not a quantum computer at all. Quantum computing is so new and so weird that no one is entirely sure whether the D-Wave is a quantum computer or just a very quirky classical one. Not even the people who build it know exactly how it works and what it can do. That’s what Neven is trying to figure out, sitting in his lab, week in, week out, patiently learning to talk to the D-Wave. If he can figure out the puzzle—what this box can do that nothing else can, and how—then boom. “It’s what we call ‘quantum supremacy,’” he says. “Essentially, something that cannot be matched anymore by classical machines.” It would be, in short, a new computer age.

A former wrestler short-listed for Canada’s Olympic team, D-Wave founder Geordie Rose is barrel-chested and possessed of arms that look ready to pin skeptics to the ground. When I meet him at D-Wave’s headquarters in Burnaby, British Columbia, he wears a persistent, slight frown beneath bushy eyebrows. “We want to be the kind of company that Intel, Microsoft, Google are,” Rose says. “The big flagship $100 billion enterprises that spawn entirely new types of technology and ecosystems. And I think we’re close. What we’re trying to do is build the most kick-ass computers that have ever existed in the history of the world.”

The office is a bustle of activity; in the back rooms technicians peer into microscopes, looking for imperfections in the latest batch of quantum chips to come out of their fab lab. A pair of shoulder-high helium tanks stand next to three massive black metal cases, where more techs attempt to weave together their spilt guts of wires. Jeremy Hilton, D-Wave’s vice president of processor development, gestures to one of the cases. “They look nice, but appropriately for a startup, they’re all just inexpensive custom components. We buy that stuff and snap it together.” The really expensive work was figuring out how to build a quantum computer in the first place.

Like a lot of exciting ideas in physics, this one originates with Richard Feynman. In the 1980s, he suggested that quantum computing would allow for some radical new math. Up here in the macroscale universe, to our macroscale brains, matter looks pretty stable. But that’s because we can’t perceive the subatomic, quantum scale. Way down there, matter is much stranger. Photons—electromagnetic energy such as light and x-rays—can act like waves or like particles, depending on how you look at them, for example. Or, even more weirdly, if you link the quantum properties of two subatomic particles, changing one changes the other in the exact same way. It’s called entanglement, and it works even if they’re miles apart, via an unknown mechanism that seems to move faster than the speed of light.

Knowing all this, Feynman suggested that if you could control the properties of subatomic particles, you could hold them in a state of superposition—being more than one thing at once. This would, he argued, allow for new forms of computation. In a classical computer, bits are actually electrical charge—on or off, 1 or 0. In a quantum computer, they could be both at the same time.

It was just a thought experiment until 1994, when mathematician Peter Shor hit upon a killer app: a quantum algorithm that could find the prime factors of massive numbers. Cryptography, the science of making and breaking codes, relies on a quirk of math, which is that if you multiply two large prime numbers together, it’s devilishly hard to break the answer back down into its constituent parts. You need huge amounts of processing power and lots of time. But if you had a quantum computer and Shor’s algorithm, you could cheat that math—and destroy all existing cryptography. “Suddenly,” says John Smolin, a quantum computer researcher at IBM, “everybody was into it.”
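
As a toy illustration of the asymmetry Shor’s algorithm threatens, the snippet below multiplies two primes (instant) and then recovers a factor by brute-force trial division (already slow at this size, hopeless at real key sizes). The primes are small placeholders of my own choosing; actual RSA moduli run to hundreds of digits, and Shor’s algorithm itself requires a large fault-tolerant quantum computer, which is not what is sketched here.

```python
# The easy direction (multiplying primes) vs. the hard direction (factoring).
import time

p, q = 104729, 1299709          # small example primes (placeholders)
n = p * q                       # easy: a single multiplication

def trial_division(n):
    """Return the smallest factor of n greater than 1, by brute force."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

start = time.perf_counter()
factor = trial_division(n)
elapsed = time.perf_counter() - start
print(f"{n} = {factor} * {n // factor}  (recovered in {elapsed:.3f}s)")
```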

That includes Geordie Rose. A child of two academics, he grew up in the backwoods of Ontario and became fascinated by physics and artificial intelligence. While pursuing his doctorate at the University of British Columbia in 1999, he read Explorations in Quantum Computing, one of the first books to theorize how a quantum computer might work, written by NASA scientist—and former research assistant to Stephen Hawking—Colin Williams. (Williams now works at D-Wave.)

Reading the book, Rose had two epiphanies. First, he wasn’t going to make it in academia. “I never was able to find a place in science,” he says. But he felt he had the bullheaded tenacity, honed by years of wrestling, to be an entrepreneur. “I was good at putting together things that were really ambitious, without thinking they were impossible.” At a time when lots of smart people argued that quantum computers could never work, he fell in love with the idea of not only making one but selling it.

With about $100,000 in seed funding from an entrepreneurship professor, Rose and a group of university colleagues founded D-Wave. They aimed at an incubator model, setting out to find and invest in whoever was on track to make a practical, working device. The problem: Nobody was close.

At the time, most scientists were pursuing a version of quantum computing called the gate model. In this architecture, you trap individual ions or photons to use as qubits and chain them together in logic gates like the ones in regular computer circuits—the ands, ors, nots, and so on that assemble into how a computer thinks. The difference, of course, is that the qubits could interact in much more complex ways, thanks to superposition, entanglement, and interference.

But qubits really don’t like to stay in a state of super­position, what’s called coherence. A single molecule of air can knock a qubit out of coherence. The simple act of observing the quantum world collapses all of its every-number-at-once quantumness into stochastic, humdrum, non­quantum reality. So you have to shield qubits—from everything. Heat or other “noise,” in physics terms, screws up a quantum computer, rendering it useless.

You’re left with a gorgeous paradox: Even if you successfully run a calculation, you can’t easily find that out, because looking at it collapses your superpositioned quantum calculation to a single state, picked at random from all possible superpositions and thus likely totally wrong. You ask the computer for the answer and get garbage.

Lashed to these unforgiving physics, scientists had built systems with only two or three qubits at best. They were wickedly fast but too underpowered to solve any but the most prosaic, lab-scale problems. But Rose didn’t want just two or three qubits. He wanted 1,000. And he wanted a device he could sell, within 10 years. He needed a way to make qubits that weren’t so fragile.

“WHAT WE’RE TRYING TO DO IS BUILD THE MOST KICK-ASS COMPUTERS THAT HAVE EVER EXISTED IN THE HISTORY OF THE WORLD.”

In 2003, he found one. Rose met Eric Ladizinsky, a tall, sporty scientist at NASA’s Jet Propulsion Lab who was an expert in superconducting quantum interference devices, or Squids. When Ladizinsky supercooled teensy loops of niobium metal to near absolute zero, magnetic fields ran around the loops in two opposite directions at once. To a physicist, electricity and magnetism are the same thing, so Ladizinsky realized he was seeing superpositioning of electrons. He also suspected these loops could become entangled, and that the charges could quantum-tunnel through the chip from one loop to another. In other words, he could use the niobium loops as qubits. (The field running in one direction would be a 1; the opposing field would be a 0.) The best part: The loops themselves were relatively big, a fraction of a millimeter. A regular microchip fab lab could build them.

The two men thought about using the niobium loops to make a gate-model computer, but they worried the gate model would be too susceptible to noise and timing errors. They had an alternative, though—an architecture that seemed easier to build. Called adiabatic annealing, it could perform only one specific computational trick: solving those rule-laden optimization problems. It wouldn’t be a general-purpose computer, but optimization is enormously valuable. Anyone who uses machine learning—Google, Wall Street, medicine—does it all the time. It’s how you train an artificial intelligence to recognize patterns. It’s familiar. It’s hard. And, Rose realized, it would have an immediate market value if they could do it faster.

In a traditional computer, annealing works like this: You mathematically translate your problem into a landscape of peaks and valleys. The goal is to try to find the lowest valley, which represents the optimized state of the system. In this metaphor, the computer rolls a rock around the problem-­scape until it settles into the lowest-possible valley, and that’s your answer. But a conventional computer often gets stuck in a valley that isn’t really lowest at all. The algorithm can’t see over the edge of the nearest mountain to know if there’s an even lower vale. A quantum annealer, Rose and Ladizinsky realized, could perform tricks that avoid this limitation. They could take a chip full of qubits and tune each one to a higher or lower energy state, turning the chip into a representation of the rocky landscape. But thanks to superposition and entanglement between the qubits, the chip could computationally tunnel through the landscape. It would be far less likely to get stuck in a valley that wasn’t the lowest, and it would find an answer far more quickly.
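
The “rock rolling around the landscape” description is classical simulated annealing, and it is easy to sketch. The toy below (my own illustrative example, not D-Wave code) searches a deliberately bumpy one-dimensional cost function; the temperature term is what occasionally lets the search climb out of a valley that isn’t the lowest, which is precisely the limitation quantum tunneling is supposed to sidestep.

```python
# Classical simulated annealing on a bumpy 1-D "landscape" (illustrative only).
import math, random

def energy(x):
    # Rugged cost function: a broad bowl with ripples; global minimum near x = -0.5.
    return 0.1 * x * x + math.sin(3 * x) + 1

def anneal(steps=20000, temp=2.0, cooling=0.9995):
    x = random.uniform(-10, 10)
    best_x, best_e = x, energy(x)
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)          # small local move
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with probability
        # exp(-delta / temp), which is what lets the search hop over barriers.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
        temp *= cooling                                     # slowly "cool" the system
    return best_x, best_e

random.seed(0)
print(anneal())   # typically lands near the global minimum around x ≈ -0.5
```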

The guts of a D-Wave don’t look like any other computer. Instead of metals etched into silicon, the central processor is made of loops of the metal niobium, surrounded by components designed to protect it from heat, vibration, and electromagnetic noise. Isolate those niobium loops well enough from the outside world and you get a quantum computer, thousands of times faster than the machine on your desk—or so the company claims. —Cameron Bird

Thomas Porostocky

A. Deep Freezer
A massive refrigeration system uses liquid helium to cool the D-Wave chip to 20 millikelvin—or 150 times colder than interstellar space.

B. Heat Exhaust
Gold-plated copper disks draw heat up and away from the chip to keep vibration and other energy from disturbing the quantum state of the processor.

C. Niobium Loops
A grid of hundreds of tiny niobium loops serve as the quantum bits, or qubits, the heart of the processor. When cooled, they exhibit quantum-mechanical behavior.

D. Noise Shields
The 190-plus wires that connect the components of the chip are wrapped in metal to shield against magnetic fields. Just one channel transmits information to the outside world—an optical fiber cable.

Better yet, Rose and Ladizinsky predicted that a quantum annealer wouldn’t be as fragile as a gate system. They wouldn’t need to precisely time the interactions of individual qubits. And they suspected their machine would work even if only some of the qubits were entangled or tunneling; those functioning qubits would still help solve the problem more quickly. And since the answer a quantum annealer kicks out is the lowest energy state, they also expected it would be more robust, more likely to survive the observation an operator has to make to get the answer out. “The adiabatic model is intrinsically just less corrupted by noise,” says Williams, the guy who wrote the book that got Rose started.

By 2003, that vision was attracting investment. Venture capitalist Steve Jurvetson wanted to get in on what he saw as the next big wave of computing that would propel machine intelligence everywhere—from search engines to self-driving cars. A smart Wall Street bank, Jurvetson says, could get a huge edge on its competition by being the first to use a quantum computer to create ever-smarter trading algorithms. He imagines himself as a banker with a D-Wave machine: “A torrent of cash comes my way if I do this well,” he says. And for a bank, the $10 million cost of a computer is peanuts. “Oh, by the way, maybe I buy exclusive access to D-Wave. Maybe I buy all your capacity! That’s just, like, a no-brainer to me.” D-Wave pulled in $100 million from investors like Jeff Bezos and In-Q-Tel, the venture capital arm of the CIA.

The D-Wave team huddled in a rented lab at the University of British Columbia, trying to learn how to control those tiny loops of niobium. Soon they had a one-qubit system. “It was a crappy, duct-taped-together thing,” Rose says. “Then we had two qubits. And then four.” When their designs got more complicated, they moved to larger-scale industrial fabrication.

As I watch, Hilton pulls out one of the wafers just back from the fab facility. It’s a shiny black disc the size of a large dinner plate, inscribed with 130 copies of their latest 512-qubit chip. Peering in closely, I can just make out the chips, each about 3 millimeters square. The niobium wire for each qubit is only 2 microns wide, but it’s 700 microns long. If you squint very closely you can spot one: a piece of the quantum world, visible to the naked eye.

Hilton walks to one of the giant, refrigerated D-Wave black boxes and opens the door. Inside, an inverted pyramid of wire-bedecked, gold-plated copper discs hangs from the ceiling. This is the guts of the device. It looks like a steampunk chandelier, but as Hilton explains, the gold plating is key: It conducts heat—noise—up and out of the device. At the bottom of the chandelier, hanging at chest height, is what they call the coffee can, the enclosure for the chip. “This is where we go from our everyday world,” Hilton says, “to a unique place in the universe.”

By 2007, D-Wave had managed to produce a 16-qubit system, the first one complicated enough to run actual problems. They gave it three real-world challenges: solving a sudoku, sorting people at a dinner table, and matching a molecule to a set of molecules in a database. The problems wouldn’t challenge a decrepit Dell. But they were all about optimization, and the chip actually solved them. “That was really the first time when I said, holy crap, you know, this thing’s actually doing what we designed it to do,” Rose says. “Back then we had no idea if it was going to work at all.” But 16 qubits wasn’t nearly enough to tackle a problem that would be of value to a paying customer. He kept pushing his team, producing up to three new designs a year, always aiming to cram more qubits together.

When the team gathers for lunch in D-Wave’s conference room, Rose jokes about his own reputation as a hard-driving taskmaster. Hilton is walking around showing off the 512-qubit chip that Google just bought, but Rose is demanding the 1,000-qubit one. “We’re never happy,” Rose says. “We always want something better.”

“Geordie always focuses on the trajectory,” Hilton says. “He always wants what’s next.”

In 2010, D-Wave’s first customers came calling. Lockheed Martin was wrestling with particularly tough optimization problems in their flight control systems. So a manager named Greg Tallant took a team to Burnaby. “We were intrigued with what we saw,” Tallant says. But they wanted proof. They gave D-Wave a test: Find the error in an algorithm. Within a few weeks, D-Wave developed a way to program its machine to find the error. Convinced, Lockheed Martin leased a $10 million, 128-qubit machine that would live at a USC lab.

The next clients were Google and NASA. Hartmut Neven was another old friend of Rose’s; they shared a fascination with machine intelligence, and Neven had long hoped to start a quantum lab at Google. NASA was intrigued, because it often faced wickedly hard best-fit problems. “We have the Curiosity rover on Mars, and if we want to move it from point A to point B there are a lot of possible routes—that’s a classic optimization problem,” says NASA’s Rupak Biswas. But before Google executives would put down millions, they wanted to know the D-Wave worked. In the spring of 2013, Rose agreed to hire a third party to run a series of Neven-designed tests, pitting D-Wave against traditional optimizers running on regular computers. Catherine McGeoch, a computer scientist at Amherst College, agreed to run the tests, but only under the condition that she report her results publicly.

Rose quietly panicked. For all of his bluster—D-Wave routinely put out press releases boasting about its new devices—he wasn’t sure his black box would win the shoot-out. “One of the possible outcomes was that the thing would totally tank and suck,” Rose says. “And then she would publish all this stuff and it would be a horrible mess.”

IS THE D-WAVE ACTUALLY QUANTUM? IF NOISE IS DISENTANGLING THE QUBITS, IT’S JUST AN EXPENSIVE CLASSICAL COMPUTER.

McGeoch pitted the D-Wave against three pieces of off-the-shelf software. One was IBM’s CPLEX, a tool used by ConAgra, for instance, to crunch global market and weather data to find the optimum price at which to sell flour; the other two were well-known open source optimizers. McGeoch picked three mathematically chewy problems and ran them through the D-Wave and through an ordinary Lenovo desktop running the other software.

The results? D-Wave’s machine matched the competition—and in one case dramatically beat it. On two of the math problems, the D-Wave worked at the same pace as the classical solvers, hitting roughly the same accuracy. But on the hardest problem, it was much speedier, finding the answer in less than half a second, while CPLEX took half an hour. The D-Wave was 3,600 times faster. For the first time, D-Wave had seemingly objective evidence that its machine worked quantum magic. Rose was relieved; he later hired McGeoch as his new head of benchmarking. Google and NASA got a machine. D-Wave was now the first quantum computer company with real, commercial sales.

That’s when its troubles began.

Quantum scientists had long been skeptical of D-Wave. Academics tend to get suspicious when the private sector claims massive leaps in scientific knowledge. They frown on “science by press release,” and Geordie Rose’s bombastic proclamations smelled wrong. Back then, D-Wave had published little about its system. When Rose held a press conference in 2007 to show off the 16-qubit system, MIT quantum scientist Scott Aaronson wrote that the computer was “about as useful for industrial optimization problems as a roast-beef sandwich.” Plus, scientists doubted D-Wave could have gotten so far ahead of the state of the art. The most qubits anyone had ever got working was eight. So for D-Wave to boast of a 500-qubit machine? Nonsense. “They never seemed properly concerned about the noise model,” as IBM’s Smolin says. “Pretty early on, people became dismissive of it and we all sort of moved on.”

That changed when Lockheed Martin and USC acquired their quantum machine in 2011. Scientists realized they could finally test this mysterious box and see whether it stood up to the hype. Within months of the D-Wave installation at USC, researchers worldwide came calling, asking to run tests.

The first question was simple: Was the D-Wave system actually quantum? It might be solving problems, but if noise was disentangling the qubits, it was just an expensive classical computer, operating adiabatically but not with quantum speed. Daniel Lidar, a quantum scientist at USC who’d advised Lockheed on its D-Wave deal, figured out a clever way to answer the question. He ran thousands of instances of a problem on the D-Wave and charted the machine’s “success probability”—how likely it was to get the problem right—against the number of times it tried. The final curve was U-shaped. In other words, most of the time the machine either entirely succeeded or entirely failed. When he ran the same problems on a classical computer with an annealing optimizer, the pattern was different: The distribution clustered in the center, like a hill; this machine was sort of likely to get the problems right. Evidently, the D-Wave didn’t behave like an old-fashioned computer.
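
The bookkeeping behind that test is simple to picture: for every problem instance, record the fraction of repeated runs that returned the right answer, then histogram those fractions across instances. The sketch below uses made-up placeholder numbers, not Lidar’s data; a U-shaped histogram means most instances were either almost always or almost never solved, while a central hump is what the classical annealer produced.

```python
# Binning per-instance success probabilities into a histogram (placeholder data).
from collections import Counter

def success_histogram(success_fractions, n_bins=10):
    """Count how many instances fall into each equal-width success-probability bin."""
    counts = Counter(min(int(p * n_bins), n_bins - 1) for p in success_fractions)
    return [counts.get(b, 0) for b in range(n_bins)]

# Hypothetical example: most instances near 0.0 or 1.0 -> a U-shaped histogram.
fractions = [0.02, 0.05, 0.01, 0.97, 0.99, 0.95, 0.03, 0.98, 0.50, 0.96]
print(success_histogram(fractions))   # [4, 0, 0, 0, 0, 1, 0, 0, 0, 5]
```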

Lidar also ran the problems on a classical algorithm that simulated the way a quantum computer would solve a problem. The simulation wasn’t superfast, but it thought the same way a quantum computer did. And sure enough, it produced the same U shape as the D-Wave. At minimum the D-Wave acts more like a simulation of a quantum computer than like a conventional one.

Even Scott Aaronson was swayed. He told me the results were “reasonable evidence” of quantum behavior. If you look at the pattern of answers being produced, “then entanglement would be hard to avoid.” It’s the same message I heard from most scientists.

But to really be called a quantum computer, you also have to be, as Aaronson puts it, “productively quantum.” The behavior has to help things move faster. Quantum scientists pointed out that McGeoch hadn’t orchestrated a fair fight. D-Wave’s machine was a specialized device built to do optimizing problems. McGeoch had compared it to off-the-shelf software.

Matthias Troyer set out to even up the odds. A computer scientist at the Institute for Theoretical Physics in Zurich, Troyer tapped programming wiz Sergei Isakov to hot-rod a 20-year-old software optimizer designed for Cray supercomputers. Isakov spent a few weeks tuning it, and when it was ready, Troyer and Isakov’s team fed tens of thousands of problems into USC’s D-Wave and into their new and improved solver on an Intel desktop.

This time, the D-Wave wasn’t faster at all. In only one small subset of the problems did it race ahead of the conventional machine. Mostly, it only kept pace. “We find no evidence of quantum speedup,” Troyer’s paper soberly concluded. Rose had spent millions of dollars, but his machine couldn’t beat an Intel box.

What’s worse, as the problems got harder, the amount of time the D-Wave needed to solve them rose—at roughly the same rate as the old-school computers. This, Troyer says, is particularly bad news. If the D-Wave really was harnessing quantum dynamics, you’d expect the opposite. As the problems get harder, it should pull away from the Intels. Troyer and his team concluded that D-Wave did in fact have some quantum behavior, but it wasn’t using it productively. Why? Possibly, Troyer and Lidar say, it doesn’t have enough “coherence time.” For some reason its qubits aren’t qubitting—the quantum state of the niobium loops isn’t sustained.

One way to fix this problem, if indeed it’s a problem, might be to have more qubits running error correction. Lidar suspects D-Wave would need another 100—maybe 1,000—qubits checking its operations (though the physics here are so weird and new, he’s not sure how error correction would work). “I think that almost everybody would agree that without error correction this plane is not going to take off,” Lidar says.

Rose’s response to the new tests: “It’s total bullshit.”

D-Wave, he says, is a scrappy startup pushing a radical new computer, crafted from nothing by a handful of folks in Canada. From this point of view, Troyer had the edge. Sure, he was using standard Intel machines and classical software, but those benefited from decades’ and trillions of dollars’ worth of investment. The D-Wave acquitted itself admirably just by keeping pace. Troyer “had the best algorithm ever developed by a team of the top scientists in the world, finely tuned to compete on what this processor does, running on the fastest processors that humans have ever been able to build,” Rose says. And the D-Wave “is now competitive with those things, which is a remarkable step.”

But what about the speed issues? “Calibration errors,” he says. Programming a problem into the D-Wave is a manual process, tuning each qubit to the right level on the problem-solving landscape. If you don’t set those dials precisely right, “you might be specifying the wrong problem on the chip,” Rose says. As for noise, he admits it’s still an issue, but the next chip—the 1,000-qubit version codenamed Washington, coming out this fall—will reduce noise yet more. His team plans to replace the niobium loops with aluminum to reduce oxide buildup. “I don’t care if you build [a traditional computer] the size of the moon with interconnection at the speed of light, running the best algorithm that Google has ever come up with. It won’t matter, ’cause this thing will still kick your ass,” Rose says. Then he backs off a bit. “OK, everybody wants to get to that point—and Washington’s not gonna get us there. But Washington is a step in that direction.”

Or here’s another way to look at it, he tells me. Maybe the real problem with people trying to assess D-Wave is that they’re asking the wrong questions. Maybe his machine needs harder problems.

On its face, this sounds crazy. If plain old Intels are beating the D-Wave, why would the D-Wave win if the problems got tougher? Because the tests Troyer threw at the machine were random. On a tiny subset of those problems, the D-Wave system did better. Rose thinks the key will be zooming in on those success stories and figuring out what sets them apart—what advantage D-Wave had in those cases over the classical machine. In other words, he needs to figure out what sort of problems his machine is uniquely good at. Helmut Katzgraber, a quantum scientist at Texas A&M, cowrote a paper in April bolstering Rose’s point of view. Katzgraber argued that the optimization problems everyone was tossing at the D-Wave were, indeed, too simple. The Intel machines could easily keep pace. If you think of the problem as a rugged surface and the solvers as trying to find the lowest spot, these problems “look like a bumpy golf course. What I’m proposing is something that looks like the Alps,” he says.

In one sense, this sounds like a classic case of moving the goalposts. D-Wave will just keep on redefining the problem until it wins. But D-Wave’s customers believe this is, in fact, what they need to do. They’re testing and retesting the machine to figure out what it’s good at. At Lockheed Martin, Greg Tallant has found that some problems run faster on the D-Wave and some don’t. At Google, Neven has run over 500,000 problems on his D-Wave and finds the same. He’s used the D-Wave to train image-recognizing algorithms for mobile phones that are more efficient than any before. He produced a car-recognition algorithm better than anything he could do on a regular silicon machine. He’s also working on a way for Google Glass to detect when you’re winking (on purpose) and snap a picture. “When surgeons go into surgery they have many scalpels, a big one, a small one,” he says. “You have to think of quantum optimization as the sharp scalpel—the specific tool.”

The dream of quantum computing has always been shrouded in sci-fi hope and hoopla—with giddy predictions of busted crypto, multiverse calculations, and the entire world of computation turned upside down. But it may be that quantum computing arrives in a slower, sideways fashion: as a set of devices used rarely, in the odd places where the problems we have are spoken in their curious language. Quantum computing won’t run on your phone—but maybe some quantum process of Google’s will be key in training the phone to recognize your vocal quirks and make voice recognition better. Maybe it’ll finally teach computers to recognize faces or luggage. Or maybe, like the integrated circuit before it, no one will figure out the best-use cases until they have hardware that works reliably. It’s a more modest way to look at this long-heralded thunderbolt of a technology. But this may be how the quantum era begins: not with a bang, but a glimmer.

This is a fun little piece on transhumanism…

This is a documentary I found on YouTube called “Bionics, Transhumanism, and the end of Evolution Full Documentary”.

I like a little more hard science in my documentaries, but this one is worth the watch for the robotics, if nothing else.

Runtime: 52 minutes

 

 

From the site:

Published on Mar 17, 2014

Transhumanism (abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities.[1] Transhumanist thinkers[who?] study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of developing and using such technologies.[2] The most common thesis put forward is that human beings may eventually be able to transform themselves into beings with such greatly expanded abilities as to merit the label “posthuman”.[1]

The contemporary meaning of the term transhumanism was foreshadowed by one of the first professors of futurology, FM-2030, who taught “new concepts of the Human” at The New School in the 1960s, when he began to identify people who adopt technologies, lifestyles and worldviews transitional to “posthumanity” as “transhuman”.[3] This hypothesis would lay the intellectual groundwork for the British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990, and organizing in California an intelligentsia that has since grown into the worldwide transhumanist movement.[3][4][5]

Influenced by seminal works of science fiction, the transhumanist vision of a transformed future humanity has attracted many supporters and detractors from a wide range of perspectives.[3] Transhumanism has been characterized by one critic, Francis Fukuyama, as among the world’s most dangerous ideas,[6] to which Ronald Bailey countered that it is rather the “movement that epitomizes the most daring, courageous, imaginative,

We share information only for educational purposes
Subscribe & Join us :
http://www.youtube.com/user/Extraordi…
Don’t Forget To LIKE this video!

This documentary and the rest of the documentaries presented relate to important times and figures in history, historic places and sites, archaeology, science, conspiracy theories, and education.
The Topics of these video documentaries are varied and cover ancient history, Rome, Greece, Egypt, science, technology, nature, planet earth, the solar system, the universe, World wars, Battles, education, Biographies, television, archaeology, Illuminati, Area 51, serial killers, paranormal, supernatural, cults, government cover-ups, corruption, martial arts, space, aliens, ufos, conspiracy theories, Annunaki, Nibiru, Nephilim, satanic rituals, religion, strange phenomenon, origins of Mankind

Michio Kaku Discussing Transhuman Technologies

In this news piece*, renowned physicist Dr. Michio Kaku talks about recording and transferring thoughts in mice (yes, it’s actually being done in labs right now!) and speculates on the ability to digitally upload consciousness and basically become immortal.

I liked the part about thought-directed computers (aka neural or brain-computer interfaces).  I can’t wait until I can do these posts directly from my brain!

How about the pill that slows down time?  That was a new one to me… Wow!

 

 

*The title of this video is “Uploading Consciousness & Digital Immortality | Interview with Theoretical Physicist Michio Kaku” – The sources are listed below:

Published on Mar 29, 2014

Breaking the Set’s Manuel Rapalo speaks with theoretical physicist Michio Kaku about his latest book ‘The Future of the Mind,’ discussing how realistic it would be to digitally upload memories and consciousness, and why we’re living in the ‘Golden Age’ of studying the human mind.

LIKE Breaking the Set @ http://fb.me/JournalistAbbyMartin
FOLLOW Manuel Rapalo @ http://twitter.com/Manuel_Rapalo

The birth of the computer that will change everything…

Hi all!  I posted a video related to this article and you can find that at “Quantum Computing and Transhumanism”.

The excerpt below is from the Time Magazine article “The Quantum Quest for a Revolutionary Computer.”  The entire article is available only to subscribers, but this is what they let non-subscribers see, and I think it’s worth a look.  I’ll be posting a lot more on quantum computing since it is one of my high-interest categories.

Quantum computing uses strange subatomic behavior to exponentially speed up processing. It could be a revolution, or it could be wishful thinking

For years astronomers have believed that the coldest place in the universe is a massive gas cloud 5,000 light-years from Earth called the Boomerang Nebula, where the temperature hovers at around –458°F, just a whisker above absolute zero. But as it turns out, the scientists have been off by about 5,000 light-years. The coldest place in the universe is actually in a small city directly east of Vancouver called Burnaby. Burnaby is the headquarters of a computer firm called D-Wave. Its flagship product, the D-Wave Two, of which there are five in existence, is a black box 10 ft. high. Inside is a cylindrical cooling apparatus containing a niobium computer chip that’s been chilled to –459.6°F, almost 2° colder than the Boomerang Nebula.

The D-Wave Two is an unusual computer, and D-Wave is an unusual company. It’s small, and it has very few customers, but they’re blue-chip: they include the defense contractor Lockheed Martin; a computing lab that’s hosted by NASA and largely funded by Google; and a U.S. intelligence agency that D-Wave executives decline to name.

The reason D-Wave has so few customers is that it makes a new type of computer called a quantum computer that’s so radical and strange, people are still trying to figure out what it’s for and how to use it. It could represent an enormous new source of computing power–it has the potential to solve problems that would take conventional computers centuries, with revolutionary consequences for fields ranging from cryptography to nanotechnology, pharmaceuticals to artificial intelligence.

 

Like I said, not a whole lot here.  Make sure to check my other posts for more information on this topic; it’s sure to be a hot one!