Peter Voss Interview on Artificial General Intelligence

This is an interview with Peter Voss of Optimal talking about artificial general intelligence. One of the things Voss talks about is the skepticism that is a common reaction to talk of creating strong AI, and why (as Tony Robbins always says) the past does not equal the future. He also talks about why he thinks Ray Kurzweil’s prediction that AGI won’t be achieved for another 20 years is wrong – (and I gotta say, he makes a good point). If you are interested in artificial intelligence or ethics in technology, then you’ll want to watch this one…

And don’t worry, the line drawing effect at the beginning of the video only lasts a minute.

Runtime: 39:55

This video can also be found at

Video Info:

Published on Jan 8, 2013

Peter Voss is the founder and CEO of Adaptive A.I. Inc, an R&D company developing a high-level general intelligence (AGI) engine. He is also founder and CTO of Smart Action Company LLC, which builds and supplies AGI-based virtual contact-center agents — intelligent, automated phone operators.

Peter started his career as an entrepreneur, inventor, engineer and scientist at age 16. After several years of experience in electronics engineering, at age 25 he started a company to provide advanced custom hardware and software solutions. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he studied a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving new breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., and last year founded Smart Action Company as its commercialization division.

Peter considers himself a free-minds-and-markets Extropian, and often writes and presents on philosophical topics including rational ethics, freewill and artificial minds. He is also deeply involved with futurism and life-extension.

My main occupation is research in high-level, general (domain independent, autonomous) Artificial Intelligence — “Adaptive A.I. Inc.”

I believe that integrating insights from the following areas of cognitive science is crucial for rapid progress in this field:

Philosophy/epistemology – understanding the true nature of knowledge
Cognitive psychology (incl. developmental & psychometric) for analysis of cognition – and especially – general conceptual intelligence.
Computer science – self-modifying systems, combining new connectionist pattern manipulation techniques with ‘traditional’ AI engineering.

Anyone who shares my passion – and/or concerns – for this field is welcome to contact me for brainstorming and possible collaboration.

My other big passion is for exploring what I call Optimal Living: Maximizing both the quantity & quality of life. I see personal responsibility and optimizing knowledge acquisition as key. Specific interests include:

Rationality, as a means for knowledge. I’m largely sympathetic to the philosophy of Objectivism, and have done quite a bit of work on developing a rational approach to (personal & social) ethics.
Health (quality): physical, financial, cognitive, and emotional (passions, meaningful relationships, appreciation of art, etc.). Psychology: IQ & EQ.
Longevity (quantity): general research, CRON (calorie restriction), cryonics
Environment: economic, social, political systems conducive to Optimal Living.
These interests logically lead to an interest in Futurism, in technology for improving life – overcoming limits to personal growth & improvement. The transhumanist philosophy of Extropianism best embodies this quest. Specific technologies that seem to hold the most promise include AI, Nanotechnology, & various health & longevity approaches mentioned above.

I always enjoy meeting new people to explore ideas, and to have my views critiqued. To this end I am involved in a number of discussion groups and salons (e.g. ‘Kifune’ futurist dinner/discussion group). Along the way I’m trying to develop and learn the complex art of constructive dialog.

Interview done at SENS party LA 20th Dec 2012.



When Will We Be Transhuman? Seven Conditions for Attaining Transhumanism by Kyle Munkittrick

Here’s an article from Discover Magazine called When Will We Be Transhuman? Seven Conditions for Attaining Transhumanism.  The article is short, but well thought out.

When Will We Be Transhuman? Seven Conditions for Attaining Transhumanism

By Kyle Munkittrick | July 16, 2011 9:53 am

The future is impossible to predict. But that’s not going to stop people from trying. We can at least pretend to know where it is we want humanity to go. We hope that laws we craft, the technologies we invent, our social habits and our ways of thinking are small forces that, when combined over time, move our species towards a better existence. The question is, How will we know if we are making progress?

As a movement philosophy, transhumanism and its proponents argue for a future of ageless bodies, transcendent experiences, and extraordinary minds. Not everyone supports every aspect of transhumanism, but you’d be amazed at how neatly current political struggles and technological progress point toward a transhuman future. Transhumanism isn’t just about cybernetics and robot bodies. Social and political progress must accompany the technological and biological advances for transhumanism to become a reality.

But how will we be able to tell when the pieces finally do fall into place? I’ve been trying to answer that question ever since Tyler Cowen at Marginal Revolution was asked a while back by his readers: What are the exact conditions for counting “transhumanism” as having been attained? In an attempt to answer, I responded with what I saw as the three key indicators:

  1. Medical modifications that permanently alter or replace a function of the human body become prolific.
  2. Our social understanding of aging loses the “virtue of necessity” aspect and society begins to treat aging as a disease.
  3. Rights discourse would shift from who we include among humans (i.e. should homosexuals have marriage rights?) to a system flexible enough to easily bring in sentient non-humans.

As I groped through the intellectual dark for these three points, it became clear that the precise technology and how it worked was unimportant. Instead, we need to figure out how technology may change our lives and our ways of living. Unlike the infamous jetpack, which defined the failed futurama of the 20th century, the 21st century needs broader progress markers. Here are seven things to look for in the coming centuries that will let us know if transhumanism is here.

When we think of the future, we think of technology. But too often, we think of really pointless technology – flying cars or self-tying sneakers or ray guns. Those things won’t change the way life happens. Not the way the washing machine or the cell phone changed the way life happens. Those are real inventions. It is in that spirit that I considered indicators of transhumanism. What matters is how a technology changes our definition of a “normal” human. Think of it this way: any one of these indicators has been fulfilled when at least a few of the people you interact with on any given day utilize the technology. With that mindset, I propose the following seven changes as indicators that transhumanism has been attained.

1. Prosthetics are Preferred: The arrival of prosthetics and implants for organs and limbs that are as good as or better than the original. A fairly accurate test for the quality of prosthetics would be voluntary amputations. Those who use prosthetics would compete with or surpass non-amputees in physical performances and athletic competitions. Included in this indicator are cochlear and optic implants, bionic limbs and artificial organs that are within species-typical functioning and readily available. A key social indicator will be that terminology around being “disabled” and “handicapped” would become anachronous. If you ever find yourself seriously considering having your birth-given hand lopped off and replaced with a cybernetic one, you can tick off this box on your transhuman checklist.

2. Better Brains: There are three ways we could improve our cognition. In order of likelihood of being used in the near future they are: cognitive enhancing drugs, genetic engineering, or neuro-implants/prosthetic cyberbrains. When the average person wakes up, brews a pot of coffee and pops an over-the-counter stimulant as powerful as or more powerful than modafinil, go ahead and count this condition achieved. Genetic engineering and cyberbrains will be improvements in degree and function, but not in purpose. Any one of these becoming commonplace would indicate that we no longer cling to the bias that going beyond the intelligence dished out by the genetic and environmental lottery is “cheating.”

3. Artificial Assistance: Artificial Intelligence (AI) and Augmented Reality (AR) integrated into personal, everyday behaviors. In the same way Google search and Wikipedia changed the way we research and remember, AI and AR could alter the way we think and interact. Daedalus in Deus Ex and Jarvis in Iron Man are great examples of Turing-quality (indistinguishable from human intelligence) AI that interact with the main character as both sidekicks and secondary minds. Think of it this way: you walk into a cocktail party. Your cyberbrain’s AI assist analyzes every face in the room and determines those most socially relevant to you. Using AR projected onto your optic implants, the AI highlights each person in your line of sight and, as you approach, provides a dossier of their main interests and personality type. Now apply this level of information access to anything else. Whether it’s grilling a steak or performing a heart transplant, AI assist with AR overlay will radically improve human functioning. When it is expected that most people will have an AI advisor at their side analyzing the situation and providing instructions through their implants, go ahead and count humanity another step closer to being transhuman.

4. Amazing Average Age: The ultimate objective of health care is that people live the longest, healthiest lives possible. Whether that happens due to nanotechnology or genetic engineering or synthetic organs is irrelevant. What matters is that eventually people will age more slowly, be healthier for a larger portion of their lives, and will be living beyond the age of 120. Our social understanding of aging will lose the “virtue of necessity” aspect and society will treat aging as a disease to be mitigated and managed. When the average expected life span exceeds 120, the conditions for transhuman longevity will have arrived.

5. Responsible Reproduction: Having children will be framed almost exclusively in the light of responsibility. Human reproduction is, at the moment, not generally worthy of the term “procreation.” Procreation implies planned creation and conscientious rearing of a new human life. As it stands, anyone with the necessary biological equipment can accidentally spawn a whelp and, save for extreme physical neglect, is free to all but abandon it to develop in an arbitrary and developmentally damaging fashion. Children – human beings as a whole – deserve better. Responsible reproduction will involve, first and foremost, better birth control for men and women. Abortions will be reserved for the rare accidental pregnancy and/or those that threaten the life of the mother. Those who do choose to reproduce will do so via assisted reproductive technologies (ARTs) ensuring pregnancy is quite deliberate. Furthermore, genetic modification, health screening, and, eventually synthetic wombs will enable the child with the best possibility of a good life to be born. Parental licensing may be part of the process; a liberalization of adoption and surrogate pregnancy laws certainly will be. When global births stabilize at replacement rates, ARTs are the preferred method of conception, and responsible child rearing is more highly valued than biological parenthood, we will be procreating as transhumans.

6. My Body, My Choice: Legalization and regulation will be based on somatic rights. Substances that are ingested – cogno enhancers, recreational drugs, steroids, nanotech – become both one’s right and responsibility. Actions such as abortion, assisted suicide, voluntary amputation, gender reassignment, surrogate pregnancy, body modification, legal unions among adults of any number, and consenting sexual practices would be protected under law. One’s genetic make-up, neurological composition, prosthetic augmentation, and other cybernetic modifications will be limited only by technology and one’s own discretion. Transhumanism cannot happen without a legal structure that allows individuals to control their own bodies. When bodily freedom is as protected and sanctified as free speech, transhumanism will be free to develop.

7. Persons, not People: Rights discourse will shift to personhood instead of common humanity. I have argued we’re already beginning to see a social shift towards this mentality. Using a scaled system based on traits like sentience, empathy, self-awareness, tool use, problem solving, social behaviors, language use, and abstract reasoning, animals (including humans) will be granted rights based on varying degrees of personhood. Personhood-based rights will protect against Gattaca scenarios while ensuring the rights of new forms of intelligence, be they alien, artificial, or animal, are protected. When African grey parrots, gorillas, and dolphins have the same rights as a human toddler, a transhuman friendly rights system will be in place.
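The scaled personhood system Munkittrick describes can be sketched as a toy program. To be clear, the trait list comes from the article, but the ratings, the equal weighting, and the tier thresholds below are entirely hypothetical, chosen only to show the idea of granting rights by degree rather than by species membership:

```python
# Toy sketch of a trait-scaled personhood score (all weights,
# ratings, and thresholds are invented for illustration).

TRAITS = ["sentience", "empathy", "self_awareness", "tool_use",
          "problem_solving", "social_behavior", "language_use",
          "abstract_reasoning"]

def personhood_score(ratings):
    """Average the 0.0-1.0 ratings for each trait into one score."""
    return sum(ratings.get(t, 0.0) for t in TRAITS) / len(TRAITS)

def rights_tier(score):
    """Map a score onto a graded rights tier - a sliding scale,
    not a yes/no species test."""
    if score >= 0.75:
        return "full personhood rights"
    if score >= 0.40:
        return "basic welfare and liberty protections"
    return "protection from cruelty"

# Hypothetical ratings for an African grey parrot.
grey_parrot = {"sentience": 0.8, "empathy": 0.5, "self_awareness": 0.5,
               "tool_use": 0.6, "problem_solving": 0.7, "social_behavior": 0.8,
               "language_use": 0.6, "abstract_reasoning": 0.4}
print(rights_tier(personhood_score(grey_parrot)))
```

The point of the sketch is only that the same function applies to any candidate mind, whether animal, artificial, or alien, which is what distinguishes a personhood-based rights system from one keyed to being human.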

Individually, each of these conditions is necessary but not sufficient for transhumanism to have been attained. Only as a whole are they sufficient for transhumanism to have been achieved. I make no claims as to how or when any or all of these conditions will be attained. If forced to guess, I would say all seven conditions will be attained over the course of the next two centuries, with conditions (3) and (4) being the furthest from attainment.
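The "necessary but not sufficient" logic above amounts to a conjunction over all seven conditions. A minimal sketch (the condition names are my shorthand for the article's seven indicators, and the statuses are invented placeholders):

```python
# The seven conditions as a checklist: transhumanism counts as
# "attained" only when every condition holds (a conjunction),
# never when any single one does.
conditions = {
    "prosthetics_preferred": False,
    "better_brains": False,
    "artificial_assistance": False,
    "amazing_average_age": False,
    "responsible_reproduction": False,
    "somatic_rights": False,
    "personhood_rights": False,
}

def transhumanism_attained(conds):
    """True only if all conditions are met."""
    return all(conds.values())

def progress(conds):
    """Fraction of conditions met - a way to tell whether we are
    moving towards or away from a transhuman future."""
    return sum(conds.values()) / len(conds)

print(transhumanism_attained(conditions), progress(conditions))
```

The `progress` fraction captures the article's closing thought: even while the conjunction is false, the ratio tells you which direction things are moving.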

Transhumanism is a long way from being attained. However, with these seven conditions in mind, we can at least determine if we are moving towards or away from a transhuman future.

Follow Kyle on his personal blog, Pop Bioethics, and on Facebook and Twitter.

Image of psychedelic human eye by Kate Whitley via dullhunk on Flickr Creative Commons.

This article can also be found at

A New Generation of Transhumanists Is Emerging by Zoltan Istvan

This Huffington Post article (A New Generation of Transhumanists Is Emerging by Zoltan Istvan) talks about transhumanism moving into the mainstream.  I have just one question; with so many reputable news agencies – and even governmental agencies (the NIC and the NCBI, to name just a couple) – reporting on transhumanism and the technological singularity, why is it that no one I talk to has heard about this movement?  This should be dinner conversation at every household table, but I’ve found that most people will dismiss it out of hand rather than even try to learn more!  I’m not a big fan of the bible (because I’ve actually read the whole thing), but the phrase “pearls before swine” just leaps into my thoughts when I consider this.  I mean, how otherwise intelligent people can choose to remain ignorant and in the dark on such important topics just boggles my mind.  Wake up, people!  (Not you, of course!  You’re good, I’m sure…)

A New Generation of Transhumanists Is Emerging

Posted: 03/10/2014 2:43 pm EDT Updated: 05/10/2014 5:59 am EDT

A new generation of transhumanists is emerging. You can feel it in handshakes at transhumanist meet-ups. You can see it when checking in to transhumanist groups in social media. You can read it in the hundreds of transhumanist-themed blogs. This is not the same bunch of older, mostly male academics that have slowly moved the movement forward during the last few decades. This is a dynamic group of younger people from varying backgrounds: Asians, Blacks, Middle Easterners, Caucasians, and Latinos. Many are females, some are LGBT, and others have disabilities. Many are atheist, while others are spiritual or even formally religious. Their politics run the gamut, from liberals to conservatives to anarchists. Their professions vary widely, from artists to physical laborers to programmers. Whatever their background, preferences, or professions, they have recently tripled the population of transhumanists in just the last 12 months.

“Three years ago, we had only around 400 members, but today we have over 10,000 members,” says Amanda Stoel, co-founder and chief administrator of Facebook group Singularity Network, one of the largest of hundreds of transhumanist-themed groups on the web.

Transhumanism is becoming so popular that even the comic strip Dilbert, which appears online and in 2000 newspapers, recently made jokes about it.

Despite its growing popularity, many people around the world still don’t know what “transhuman” means. Transhuman literally means beyond human. Transhumanists consist of life extensionists, techno-optimists, Singularitarians, biohackers, roboticists, AI proponents, and futurists who embrace radical science and technology to improve the human condition. The most important aim for many transhumanists is to overcome human mortality, a goal some believe is achievable by 2045.

Transhumanism has been around for nearly 30 years and was first heavily influenced by science fiction. Today, transhumanism is increasingly being influenced by actual science and technological innovation, much of it being created by people under the age of 40. It’s also become a very international movement, with many formal groups in dozens of countries.

Despite the movement’s growth, its potential is being challenged by some older transhumanists who snub the younger generation and their ideas. These old-school futurists dismiss activist philosophies and radicalism, and even prefer that some younger writers and speakers not have their voices heard. Additionally, transhumanism’s Wikipedia page — the most viewed online document of the movement — is protected by a vigilant posse, deleting additions or changes that don’t support a bland academic view of transhumanism.

Inevitably, this Wikipedia page misses the vibrancy and happenings of the burgeoning movement. The real status and information of transhumanism and its philosophies can be found in public transhumanist gatherings and festivities, in popular student groups like the Stanford University Transhumanist Association, and in social media where tens of thousands of scientists and technologists hang out and discuss the transhuman future.

Jet-setting personality Maria Konovalenko, a 29-year-old Russian molecular biophysicist whose public demonstrations supporting radical life extension have made international news, is a prime example.

“We must do more for transhumanism and life extension,” says Konovalenko, who serves as vice president of Moscow-based Science for Life Extension Foundation. “This is our lives and our futures we’re talking about. To sit back and just watch the 21st Century roll by will not accomplish our goals. We must take our message to the people in the streets and strive to make real change.”

Transhumanist celebrities like Konovalenko are changing the way the movement gets its message across to the public. Gauging by the rapidly increasing number of transhumanists, it’s working.

A primary goal of many transhumanists is to convince the public that embracing radical technology and science is in the species’ best interest. In a mostly religious world where much of society still believes in heavenly afterlives, some people are skeptical about whether significantly extending human lifespans is philosophically and morally correct. Transhumanists believe the more people that support transhumanism, the more private and government resources will end up in the hands of organizations and companies that aim to improve human lives and bring mortality to an end.


This article can also be found at

National Intelligence Council Predicts a “Very Transhuman Future by 2030”

The U.S. government agency known as the National Intelligence Council (NIC) released “a 140-page document that outlines major trends and technological developments we should expect in the next 20 years.”  The entire 140-page document can be read or downloaded at

U.S. spy agency predicts a very transhuman future by 2030


The National Intelligence Council has just released its much anticipated forecasting report, a 140-page document that outlines major trends and technological developments we should expect in the next 20 years. Among their many predictions, the NIC foresees the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing shortages in water, food and energy. But they also envision a future in which humans have been significantly modified by their technologies — heralding the dawn of the transhuman era.

This work brings to mind the National Science Foundation’s groundbreaking 2003 report, Converging Technologies for Improving Human Performance — a relatively early attempt to understand and predict how advanced biotechnologies would impact the human experience. The NIC’s new report, Global Trends 2030: Alternative Worlds, follows in the same tradition — namely one that doesn’t ignore the potential for enhancement technologies.

U.S. spy agency predicts a very transhuman future by 20301

In the new report, the NIC describes how implants, prosthetics, and powered exoskeletons will become regular fixtures of human life — which could result in substantial improvements to innate human capacities. By 2030, the authors predict, prosthetics should reach the point where they’re just as good — or even better — than organic limbs. By this stage, the military will increasingly rely on exoskeletons to help soldiers carry heavy loads. Servicemen will also be administered psychostimulants to help them remain active for longer periods.

Many of these same technologies will also be used by the elderly, both as a way to maintain more youthful levels of strength and energy, and as a part of their life extension strategies.

Brain implants will also allow for advanced neural interface devices — which will bridge the gap between minds and machines. These technologies will allow for brain-controlled prosthetics, some of which may be able to provide “superhuman” abilities like enhanced strength, speed — and completely new functionality altogether.

Other mods will include retinal eye implants to enable night vision and other previously inaccessible light spectrums. Advanced neuropharmaceuticals will allow for vastly improved working memory, attention, and speed of thought.

“Augmented reality systems can provide enhanced experiences of real-world situations,” the report notes, “Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator.”

But as the report notes, many of these technologies will only be available to those who are able to afford them. The authors warn that it could result in a two-tiered society comprising enhanced and nonenhanced persons, a dynamic that would likely require government oversight and regulation.

Smartly, the report also cautions that these technologies will need to be secure. Developers will be increasingly challenged to prevent hackers from interfering with these devices.

Lastly, other technologies and scientific disciplines will have to keep pace to make much of this work. For example, longer-lasting batteries will improve the practicality of exoskeletons. Progress in the neurosciences will be critical for the development of future brain-machine interfaces. And advances in flexible biocompatible electronics will enable improved integration with cybernetic implants.

The entire report can be read here.

Image: Bruce Rolff/shutterstock.

This article can also be found on io9 at

Transhumanism, medical technology and slippery slopes from the NCBI

This article (Transhumanism, medical technology and slippery slopes from the NCBI) explores transhumanism in the medical industry.  I thought it was a bit negatively biased, but the sources are good and disagreement doesn’t equate to invalidation in my book, so here it is…


In this article, transhumanism is considered to be a quasi‐medical ideology that seeks to promote a variety of therapeutic and human‐enhancing aims. Moderate conceptions are distinguished from strong conceptions of transhumanism and the strong conceptions were found to be more problematic than the moderate ones. A particular critique of Boström’s defence of transhumanism is presented. Various forms of slippery slope arguments that may be used for and against transhumanism are discussed and one particular criticism, moral arbitrariness, that undermines both weak and strong transhumanism is highlighted.

No less a figure than Francis Fukuyama1 recently labelled transhumanism as “the world’s most dangerous idea”. Such an eye‐catching condemnation almost certainly denotes an issue worthy of serious consideration, especially given the centrality of biomedical technology to its aims. In this article, we consider transhumanism as an ideology that seeks to evangelise its human‐enhancing aims. Given that transhumanism covers a broad range of ideas, we distinguish moderate conceptions from strong ones and find the strong conceptions more problematic than the moderate ones. We also offer a critique of Boström’s2 position published in this journal. We discuss various forms of slippery slope arguments that may be used for and against transhumanism and highlight one particular criticism, moral arbitrariness, which undermines both forms of transhumanism.

What is transhumanism?

At the beginning of the 21st century, we find ourselves in strange times; facts and fantasy find their way together in ethics, medicine and philosophy journals and websites.2,3,4 Key sites of contestation include the very idea of human nature, the place of embodiment within medical ethics and, more specifically, the systematic reflections on the place of medical and other technologies in conceptions of the good life. A reflection of this situation is captured by Dyens5 who writes,

What we are witnessing today is the very convergence of environments, systems, bodies, and ontology toward and into the intelligent matter. We can no longer speak of the human condition or even of the posthuman condition. We must now refer to the intelligent condition.

We wish to evaluate the contents of such dialogue and to discuss, if not the death of human nature, then at least its dislocation and derogation in the thinkers who label themselves transhumanists.

One difficulty for critics of transhumanism is that a wide range of views fall under its label.6 Not merely are there idiosyncrasies of individual academics, but there does not seem to exist an absolutely agreed on definition of transhumanism. One can find not only substantial differences between key authors2,3,4,7,8 and the disparate disciplinary nuances of their exhortations, but also subtle variations of its chief representatives in the offerings of people. It is to be expected that any ideology transforms over time and not least of all in response to internal and external criticism. Yet, the transhumanism critic faces a further problem of identifying a robust target that stays still sufficiently long to locate it properly in these web‐driven days without constructing a “straw man” to knock over with the slightest philosophical breeze. For the purposes of targeting a sufficiently substantial target, we identify the writings of one of its clearest and intellectually robust proponents, the Oxford philosopher and cofounder of the World Transhumanist Association, Nick Boström,2 who has written recently in these pages of transhumanism’s desire to make good the “half‐baked” project3 that is human nature.

Before specifically evaluating Boström’s position, it is best first to offer a global definition for transhumanism and then to locate it among the range of views that fall under the heading. One of the most celebrated advocates of transhumanism is Max More, whose website reads “no more gods, no more faith, no more timid holding back. The future belongs to posthumanity”.8 We will have a clearer idea then of the kinds of position transhumanism stands in direct opposition to. Specifically, More8 asserts,

“Transhumanism” is a blanket term given to the school of thought that refuses to accept traditional human limitations such as death, disease and other biological frailties. Transhumans are typically interested in a variety of futurist topics, including space migration, mind uploading and cryonic suspension. Transhumans are also extremely interested in more immediate subjects such as bio‐ and nano‐technology, computers and neurology. Transhumans deplore the standard paradigms that attempt to render our world comfortable at the sake of human fulfilment.8

Strong transhumanism advocates see themselves engaged in a project, the purpose of which is to overcome the limits of human nature. Whether this is the foundational claim, or merely the central claim, is not clear. These limitations—one may describe them simply as features of human nature, as the idea of labelling them as limitations is itself to take up a negative stance towards them—concern appearance, human sensory capacities, intelligence, lifespan and vulnerability to harm. According to the extreme transhumanism programme, technology can be used to vastly enhance a person’s intelligence; to tailor their appearance to what they desire; to lengthen their lifespan, perhaps to immortality; and to reduce vastly their vulnerability to harm. This can be done by exploitation of various kinds of technology, including genetic engineering, cybernetics, computation and nanotechnology. Whether technology will continue to progress sufficiently, and sufficiently predictably, is of course quite another matter.

Advocates of transhumanism argue that recruitment or deployment of these various types of technology can produce people who are intelligent and immortal, but who are not members of the species Homo sapiens. Their species type will be ambiguous—for example, if they are cyborgs (part human, part machine)—or, if they are wholly machines, they will lack any common genetic features with human beings. A legion of labels covers this possibility; we find in Dyens’5 recently translated book a variety of cultural bodies, perhaps the most extreme being cyberpunks:

…a profound misalignment between existence and its manifestation. This misalignment produces bodies so transformed, so dissociated, and so asynchronized, that their only outcome is gross mutation. Cyberpunk bodies are horrible, strange and mysterious (think of Alien, Robocop, Terminator, etc.), for they have no real attachment to any biological structure. (p 75)

Perhaps a reasonable claim is encapsulated in the idea that such entities will be posthuman. The extent to which posthuman might be synonymous with transhumanism is not clear. Extreme transhumanists strongly support such developments.

At the other end of transhumanism is a much less radical project, which is simply the project to use technology to enhance human characteristics—for example, beauty, lifespan and resistance to disease. In this less extreme project, there is no necessary aspiration to shed human nature or human genetic constitution, just to augment it with technology where possible and where desired by the person.

Who is for transhumanism?

At present it seems to be a movement based mostly in North America, although there are some adherents from the UK. Among its most intellectually sophisticated proponents is Nick Boström. Perhaps the most outspoken supporters of transhumanism are people who see it simply as an issue of free choice. It may simply be the case that moderate transhumanists are libertarians at the core. In that case, transhumanism merely supplies an overt technological dimension to libertarianism. If certain technological developments are possible and they, as competent choosers, desire them, then they should not be prevented from acquiring those enhancements. One obvious line of criticism here may be in relation to the inequality that necessarily arises with respect to scarce goods and services distributed by market mechanisms.9 We will elaborate this point in the Transhumanism and slippery slopes section.

So, one group of people for the transhumanism project sees it simply as a way of improving their own life by their own standards of what counts as an improvement. For example, they may choose to purchase an intervention, which will make them more intelligent or even extend their life by 200 years. (Of course it is not self‐evident that everyone would regard this as an improvement.) A less vociferous group sees the transhumanism project as not so much bound to the expansion of autonomy (notwithstanding our criticism that will necessarily be effected only in the sphere of economic consumer choice) as one that has the potential to improve the quality of life for humans in general. For this group, the relationship between transhumanism and the general good is what makes transhumanism worthy of support. For the other group, the worth of transhumanism is in its connection with their own conception of what is good for them, with the extension of their personal life choices.

What can be said in its favour?

Of the many points for transhumanism, we note three. Firstly, transhumanism seems to facilitate two aims that have commanded much support. The use of technology to improve humans is something we pretty much take for granted. Much good has been achieved with low‐level technology in the promotion of public health. The construction of sewage systems, clean water supplies, etc, is all work to facilitate this aim and is surely good work, work which aims at, and in this case achieves, a good. Moreover, a large portion of the modern biomedical enterprise is another example of a project that aims at generating this good too.

Secondly, proponents of transhumanism say it presents an opportunity to plan the future development of human beings, the species Homo sapiens. Instead of this being left to the evolutionary process and its exploitation of random mutations, transhumanism presents a hitherto unavailable option: tailoring the development of human beings to an ideal blueprint. Precisely whose ideal gets blueprinted is a point that we deal with later.

Thirdly, in the spirit of work in ethics that makes use of a technical idea of personhood, the view that moral status is independent of membership of a particular species (or indeed any biological species), transhumanism presents a way in which moral status can be shown to be bound to intellectual capacity rather than to human embodiment as such or human vulnerability in the capacity of embodiment (Harris, 1985).9a

What can be said against it?

Critics point to consequences of transhumanism, which they find unpalatable. One possible consequence feared by some commentators is that, in effect, transhumanism will lead to the existence of two distinct types of being, the human and the posthuman. The human may be incapable of breeding with the posthuman and will be seen as having a much lower moral standing. Given that, as Buchanan et al9 note, much moral progress, in the West at least, is founded on the category of the human in terms of rights claims, if we no longer have a common humanity, what rights, if any, ought to be enjoyed by transhumans? This can be viewed either as a criticism (we poor humans are no longer at the top of the evolutionary tree) or simply as a critical concern that invites further argumentation. We shall return to this idea in the final section, by way of identifying a deeper problem with the open‐endedness of transhumanism that builds on this recognition.

In the same vein, critics may argue that transhumanism will increase inequalities between the rich and the poor. The rich can afford to make use of transhumanism, but the poor will not be able to. Indeed, we may come to think of such people as deficient, failing to achieve a new heightened level of normal functioning.9 In the opposing direction, critical observers may say that transhumanism is, in reality, an irrelevance, as very few will be able to use the technological developments even if they ever manifest themselves. A further possibility is that transhumanism could lead to the extinction of humans and posthumans, for things are just as likely to turn out for the worse as for the better (a point pressed by advocates of the precautionary principle).

One of the deeper philosophical objections comes from a very traditional source. Like all such utopian visions, transhumanism rests on some conception of good. So just as humanism is founded on the idea that humans are the measure of all things and that their fulfilment is to be found in the powers of reason extolled and extended in culture and education, so too transhumanism has a vision of the good, albeit one loosely shared. For one group of transhumanists, the good is the expansion of personal choice. Given that autonomy is so widely valued, why not remove the barriers to enhanced autonomy by various technological interventions? Theological critics especially, but not exclusively, object to what they see as the imperialising of autonomy. Elshtain10 lists the three c's: choice, consent and control. These, she asserts, are the dominant motifs of modern American culture. And there is, of course, an army of communitarians (Bellah et al,10a MacIntyre,10b Sandel,10c Taylor10d and Walzer10e) ready to provide support in general moral and political matters to this line of criticism. One extension of this line of transhumanist thinking is to align the valorisation of autonomy with economic rationality, for we may as well be motivated by economic concerns as by moral ones where the market is concerned. As noted earlier, only a small minority may be able to access this technology (despite Boström's naive disclaimer for democratic transhumanism), so the technology necessary for transhumanist transformations is unlikely to be prioritised in the context of artificially scarce public health resources. One other population attracted to transhumanism will be the elite sports world, fuelled by the media commercialisation complex—where mere mortals will get no more than a glimpse of the transhuman in competitive physical contexts. There may be something of a double-binding character to this consumerism. The poor, at once removed from the possibility of such augmentation, pay (per view) for the pleasure of their envy.

Even if we argue that the good cannot be equated with what people choose simpliciter, it does not follow that we need to reject the requisite medical technology outright. Against the more moderate transhumanists, who see transhumanism as an opportunity to enhance the general quality of life for humans, it is nevertheless true that their position presupposes some conception of the good. What kinds of traits are best engineered into humans: disease resistance or parabolic hearing? And unsurprisingly, transhumanists disagree about precisely what "objective goods" to select for installation into humans or posthumans.

Some radical critics of transhumanism see it as a threat to morality itself.1,11 This is because they see morality as necessarily connected to the kind of vulnerability that accompanies human nature. Think of the idea of human rights and the power this has had in voicing concern about the plight of especially vulnerable human beings. As noted earlier, a transhuman may be thought to be beyond humanity, neither enjoying its rights nor bearing its obligations. Why would a transhuman be moved by appeals to human solidarity? Once the prospect of posthumanism emerges, the whole of morality is thus threatened because the existence of human nature itself is under threat.

One further objection voiced by Habermas11 is that interfering with the process of human conception, and by implication human constitution, deprives humans of the “naturalness which so far has been a part of the taken‐for‐granted background of our self‐understanding as a species” and “Getting used to having human life biotechnologically at the disposal of our contingent preferences cannot help but change our normative self‐understanding” (p 72).

On this account, our self‐understanding would include, for example, our essential vulnerability to disease, ageing and death. Suppose the strong transhumanism project is realised. We are no longer thus vulnerable: immortality is a real prospect. Nevertheless, conceptual caution must be exercised here—even transhumanists will be susceptible in the manner that Hobbes12 noted. Even the strongest are vulnerable in their sleep. But the kind of vulnerability transhumanism seeks to overcome is of the internal kind (not Hobbes's external threats). We are reminded of Woody Allen's famous remark that he wanted to become immortal, not by doing great deeds but simply by not dying. This will result in a radical change in our self‐understanding, which has inescapably normative elements to it that need to be challenged. Most radically, this change in self‐understanding may take the form of a change in what we view as a good life. Hitherto, a human life would have been assumed to be finite. Transhumanists suggest that even now this may change with appropriate technology and the "right" motivation.

Do the changes in self‐understanding presented by transhumanists (and genetic manipulation) necessarily have to represent a change for the worse? As discussed earlier, it may be that the technology that generates the possibility of transhumanism can be used for the good of humans—for example, to promote immunity to disease or to increase quality of life. Is there really an intrinsic connection between acquisition of the capacity to bring about transhumanism and moral decline? Perhaps Habermas’s point is that moral decline is simply more likely to occur once radical enhancement technologies are adopted as a practice that is not intrinsically evil or morally objectionable. But how can this be known in advance? This raises the spectre of slippery slope arguments.

But before we discuss such slopes, let us note that the kind of approach (whether characterised as closed‐minded or sceptical) Boström seems to dislike is one he calls speculative. He dismisses as speculative the idea that offspring may think themselves lesser beings, commodifications of their parents’ egoistic desires (or some such). None the less, having pointed out the lack of epistemological standing of such speculation, he invites us to his own apparently more congenial position:

We might speculate, instead, that germ‐line enhancements will lead to more love and parental dedication. Some mothers and fathers might find it easier to love a child who, thanks to enhancements, is bright, beautiful, healthy, and happy. The practice of germ‐line enhancement might lead to better treatment of people with disabilities, because a general demystification of the genetic contributions to human traits could make it clearer that people with disabilities are not to blame for their disabilities and a decreased incidence of some disabilities could lead to more assistance being available for the remaining affected people to enable them to live full, unrestricted lives through various technological and social supports. Speculating about possible psychological or cultural effects of germ‐line engineering can therefore cut both ways. Good consequences no less than bad ones are possible. In the absence of sound arguments for the view that the negative consequences would predominate, such speculations provide no reason against moving forward with the technology. Ruminations over hypothetical side effects may serve to make us aware of things that could go wrong so that we can be on the lookout for untoward developments. By being aware of the perils in advance, we will be in a better position to take preventive countermeasures. (Boström, 2003, p 498)

Following Boström’s3 speculation then, what grounds for hope exist? Beyond speculation, what kinds of arguments does Boström offer? Well, most people may think that the burden of proof should fall to the transhumanists. Not so, according to Boström. Assuming the likely enormous benefits, he turns the tables on this intuition—not by argument but by skilful rhetorical speculation. We quote for accuracy of representation (emphasis added):

Only after a fair comparison of the risks with the likely positive consequences can any conclusion based on a cost‐benefit analysis be reached. In the case of germ‐line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. By contrast, uncovering subtle and non‐trivial ways in which manipulating our genome could undermine deep values is philosophically a lot more challenging. But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. To achieve a significant enhancement of human capacities would be to embark on the transhuman journey of exploration of some of the modes of being that are not accessible to us as we are currently constituted, possibly to discover and to instantiate important new values. On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor. (Boström,3 pp 498–9)

Now one way in which such a balance of reason may be had is in the idea of a slippery slope argument. We now turn to that.

Transhumanism and slippery slopes

A proper assessment of transhumanism requires consideration of the objection that acceptance of the main claims of transhumanism will place us on a slippery slope. Yet, paradoxically, both proponents and detractors of transhumanism may exploit slippery slope arguments in support of their position. It is necessary therefore to set out the various arguments that fall under this title so that we can better characterise arguments for and against transhumanism. We shall therefore examine three such attempts13,14,15 but argue that the arbitrary slippery slope15 may undermine all versions of transhumanism, although not every enhancement proposed by its advocates.

Schauer13 offers the following essentialist analysis of slippery slope arguments. A “pure” slippery slope is one where a “particular act, seemingly innocuous when taken in isolation, may yet lead to a future host of similar but increasingly pernicious events”. Abortion and euthanasia are classic candidates for slippery slope arguments in public discussion and policy making. Against this, however, there is no reason to suppose that the future events (acts or policies) down the slope need to display similarities—indeed we may propose that they will lead to a whole range of different, although equally unwished for, consequences. The vast array of enhancements proposed by transhumanists would not be captured under this conception of a slippery slope because of their heterogeneity. Moreover, as Sternglantz16 notes, Schauer undermines his case when arguing that greater linguistic precision undermines the slippery slope and that indirect consequences often bolster slippery slope arguments. It is as if the slippery slopes would cease in a world with greater linguistic precision or when applied only to direct consequences. These views do not find support in the later literature. Schauer does, however, identify three non‐slippery slope arguments where the advocate’s aim is (a) to show that the bottom of a proposed slope has been arrived at; (b) to show that a principle is excessively broad; (c) to highlight how granting authority to X will make it more likely that an undesirable outcome will be achieved. Clearly (a) cannot properly be called a slippery slope argument in itself, while (b) and (c) often have some role in slippery slope arguments.

The excessive breadth principle can be subsumed under Bernard Williams’s distinction between slippery slope arguments with (a) horrible results and (b) arbitrary results. According to Williams, the nature of the bottom of the slope allows us to determine which category a particular argument falls under. Clearly, the most common form is the slippery slope to a horrible result argument. Walton14 goes further in distinguishing three types: (a) thin end of the wedge or precedent arguments; (b) Sorites arguments; and (c) domino‐effect arguments. Importantly, these arguments may be used both by antagonists and also by advocates of transhumanism. We shall consider the advocates of transhumanism first.

In the thin end of the wedge slippery slopes, allowing P will set a precedent that will allow further precedents (Pn) taken to an unspecified problematic terminus. Is it necessary that the end point has to be bad? Of course this is the typical linguistic meaning of the phrase "slippery slopes". Nevertheless, we may turn the tables here and argue that slopes may be viewed positively too.17 Perhaps a new phrase will be required to capture ineluctable slides (ascents?) to such end points. This would be somewhat analogous to the ideas of vicious and virtuous cycles. So transhumanists could argue that, once the artificial generation of life through technologies of in vitro fertilisation was thought permissible, the slope was foreseeable, and transhumanists are doing no more than extending that life‐creating and fashioning impulse.

In Sorites arguments, the inability to draw clear distinctions has the effect that allowing P will not allow us to consistently deny Pn. This slope follows the form of the Sorites paradox, where taking a grain of sand from a heap does not prevent our recognising or describing the heap as such, even though it is not identical with its former state. At the heart of the problem with such arguments is the idea of conceptual vagueness. Yet the logical distinctions used by philosophers are often inapplicable in the real world.15,18 Transhumanists may well seize on this vagueness and apply a Sorites argument as follows: as therapeutic interventions are currently morally permissible, and there is no clear distinction between treatment and enhancement, enhancement interventions are morally permissible too. They may ask whether we can really distinguish categorically between the added functionality of certain prosthetic devices and sonar senses.

In domino‐effect arguments, the domino conception of the slippery slope, we have what others often refer to as a causal slippery slope.19 Once P is allowed, a causal chain will be effected allowing Pn and so on to follow, which will precipitate increasingly bad consequences.
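Walton's three forms can be sketched schematically. The following formalization is ours, offered only as an illustration of how the three slopes differ in logical shape; it is not part of Walton's own taxonomy:

```latex
% Precedent (thin end of the wedge): each permission licenses the next
\mathrm{Permit}(P_1) \Rightarrow \mathrm{Permit}(P_2) \Rightarrow \cdots \Rightarrow \mathrm{Permit}(P_n)

% Sorites (vagueness): no principled cut between adjacent cases, so
% permitting the first case seems to commit us to permitting the last
\neg\exists i \,\bigl(P_i \text{ is relevantly distinguishable from } P_{i+1}\bigr)
\;\;\therefore\;\; \mathrm{Permit}(P_1) \rightarrow \mathrm{Permit}(P_n)

% Domino (causal): allowing P sets off a causal chain to a bad outcome
\mathrm{Allow}(P) \leadsto Q_1 \leadsto Q_2 \leadsto \cdots \leadsto Q_{\mathrm{bad}}
```

Note the differences in kind: the precedent slope trades on normative consistency, the Sorites slope on conceptual vagueness, and the domino slope on empirical causation.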

In what ways can slippery slope arguments be used against transhumanism? What is wrong with transhumanism? Or, better, is there a point at which we can say transhumanism is objectionable? One particular strategy adopted by proponents of transhumanism falls clearly under the aspect of the thin end of the wedge conception of the slippery slope. Although some aspects of their ideology seem aimed at unqualified goods, there seems to be no limit to the aspirations of transhumanism as they cite the powers of other animals and substances as potential modifications for the transhumanist. Although we can admire the sonic capacities of the bat, the elastic strength of lizards' tongues and the endurability of Kevlar in contrast with traditional construction materials used in the body, their transplantation into humans is, to borrow Kass's celebrated label, "repugnant" (Kass, 1997).19a

Although not all transhumanists would support such extreme enhancements (if that is indeed what they are), less radical advocates use justifications that are based on therapeutic lines up front with the more Promethean aims less explicitly advertised. We can find many examples of this manoeuvre. Take, for example, the Cognitive Enhancement Research Institute in California. Prominently displayed on its website front page, we read, "Do you know somebody with Alzheimer's disease? Click to see the latest research breakthrough." The mode is simple: treatment by front entrance, enhancement by the back door. Borgmann,20 in his discussion of the uses of technology in modern society, observed precisely this argumentative strategy more than 20 years ago:

The main goal of these programs seems to be the domination of nature. But we must be more precise. The desire to dominate does not just spring from a lust of power, from sheer human imperialism. It is from the start connected with the aim of liberating humanity from disease, hunger, and toil and enriching life with learning, art and athletics.

Who would want to deny the powers of viral diseases that can be genetically treated? Would we want to draw the line at the transplantation of non‐human capacities (sonar path finding)? Or at an in vivo fibre‐optic communications backbone or anti‐degeneration powers? (These would have to be non‐human by hypothesis.) Or should we consider the scope of technological enhancements that one chief transhumanist, Natasha Vita More,21 propounds:

A transhuman is an evolutionary stage from being exclusively biological to becoming post‐biological. Post‐biological means a continuous shedding of our biology and merging with machines. (…) The body, as we transform ourselves over time, will take on different types of appearances and designs and materials. (…)

For hiking a mountain, I’d like extended leg strength, stamina, a skin‐sheath to protect me from damaging environmental aspects, self‐moisturizing, cool‐down capability, extended hearing and augmented vision (Network of sonar sensors depicts data through solid mass and map images onto visual field. Overlay window shifts spectrum frequencies. Visual scratch pad relays mental ideas to visual recognition bots. Global Satellite interface at micro‐zoom range).

For a party, I’d like an eclectic look ‐ a glistening bronze skin with emerald green highlights, enhanced height to tower above other people, a sophisticated internal sound system so that I could alter the music to suit my own taste, memory enhance device, emotional‐select for feel‐good people so I wouldn’t get dragged into anyone’s inappropriate conversations. And parabolic hearing so that I could listen in on conversations across the room if the one I was currently in started winding down.

Notwithstanding the difficulty of bringing together transhumanism under one movement, the sheer variety of proposals merely contained within Vita More’s catalogue means that we cannot determinately point to a precise station at which we can say, “Here, this is the end we said things would naturally progress to.” But does this pose a problem? Well, it certainly makes it difficult to specify exactly a “horrible result” that is supposed to be at the bottom of the slope. Equally, it is extremely difficult to say that if we allow precedent X, it will allow practices Y or Z to follow as it is not clear how these practices Y or Z are (if at all) connected with the precedent X. So it is not clear that a form of precedent‐setting slippery slope can be strictly used in every case against transhumanism, although it may be applicable in some.

Nevertheless, we contend, in contrast with Boström, that the burden of proof would fall to the transhumanist. Consider in this light a Sorites‐type slope. The transhumanist would have to show how the relationship between the therapeutic practices and the enhancements is indeed transitive. We know night from day without being able to specify exactly when this occurs. So simply because we cannot determine a precise distinction between, say, genetic treatments G1, G2 and G3, and transhumanism enhancements T1, T2 and so on, it does not follow that there are no important moral distinctions between G1 and T20. According to Williams,15 this kind of indeterminacy arises because of the conceptual vagueness of certain terms. Yet, the indeterminacy of so open a predicate as "heap" is not equally true of "therapy" or "enhancement". The latitude they permit is nowhere near so wide.
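The logical point here can be made compact. As an illustration of our own (the notation is not Williams's): let x ≈ y mean "no morally significant line can be drawn between x and y". The Sorites slide tacitly assumes this relation is transitive, but, as with the heap, it is not:

```latex
G_1 \approx G_2 \approx G_3 \approx T_1 \approx T_2 \approx \cdots \approx T_{20}
\quad\not\Rightarrow\quad G_1 \approx T_{20}
```

Pairwise indiscriminability between adjacent interventions does not entail moral equivalence between the endpoints, so the transhumanist cannot move from "therapy is permissible" to "radical enhancement is permissible" on vagueness alone.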

Instead of objecting to Pn on the grounds that Pn is morally objectionable (ie, to depict a horrible result), we may instead, after Williams, object that the slide from P to Pn is simply morally arbitrary, when it ought not to be. Here, we may say, without specifying a horrible result, that it would be difficult to know what, in principle, can ever be objected to. And this is, quite literally, what is troublesome. It seems to us that this criticism applies to all categories of transhumanism, although not necessarily to all enhancements proposed by them. Clearly, the somewhat loose identity of the movement—and the variations between strong and moderate versions—makes it difficult to sustain this argument unequivocally. Still the transhumanist may be justified in asking, “What is wrong with arbitrariness?” Let us consider one brief example. In aspects of our lives, as a widely shared intuition, we may think that in the absence of good reasons, we ought not to discriminate among people arbitrarily. Healthcare may be considered to be precisely one such case. Given the ever‐increasing demand for public healthcare services and products, it may be argued that access to them typically ought to be governed by publicly disputable criteria such as clinical need or potential benefit, as opposed to individual choices of an arbitrary or subjective nature. And nothing in transhumanism seems to allow for such objective dispute, let alone prioritisation. Of course, transhumanists such as More find no such disquietude. His phrase “No more timidity” is a typical token of transhumanist slogans. We applaud advances in therapeutic medical technologies such as those from new genetically based organ regeneration to more familiar prosthetic devices. Here the ends of the interventions are clearly medically defined and the means regulated closely. This is what prevents transhumanists from adopting a Sorites‐type slippery slope. 
But in the absence of a telos, of clearly and substantively specified ends (beyond the mere banner of enhancement), we suggest that the public, medical professionals and bioethicists alike ought to resist the potentially open‐ended transformations of human nature. For if all transformations are in principle enhancements, then surely none are. The very application of the word may become redundant. Thus it seems that one strong argument against transhumanism generally—the arbitrary slippery slope—presents a challenge to transhumanism, to show that all of what are described as transhumanist enhancements are imbued with positive normative force and are not merely technological extensions of libertarianism, whose conception of the good is merely an extension of individual choice and consumption.

Limits of transhumanist arguments for medical technology and practice

Already, we have seen the misuse of a host of therapeutically designed drugs used by non‐therapeutic populations for enhancements. Consider the non‐therapeutic use of human growth hormone in non‐clinical populations. Such is the present perception of height as a positional good in society that Cuttler et al22 report that the proportion of doctors who recommended human growth hormone treatment of short non‐growth hormone deficient children ranged from 1% to 74%. This is despite its contrary indication in professional literature, such as that of the Pediatric Endocrine Society, and considerable doubt about its efficacy.23,24 Moreover, evidence supports the view that recreational body builders will use the technology, given the evidence of their use or misuse of steroids and other biotechnological products.25,26 Finally, in the sphere of elite sport, which so valorises embodied capacities that may be found elsewhere in greater degree, precision and sophistication in the animal kingdom or in the computer laboratory, biomedical enhancers may latch onto the genetically determined capacities and adopt or adapt them for their own commercially driven ends.

The arguments and examples presented here do no more than warn us of enhancement ideologies, such as transhumanism, which seek to predicate their futuristic agendas on the bedrock of medical technological progress aimed at therapeutic ends, secondarily extended to loosely defined enhancement ends. In discussion and in bioethical literatures, the future of genetic engineering is often challenged by slippery slope arguments that lead policy and practice to a horrible result. Instead of pointing to the undesirability of the ends to which transhumanism leads, we have pointed out the failure to specify their telos beyond the slogans of "overcoming timidity" or Boström's3 exhortation that the passive acceptance of ageing is an example of "reckless and dangerous barriers to urgently needed action in the biomedical sphere".

We propose that greater care be taken to distinguish the slippery slope arguments that are used in the emotionally loaded exhortations of transhumanism to come to a more judicious perspective on the technologically driven agenda for biomedical enhancement. Perhaps we would do better to consider those other all‐too‐human frailties such as violent aggression, wanton self‐harming and so on, before we turn too readily to the richer imaginations of biomedical technologists.


Competing interests: None.


1. Fukuyama F. Transhumanism. Foreign Policy 2004;124:42–44.
2. Boström N. The fable of the dragon tyrant. J Med Ethics 2005;31:231–237.
3. Boström N. Human genetic enhancements: a transhumanist perspective. J Value Inquiry 2004;37:493–506.
4. Boström N. Transhumanist values. http://www.nickbostrom.com/ethics/values.html (accessed 19 May 2005).
5. Dyens O. Metal and flesh: the evolution of man: technology takes over. Bibbee EJ, trans. London: MIT Press, 2001.
6. World Transhumanist Association (accessed 7 Apr 2006).
7. More M. Transhumanism: towards a futurist philosophy. 1996 (accessed 20 Jul 2005).
8. More M. 2005 (accessed 13 Jul 2005).
9. Buchanan A, Brock DW, Daniels N, et al. From chance to choice: genetics and justice. Cambridge: Cambridge University Press, 2000.
9a. Harris J. The value of life. London: Routledge, 1985.
10. Elshtain B. The body and the quest for control. In: Is human nature obsolete? Cambridge, MA: MIT Press, 2004:155–174.
10a. Bellah RN, et al. Habits of the heart: individualism and commitment in American life. Berkeley: University of California Press, 1996.
10b. MacIntyre AC. After virtue. 2nd ed. London: Duckworth, 1985.
10c. Sandel M. Liberalism and the limits of justice. Cambridge: Cambridge University Press, 1982.
10d. Taylor C. The ethics of authenticity. Boston: Harvard University Press, 1982.
10e. Walzer M. Spheres of justice. New York: Basic Books, 1983.
11. Habermas J. The future of human nature. Cambridge: Polity, 2003.
12. Hobbes T. Leviathan. Oakeshott M, ed. London: MacMillan, 1962.
13. Schauer F. Slippery slopes. Harvard Law Rev 1985;99:361–383.
14. Walton DN. Slippery slope arguments. Oxford: Clarendon, 1992.
15. Williams BAO. Which slopes are slippery? In: Lockwood M, ed. Making sense of humanity. Cambridge: Cambridge University Press, 1995:213–223.
16. Sternglantz R. Raining on the parade of horribles: of slippery slopes, faux slopes, and Justice Scalia’s dissent in Lawrence v Texas. Univ Pa Law Rev 2005;153:1097–1120.
17. Schubert L. Ethical implications of pharmacogenetics: do slippery slope arguments matter? Bioethics 2004;18:361–378.
18. Lamb D. Down the slippery slope. London: Croom Helm, 1988.
19. Den Hartogh G. The slippery slope argument. In: Kuhse H, Singer P, eds. Companion to bioethics. Oxford: Blackwell, 2005:280–290.
19a. Kass L. The wisdom of repugnance. New Republic 1997 Jun 2:17–26.
20. Borgmann A. Technology and the character of contemporary life. Chicago: University of Chicago Press, 1984.
21. Vita-More N. Who are transhumans? 2000 (accessed 7 Apr 2006).
22. Cuttler L, Silvers JB, Singh J, et al. Short stature and growth hormone therapy: a national study of physician recommendation patterns. JAMA 1996;276:531–537.
23. Vance ML, Mauras N. Growth hormone therapy in adults and children. N Engl J Med 1999;341:1206–1216.
24. Anon. Guidelines for the use of growth hormone in children with short stature: a report by the Drug and Therapeutics Committee of the Lawson Wilkins Pediatric Endocrine Society. J Pediatr 1995;127:857–867.
25. Grace F, Baker JS, Davies B. Anabolic androgenic steroid (AAS) use in recreational gym users. J Subst Use 2001;6:189–195.
26. Grace F, Baker JS, Davies B. Blood pressure and rate pressure product response in males using high-dose anabolic androgenic steroids (AAS). J Sci Med Sport 2003;6:307–312.

Articles from Journal of Medical Ethics are provided here courtesy of BMJ Group

This article can also be found on the National Center for Biotechnology Information (NCBI) website at

Humans 2.0 with Jason Silva

This is one of the Shots of Awe videos created by Jason Silva.  It’s called HUMAN 2.0.  I don’t think a description is in order here since all the Shots of Awe videos are short and sweet.

Runtime: 2:15

Video Info:

Published on Dec 2, 2014

“Your trivial-seeming self tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process.” – Ethan Zuckerman paraphrasing Kevin Kelly

Steven Johnson
“Chance favors the connected mind.”…

Additional footage courtesy of Monstro Design and

For more information on Norton security, please go here:

Join Jason Silva every week as he freestyles his way into the complex systems of society, technology and human existence and discusses the truth and beauty of science in a form of existential jazz. New episodes every Tuesday.

Watch More Shots of Awe on TestTube

Subscribe now!…

Jason Silva on Twitter

Jason Silva on Facebook

Jason Silva on Google+…

This video can also be found at

Singularity Timeline

Now here is a link to a cool website!  This is a future timeline for the singularity.  It’s pretty much all speculation, but speculation based on (mostly) sound science.

Due to the structure of the website, I’m just adding a link for this one.  You’ll see why…

Check out

How Much Longer Before Our First AI Catastrophe by George Dvorsky

This is an article called How Much Longer Before Our First AI Catastrophe?  Pretty pessimistic-sounding title, right?  It’s actually not a bad article.  The primary focus is not on strong AI, as you might assume, but on weak AI.  My philosophy is the same as always on this one: be aware and be smart; fear will only cause problems.

How Much Longer Before Our First AI Catastrophe?


What will happen in the days after the birth of the first true artificial intelligence? If things continue apace, this could prove to be the most dangerous time in human history. It will be an era of weak and narrow artificial intelligence, a highly dangerous combination that could wreak tremendous havoc on human civilization. Here’s why we’ll need to be ready.

First, let’s define some terms. The Technological Singularity, which you’ve probably heard of before, is the advent of recursively improving greater-than-human artificial general intelligence (or artificial superintelligence), or the development of strong AI (human-like artificial general intelligence).

But this particular concern has to do with the rise of weak AI — expert systems that match or exceed human intelligence in a narrowly defined area, but not in broader areas. As a consequence, many of these systems will work outside of human comprehension and control.

But don’t let the name fool you; there’s nothing weak about the kind of damage it could do.

Before the Singularity

The Singularity is often misunderstood as AI that’s simply smarter than humans, or the rise of human-like consciousness in a machine. Neither are the case. To a non-trivial degree, much of our AI already exceeds human capacities. It’s just not sophisticated and robust enough to do any significant damage to our infrastructure. The trouble will start to come when, in the case of the Singularity, a highly generalized AI starts to iteratively improve upon itself.


And indeed, when the Singularity hits, it’ll be, in the words of mathematician I. J. Good, an intelligence explosion, and it will indeed hit us like a bomb. Human control will forever be relegated to the sidelines, in whatever form that might take.

A pre-Singularity AI disaster or catastrophe, on the other hand, will be containable. But just barely. It’ll likely arise from an expert system or super-sophisticated algorithm run amok. And the worry is not so much its power — which is definitely a significant part of the equation — but the speed at which it will inflict the damage. By the time we have a grasp on what’s going on, something terrible may have happened.

Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots, take control of a factory or military installation, or unleash some kind of propagating blight that will be difficult to get rid of (whether in the digital realm or the real world). The possibilities are frighteningly endless.

Our infrastructure is becoming increasingly digital and interconnected — and by consequence, increasingly vulnerable. In a few decades, it will be brittle as glass, with the bulk of human activity dependent upon it.

And it is indeed a possibility. The signs are all there.

Accidents Will Happen

Back in 1988, a Cornell University student named Robert Morris scripted a software program that could measure the size of the Internet. To make it work, he equipped it with a few clever tricks to help it along its way, including an ability to exploit known vulnerabilities in popular utility programs running on UNIX. This allowed the program to break into those machines and copy itself, thus infecting those systems.


On November 2, 1988, Morris introduced his program to the world. It quickly spread to thousands of computers, disrupting normal activities and Internet connectivity for days. Estimates put the cost of the damage anywhere between $10,000 and $100,000. Dubbed the “Morris Worm,” it’s considered the first worm in human history — one that prompted DARPA to fund the establishment of the CERT/CC at Carnegie Mellon University to anticipate and respond to this new kind of threat.

As for Morris, he was charged under the Computer Fraud and Abuse Act and given a $10,000 fine.

But the takeaway from the incident was clear: Despite our best intentions, accidents will happen. And as we continue to develop and push our technologies forward, there’s always the chance that they will operate outside our expectations — and even our control.

Down to the Millisecond

Indeed, unintended consequences are one thing; containability is quite another. Our technologies are increasingly operating at levels beyond our real-time capacities. The best example of this comes from the world of high-frequency stock trading (HFT).


In HFT, securities are traded on a rapid-fire basis through the use of powerful computers and algorithms. A single investment position can last for a few minutes, or a few milliseconds; there can be as many as 500 transactions made in a single second. This type of computer trading can result in thousands upon thousands of transactions a day, each and every one of them decided by super-sophisticated scripts. The human traders involved (such as they are) just sit back and watch, incredulous at the machinations happening at breakneck speed.

“Back in the day, I used to be able to explain to a client how their trade was executed. Technology has made the trade process so convoluted and complex that I can’t do that any more,” noted PNC Wealth Management’s Jim Dunigan in a Markets Media article.

Clearly, the ability to assess market conditions and react quickly is a valuable asset to have. Indeed, according to a 2009 study, HFT firms accounted for 60 to 73% of all U.S. equity trading volume; as of last year that number had dropped to about 50%, but HFT is still considered a highly profitable form of trading.

To date, the most significant single incident involving HFT came at 2:45 p.m. on May 6, 2010. For a period of about five minutes, the Dow Jones Industrial Average plummeted over 1,000 points (approximately 9%); for a few minutes, $1 trillion in market value vanished. About 600 points were recovered 20 minutes later. It’s now called the 2010 Flash Crash, the second largest point swing in history and the biggest one-day point decline.

The incident prompted an investigation led by Gregg E. Berman of the U.S. Securities and Exchange Commission (SEC), together with the Commodity Futures Trading Commission (CFTC). The investigators posited a number of theories (of which there are many, some of them quite complex), but their primary concern was the impact of HFT. They determined that the collective efforts of the algorithms exacerbated price declines; by selling aggressively, the trader-bots worked to eliminate their positions and withdraw from the market in the face of uncertainty.
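The feedback dynamic the investigators describe, algorithms selling into a falling market and thereby deepening the fall, can be sketched with a toy model. This is purely illustrative: the bot count, price-impact, and trigger values below are made-up parameters, not calibrated to any real market.

```python
# Toy model of a selling cascade: bots that sell whenever they see a
# sharp drop, and whose collective selling deepens the drop.
# All parameters are illustrative, not market-calibrated.
def simulate(price=100.0, bots=50, impact=0.002, trigger=-0.005,
             shock=-0.01, steps=10):
    prev = price
    price *= 1 + shock          # an initial 1% sell-off starts the slide
    history = [prev, price]
    for _ in range(steps):
        change = price / prev - 1
        prev = price
        if change < trigger:                 # bots see a sharp drop...
            price *= (1 - impact) ** bots    # ...and all sell into it
        history.append(price)
    return history

crash = simulate()          # many bots: the decline feeds on itself
calm = simulate(bots=1)     # one bot: the dip is absorbed and stops
```

With many bots the price collapses within a few steps, while a single bot's selling is too small to re-trigger the rule, so the decline halts. That is, roughly, the cascade story told about the Flash Crash.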

The following year, an independent study concluded that technology played an important role, but that it wasn’t the entire story. Looking at the Flash Crash in detail, the authors argued that it was “the result of the new dynamics at play in the current market structure,” and the role played by “order toxicity.” At the same time, however, they noted that HFT traders exhibited trading patterns inconsistent with the traditional definition of market making, and that they were “aggressively [trading] in the direction of price changes.”

HFT is also playing an increasing role in currencies and commodities, making up about 28% of the total volume in futures markets. Not surprisingly, this area has become vulnerable to mini crashes. Following incidents involving the trading of cocoa and sugar, the Wall Street Journal highlighted the growing concerns:

“The electronic platform is too fast; it doesn’t slow things down” like humans would, said Nick Gentile, a former cocoa floor trader. “It’s very frustrating” to go through these flash crashes, he said…

…The same is happening in the sugar market, provoking outrage within the industry. In a February letter to ICE, the World Sugar Committee, which represents large sugar users and producers, called algorithmic and high-speed traders “parasitic.”

Just how culpable HFT is in the phenomenon of flash crashes is an open question, but it’s clear that the trading environment is changing rapidly. Market analysts now speak in terms of “microstructures,” trading “circuit breakers,” and the “VPIN Flow Toxicity metric.” It’s also difficult to predict how serious future flash crashes could become. If sufficient measures aren’t put into place to halt these events when they happen, and assuming HFT is scaled up in terms of market breadth, scope, and speed, it’s not unreasonable to imagine events in which massive and irrecoverable losses occur. And indeed, some analysts are already predicting systems that can support 100,000 transactions per second.
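The “circuit breakers” mentioned above are, at their core, just a halt rule: stop trading once the price has fallen too far, too fast. A minimal sketch follows; the window size and the 5% threshold are invented for illustration, not taken from any exchange's actual rules.

```python
from collections import deque

def make_breaker(window=5, max_drop=0.05):
    """Return a check(price) function that is True once the price has
    fallen more than max_drop from its peak within the rolling window."""
    prices = deque(maxlen=window)   # keeps only the last `window` ticks
    def check(price):
        prices.append(price)
        peak = max(prices)
        return (peak - price) / peak > max_drop   # True means: halt trading
    return check

halt = make_breaker()
ticks = [100, 99.5, 99, 93, 92]
signals = [halt(p) for p in ticks]   # the 7% and 8% drops trip the breaker
```

Real exchange-level breakers key off index declines and staged thresholds, but the basic shape, a rolling window plus a drop threshold, is the same.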

More to the point, HFT and flash crashes may not amount to an economic disaster, but they’re a potent example of how our other mission-critical systems may reach unprecedented tempos. As we defer critical decision making to our technological artifacts, and as they increase in power and speed, we are increasingly finding ourselves outside the locus of control and comprehension.

When AI Screws Up, It Screws Up Badly

No doubt, we are already at the stage when computers exceed our ability to understand how and why they do the things they do. One of the best examples of this is IBM’s Watson, the expert computer system that trounced the world’s best Jeopardy players in 2011. To make it work, Watson’s developers scripted a series of programs that, when pieced together, created an overarching game-playing system. And they’re not entirely sure how it works.

David Ferrucci, the lead researcher of the project, put it this way:

Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.

Which is actually quite disturbing. Not so much because we don’t understand why it succeeds, but because we don’t necessarily understand why it fails. As a result, we can’t understand or anticipate the nature of its mistakes.


For example, Watson had one memorable gaffe that clearly demonstrated how, when an AI fails, it fails big time. During the Final Jeopardy portion, it was asked, “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Watson responded with, “What is Toronto?”

Given that Toronto’s Billy Bishop Airport is named after a war hero, that was not a terrible guess. But the reason this was such a blatant mistake is that the category was “U.S. Cities.” Toronto, not being a U.S. city, couldn’t possibly have been the correct answer.

Again, this is the important distinction that needs to be made when addressing the potential for a highly generalized AI. Weak, narrow systems are extremely powerful, but they’re also extremely stupid; they’re completely lacking in common sense. Given enough autonomy and responsibility, a failed answer or a wrong decision could be catastrophic.
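The missing “common sense” step can be sketched as a hard constraint check: rank candidate answers by confidence, but reject any that violate the category. The candidate list, scores, and city set below are invented for illustration; Watson's real pipeline is vastly more elaborate.

```python
# Hypothetical sketch: pick the highest-confidence answer that also
# satisfies an explicit category constraint, the kind of hard filter
# a narrow system can skip entirely. All names and scores are made up.
US_CITIES = {"Chicago", "New York", "Boston", "Houston"}

def best_answer(candidates, satisfies_category):
    # Walk candidates from highest to lowest confidence.
    for answer, _score in sorted(candidates, key=lambda c: -c[1]):
        if satisfies_category(answer):
            return answer
    return None   # better to abstain than to answer out of category

candidates = [("Toronto", 0.30), ("Chicago", 0.14)]
pick = best_answer(candidates, lambda a: a in US_CITIES)   # "Chicago"
```

With the filter, the higher-scoring but out-of-category "Toronto" is discarded and "Chicago" wins (which was, in fact, the correct response in the broadcast game).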

As another example, take the recent initiative to give robots their very own Internet. By providing and sharing information amongst themselves, it’s hoped that these bots can learn without having to be programmed. A problem arises, however, when instructions for a task are mismatched, the result of an AI error. A stupid robot, acting without common sense, would simply execute the task even when the instructions are wrong. In another 30 to 40 years, one can only imagine the kind of damage that could be done, either accidentally or by a malicious script kiddie.

Moreover, because expert systems like Watson will soon be able to conjure answers to questions that are beyond our comprehension, we won’t always know when they’re wrong. And that is a frightening prospect.

The Shape of Things to Come

It’s difficult to know exactly how, when, or where the first true AI catastrophe will occur, but we’re still several decades off. Our infrastructure is still not integrated or robust enough to allow for something really terrible to happen. But by the 2040s (if not sooner), our highly digital and increasingly interconnected world will be susceptible to these sorts of problems.


By that time, our power systems (electric grids, nuclear plants, etc.) could be vulnerable to errors and deliberate attacks. Already today, the U.S. has been able to infiltrate the control system software known to run centrifuges in Iranian nuclear facilities by virtue of its Stuxnet program — an incredibly sophisticated computer virus (if you can call it that). This program represents the future of cyber-espionage and cyber-weaponry — and it’s a pale shadow of things to come.

In the future, more advanced versions will likely be able not just to infiltrate enemy or rival systems, but to reverse-engineer them, inflict terrible damage, or even take control. But as the Morris Worm incident showed, it may be difficult to predict the downstream effects of these actions, particularly when dealing with autonomous, self-replicating code. It could also result in an AI arms race, with each side developing programs and counter-programs to get an edge on the other side’s technologies.


And though it might seem like the premise of a sci-fi novel, an AI catastrophe could also involve the deliberate or accidental takeover of any system run by an AI. This could include integrated military equipment, self-driving vehicles (including airplanes), robots, and factories. Should something like this occur, the challenge will be to disable the malign script (or source program) as quickly as possible, which may not be easy.

More conceptually, and in the years immediately preceding the onset of uncontainable self-improving machine intelligence, a narrow AI could be used (again, either deliberately or unintentionally) to pursue a poorly articulated goal. The powerful system could over-prioritize a certain aspect, or grossly under-prioritize another. And it could make sweeping changes in the blink of an eye.

Hopefully, if and when this does happen, it will be containable and relatively minor in scope. But it will likely serve as a call to action in anticipation of more catastrophic episodes. As for now, and in consideration of these possibilities, we need to ensure that our systems are secure, smart, and resilient.

Images: Shutterstock/agsandrew; Washington Times; TIME, Potapov Alexander/Shutterstock.

This article can also be found on the io9 website at

Just Another Definition of “The Singularity”

There are plenty of definitions of the singularity out there and I don’t plan to post any more of these, but I thought this one (from WhatIs) was worth having on Dawn of Giants.

Singularity (the)

Part of the Nanotechnology glossary:

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.
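The exponential-versus-incremental distinction is easy to make concrete. In this sketch the growth rate, step size, and the notional "capability" unit are all arbitrary; the point is only the shape of the two curves.

```python
# Illustrative only: a fixed-step improver vs. a self-compounding one.
def incremental(start=1.0, step=1.0, generations=30):
    c = start
    for _ in range(generations):
        c += step          # gains a fixed amount per generation
    return c

def exponential(start=1.0, rate=0.5, generations=30):
    c = start
    for _ in range(generations):
        c *= 1 + rate      # gains in proportion to what it already has
    return c

incremental()   # 31.0
exponential()   # roughly 191751, thousands of times larger
```

The compounding loop is the formal core of the "intelligence explosion" claim: each improvement raises the capacity to make the next improvement.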

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

“Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.”

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different.  A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality — which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term the Singularity in this context in the 1980s, when he used it in reference to the British mathematician I.J. Good’s concept of an “intelligence explosion” brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.


This article can also be found at