Hugo de Garis – Singularity Skepticism (Produced by Adam Ford)

This is Hugo de Garis talking about why people tend to react to the Singularity with a great deal of skepticism.  To address the skeptics, de Garis explains Moore’s Law and goes into its many implications.  Toward the end, de Garis notes that people will begin to come around once they see their household electronics getting smarter and smarter.

Runtime: 12:31

Video Info:

Published on Jul 31, 2012

Hugo de Garis speaks about why people are skeptical about the possibility of machine intelligence, and also reasons for believing machine intelligence is possible, and quite probably will be an issue that we will need to face in the coming decades.

If the brain guys can copy how the brain functions closely enough…we will arrive at a machine based on neuroscience ideas and that machine will be intelligent and conscious



Peter Voss Interview on Artificial General Intelligence

This is an interview with Peter Voss of Optimal talking about artificial general intelligence.  Voss discusses the skepticism that is a common reaction to talk of creating strong AI, and why (as Tony Robbins always says) the past does not equal the future.  He also talks about why he thinks Ray Kurzweil’s prediction that AGI won’t be achieved for another 20 years is wrong – (and I gotta say, he makes a good point).  If you are interested in artificial intelligence or ethics in technology, then you’ll want to watch this one…

And don’t worry, the line drawing effect at the beginning of the video only lasts a minute.

Runtime: 39:55

Video Info:

Published on Jan 8, 2013

Peter Voss is the founder and CEO of Adaptive A.I. Inc, an R&D company developing a high-level general intelligence (AGI) engine. He is also founder and CTO of Smart Action Company LLC, which builds and supplies AGI-based virtual contact-center agents — intelligent, automated phone operators.

Peter started his career as an entrepreneur, inventor, engineer and scientist at age 16. After several years of experience in electronics engineering, at age 25 he started a company to provide advanced custom hardware and software solutions. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked on a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving new breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., and last year founded Smart Action Company as its commercialization division.

Peter considers himself a free-minds-and-markets Extropian, and often writes and presents on philosophical topics including rational ethics, free will and artificial minds. He is also deeply involved with futurism and life-extension.

My main occupation is research in high-level, general (domain independent, autonomous) Artificial Intelligence — “Adaptive A.I. Inc.”

I believe that integrating insights from the following areas of cognitive science is crucial for rapid progress in this field:

• Philosophy/epistemology – understanding the true nature of knowledge.
• Cognitive psychology (incl. developmental & psychometric) for the analysis of cognition, and especially of general conceptual intelligence.
• Computer science – self-modifying systems, combining new connectionist pattern-manipulation techniques with ‘traditional’ AI engineering.

Anyone who shares my passion – and/or concerns – for this field is welcome to contact me for brainstorming and possible collaboration.

My other big passion is for exploring what I call Optimal Living: Maximizing both the quantity & quality of life. I see personal responsibility and optimizing knowledge acquisition as key. Specific interests include:

• Rationality, as a means for knowledge. I’m largely sympathetic to the philosophy of Objectivism, and have done quite a bit of work on developing a rational approach to (personal & social) ethics.
• Health (quality): physical, financial, cognitive, and emotional (passions, meaningful relationships, appreciation of art, etc.). Psychology: IQ & EQ.
• Longevity (quantity): general research, CRON (calorie restriction), cryonics.
• Environment: economic, social, political systems conducive to Optimal Living.

These interests logically lead to an interest in Futurism, in technology for improving life – overcoming limits to personal growth & improvement. The transhumanist philosophy of Extropianism best embodies this quest. Specific technologies that seem to hold most promise include AI, Nanotechnology, & various health & longevity approaches mentioned above.

I always enjoy meeting new people to explore ideas, and to have my views critiqued. To this end I am involved in a number of discussion groups and salons (e.g. ‘Kifune’ futurist dinner/discussion group). Along the way I’m trying to develop and learn the complex art of constructive dialog.

Interview done at SENS party LA 20th Dec 2012.



From the Human Brain to the Global Brain by Marios Kyriazis

This paper (From the Human Brain to the Global Brain by Marios Kyriazis) talks about brain augmentation and the possible (probable?) emergence of a global brain.  This is actually a concept which is quite familiar to me because it is the backdrop to a science fiction novel (possibly series) I’ve been writing in my spare time – limited as that may be, but more on that another time.  I’d just like to point out (and I know I’m not the first) that we already have the framework (the internet) for a rudimentary global brain.  Really, all it lacks is sophistication.


From the Human Brain to the Global Brain


Human intelligence (i.e., the ability to consistently solve problems successfully) has evolved through the need to adapt to changing environments. This is not only true of our past but also of our present. Our brain faculties are becoming more sophisticated by cooperating and interacting with technology, specifically digital communication technology (Asaro, 2008).

When we consider the matter of brain function augmentation, we take it for granted that the issue refers to the human brain as a distinct organ. However, as we live in a complex technological society, it is now becoming clear that the issue is much more complicated. Individual brains cannot simply be considered in isolation, and their function is no longer localized or contained within the cranium, as we now know that information may be transmitted directly from one brain to another (Deadwyler et al., 2013; Pais-Vieira et al., 2013). This issue has been discussed in detail, and attempts have been made to study the matter within a wider and more global context (Nicolelis and Laporta, 2011). Recent research in the field of brain-to-brain interfaces has provided the basis for further research and the formation of new hypotheses in this respect (Grau et al., 2014; Rao et al., 2014). This concept of rudimentary “brain nets” may be expanded in a more global fashion, and within this framework, it is possible to envisage a much bigger and more abstract “meta-entity” of inclusive and distributed capabilities, called the Global Brain (Mayer-Kress and Barczys, 1995; Heylighen and Bollen, 1996; Johnson et al., 1998; Helbing, 2011; Vidal, in press).

This entity reciprocally feeds information back to its components—the individual human brains. As a result, novel and hitherto unknown consequences may materialize, such as the emergence of rudimentary global “emotion” (Garcia and Tanase, 2013; Garcia et al., 2013; Kramer et al., 2014) and the appearance of decision-making faculties (Rodriguez et al., 2007). These characteristics may have a direct impact upon our biology (Kyriazis, 2014a). This has long been discussed in the futurist and sociology literature (Engelbart, 1988), but it is now also becoming more relevant to systems neuroscience, partly because of the very promising research in brain-to-brain interfaces. The concept is grounded on scientific principles (Last, 2014a) and mathematical modeling (Heylighen et al., 2012).

Augmenting Brain Function on a Global Scale

It can be argued that the continual enhancement of brain function in humans, i.e., the tendency toward increasing intellectual sophistication, aligns broadly with the main direction of evolution (Steward, 2014). This tendency also obeys Ashby’s Law of Requisite Variety (Ashby, 1958), which essentially states that, for any system to be stable, the number of states of its control mechanisms must be greater than the number of states in the system being controlled. This means that, within an ever-more-complex technological environment, we must continue to increase our brain function (mostly through using, or merging with, technology, as in the example of brain-to-brain communication mentioned above) in order to improve integration and maintain stability of the wider system. Several other authors (Maynard Smith and Szathmáry, 1997; Woolley et al., 2010; Last, 2014a) have expanded on this point, which seems to underpin our continual search for brain enrichment.
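Ashby's requirement can be made concrete with a toy model. This is a hedged sketch only: the outcome rule `(d + r) % n` is an invented illustration, not Ashby's own formalism. The point it demonstrates is the one above: a regulator can hold a system's outcome constant only if it has at least as many distinct responses as there are disturbances to counter.

```python
# Toy sketch of Ashby's Law of Requisite Variety.
# Assumption (invented for illustration): outcomes follow the rule
# (d + r) % n, where d is a disturbance and r is the regulator's response;
# the regulator succeeds if it can always drive the outcome to the target.

def can_regulate(n_disturbances, responses, target=0):
    """True if every disturbance can be countered by some available response."""
    return all(
        any((d + r) % n_disturbances == target for r in responses)
        for d in range(n_disturbances)
    )

# A regulator whose variety matches the disturbances copes with all of them...
print(can_regulate(4, responses=[0, 1, 2, 3]))  # True

# ...but one with less variety than its environment cannot hold the system stable.
print(can_regulate(4, responses=[0, 1]))  # False
```

In the same spirit, the argument above is that as the technological environment gains states, our (augmented) control mechanisms must gain variety to keep the wider system stable.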

The tendency to enrich our brain is an innate characteristic of humans. We have been trying to augment our mental abilities, either intentionally or unintentionally, for millennia: through the use of botanicals and custom-made medicaments, herbs and remedies, and, more recently, synthetic nootropics and improved ways to assimilate information. Many of these methods are not only useful in healthy people but are invaluable in age-related neurodegenerative disorders such as dementia and Parkinson’s disease (Kumar and Khanum, 2012). Other neuroscience-based methods, such as transcranial laser treatments and physical implants (such as neural dust nanoparticles), are useful in enhancing cognition and modulating other brain functions (Gonzalez-Lima and Barrett, 2014).

However, these approaches are limited to the biological human brain as a distinct agent. As shown by the increased research interest in brain-to-brain communication (Trimper et al., 2014), I argue that the issue of brain augmentation is now taking on a more global aspect. The reason is the continual development of technology, which is changing our society and culture (Long, 2010). Certain brain faculties that originally evolved for solving practical physical problems have been co-opted and exapted for handling more abstract, metaphorical problems, helping humans occupy a better position within a technological niche.

The line between human brain function and digital information technologies is progressively becoming indistinct and less well-defined. This blurring is possible through the development of new technologies which enable more efficient brain-computer interfaces (Pfurtscheller and Neuper, 2002), and recently, brain-to-brain interfaces (Grau et al., 2014).

We are now in a position to expand on this emergent worldview and examine what trends in systems neuroscience are likely in the near-term future. Technology has been the main driver that brought us to the position we are in today (Henry, 2014). This position is the merging of physical human brain abilities with virtual domains and automated web services (Kurzweil, 2009). Modern humans cannot be defined purely by their biological brain function. Instead, we are now becoming an amalgam of biological and virtual/digital characteristics, a discrete unit, or autonomous agent, forming part of a wider and more global entity (Figure 1).

Figure 1. Computer-generated image of internet connections world-wide (Global Brain). The conceptual similarities with the human brain are remarkable. Both networks exhibit a scale-free, fractal distribution, with some weakly-connected units, and some strongly-connected ones which are arranged in hubs of increasing functional complexity. This helps protect the constituents of the network against stresses. Both networks are “small worlds” which means that information can reach any given unit within the network by passing through only a small number of other units. This assists in the global propagation of information within the network, and gives each and every unit the functional potential to be directly connected to all others. Source: The Opte Project/Barrett Lyon. Used under the Creative Commons Attribution-Non-Commercial 4.0 International License.
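The “small world” property described in the caption can be sketched with a toy Watts–Strogatz-style model. This is a hedged illustration under stated assumptions: the 100-node ring, 4 neighbours per node, and 20% rewiring probability are arbitrary choices, not measurements of the actual internet or brain graph. Rewiring a fraction of a regular ring’s links at random creates the shortcuts that let information reach any unit through only a small number of hops:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Rewire each edge, with probability p, to a random new endpoint."""
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):          # snapshot, so mutation is safe
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def average_path_length(adj):
    """Mean BFS distance over all reachable node pairs."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(42)
print(round(average_path_length(ring_lattice(100, 4)), 2))  # → 12.88 (pure ring)

small_world = rewire(ring_lattice(100, 4), 0.2, rng)
print(round(average_path_length(small_world), 2))  # far fewer hops after rewiring
```

On the pure ring a message needs about 13 hops on average; after a modest amount of random rewiring the average drops sharply, which is the sense in which both networks in the figure are “small worlds.”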

Large Scale Networks and the Global Brain

The Global Brain (GB) (Heylighen, 2007; Iandoli et al., 2009; Bernstein et al., 2012) is a self-organizing system which encompasses all those humans who are connected with communication technologies, as well as the emergent properties of these connections. Its intelligence and information-processing characteristics are distributed, in contrast to those of individuals, whose intelligence is localized. Its characteristics emerge from the dynamic networks and global interactions between its individual agents. These individual agents are not merely the biological humans but are something more complex. In order to describe this relationship further, I have introduced the notion of the noeme, an emergent agent, which helps formalize the relationships involved (Kyriazis, 2014a). The noeme is a combination of a distinct physical brain function and that of an “outsourced” virtual one. It is the intellectual “networked presence” of an individual within the GB, a meaningful synergy between each individual human, their social interactions and artificial agents, globally connected to other noemes through digital communications technology (and, perhaps soon, through direct brain-to-brain interfaces). A comparison can be made with neurons which, as individual discrete agents, form part of the human brain. In this comparison, the noemes act as the individual, information-sharing discrete agents which form the GB (Gershenson, 2011). The modeling of noemes helps us define ourselves in a way that strengthens our rational presence in the digital world. By trying to enhance our information-sharing capabilities we become better integrated within the GB and so become a valuable component of it, encouraging mechanisms active in all complex adaptive systems to operate in a way that prolongs our retention within this system (Gershenson and Fernández, 2012), i.e., prolongs our biological lifespan (Kyriazis, 2014b; Last, 2014b).


This concept is a helpful way of interpreting the developing cognitive relationship between humans and artificial agents as we evolve and adapt to our changing technological environment. The concept of the noeme provides insights with regards to future problems and opportunities. For instance, the study of the function of the noeme may provide answers useful to biomedicine, by coopting laws applicable to any artificial intelligence medium and using these to enhance human health (Kyriazis, 2014a). Just as certain physical or pharmacological therapies for brain augmentation are useful in neurodegeneration in individuals, so global ways of brain enhancement are useful in a global sense, improving the function and adaptive capabilities of humanity as a whole. One way to augment global brain function is to increase the information content of our environment by constructing smart cities (Caragliu et al., 2009), expanding the notion of the Web of Things (Kamilaris et al., 2011), and by developing new concepts in educational domains (Veletsianos, 2010). This improves the information exchange between us and our surroundings and helps augment brain function, not just physically in individuals, but also virtually in society.

Practical ways for enhancing our noeme (i.e., our digital presence) include:

• Cultivate a robust social media base, in different forums.

• Aim for respect, esteem and value within your virtual environment.

• Increase the number of your connections both in virtual and in real terms.

• Stay consistently visible online.

• Share meaningful information that requires action.

• Avoid the use of meaningless, trivial or outdated platforms.

• Increase the unity of your connections by using only one (user) name for all online and physical platforms.

These methods can help increase information sharing and facilitate our integration within the GB (Kyriazis, 2014a). In a practical sense, these actions are easy to perform and can encompass a wide section of modern communities. Although the benefits of these actions are not well studied, nevertheless some initial findings appear promising (Griffiths, 2002; Granic et al., 2014).

Concluding Remarks

With regards to improving brain function, we are gradually moving away from the realms of science fiction and into the realms of reality (Kurzweil, 2005). It is now possible to suggest ways to enhance our brain function, based on novel concepts dependent not only on neuroscience but also on digital and other technology. The result of such augmentation does not only benefit the individual brain but can also improve all humanity in a more abstract sense. It improves human evolution and adaptation to new technological environments, and this, in turn, may have positive impact upon our health and thus longevity (Solman, 2012; Kyriazis, 2014c).

In a more philosophical sense, our progressive and distributed brain function amplification has begun to lead us toward attaining “god-like” characteristics (Heylighen, in press), particularly “omniscience” (through Google, Wikipedia, the semantic web, and Massive Open Online Courses (MOOCs), which dramatically enhance our knowledge base) and “omnipresence” (cloud and fog computing, Twitter, YouTube, the Internet of Things, the Internet of Everything). These are the result of the outsourcing of our brain capabilities to the cloud in a distributed and universal manner, which is an ideal global neural augmentation. The first steps have already been taken through brain-to-brain communication research. The concept of systems neuroscience is thus expanded to encompass not only the human nervous network but also a global network with societal and cultural elements.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Acknowledgments

I thank the reviewers for their help and input, particularly the first reviewer, who dedicated a lot of time to improving the paper.

References
Asaro, P. (2008). “From mechanisms of adaptation to intelligence amplifiers: the philosophy of W. Ross Ashby,” in The Mechanical Mind in History, eds M. Wheeler, P. Husbands, and O. Holland (Cambridge, MA: MIT Press), 149–184.

Ashby, W. R. (1958). Requisite Variety and its implications for the control of complex systems. Cybernetica (Namur) 1, 2.

Bernstein, A., Klein, M., and Malone, T. W. (2012). Programming the Global Brain. Commun. ACM 55, 1. doi: 10.1145/2160718.2160731

Caragliu, A., Del Bo, C., and Nijkamp, P. (2009). Smart Cities in Europe. Serie Research Memoranda 0048, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.

Deadwyler, S. A., Berger, T. W., Sweatt, A. J., Song, D., Chan, R. H., Opris, I., et al. (2013). Donor/recipient enhancement of memory in rat hippocampus. Front. Syst. Neurosci. 7:120. doi: 10.3389/fnsys.2013.00120

Engelbart, D. C. (1988). A Conceptual Framework for the Augmentation of Man’s Intellect. Computer-Supported Cooperative Work. San Francisco, CA: Morgan Kaufmann Publishers Inc. ISBN: 0-93461-57-5

Garcia, D., Mavrodiev, P., and Schweitzer, F. (2013). Social Resilience in Online Communities: The Autopsy of Friendster. Available online at: (Accessed October 8, 2014).

Garcia, D., and Tanase, D. (2013). Measuring Cultural Dynamics Through the Eurovision Song Contest. Available online at: (Accessed October 8, 2014).

Gershenson, C. (2011). The sigma profile: a formal tool to study organization and its evolution at multiple scales. Complexity 16, 37–44. doi: 10.1002/cplx.20350

Gershenson, C., and Fernández, N. (2012). Complexity and information: measuring emergence, self-organization, and homeostasis at multiple scales. Complexity 18, 29–44. doi: 10.1002/cplx.21424

Gonzalez-Lima, F., and Barrett, D. W. (2014). Augmentation of cognitive brain function with transcranial lasers. Front. Syst. Neurosc. 8:36. doi: 10.3389/fnsys.2014.00036

Granic, I., Lobel, A., and Engels, R. C. M. E. (2014). The Benefits of Playing Video Games. American Psychologist. Available online at: (Accessed October 5, 2014).

Grau, C., Ginhoux, R., Riera, A., Nguyen, T. L., Chauvat, H., Berg, M., et al. (2014). Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS ONE 9:e105225. doi: 10.1371/journal.pone.0105225

Griffiths, M. (2002). The educational benefits of videogames. Educ. Health 20, 47–51.

Helbing, D. (2011). FuturICT-New Science and Technology to Manage Our Complex, Strongly Connected World. Available online at: (Accessed November 6, 2014).

Henry, C. (2014). IT and the Legacy of Our Cultural Heritage EDUCAUSE Review, Vol. 49 (Louisville, CO: D. Teddy Diggs).

Heylighen, F., and Bollen, J. (1996). “The World-Wide Web as a Super-Brain: from metaphor to model,” in Cybernetics and Systems’ 96, ed R. Trappl (Vienna: Austrian Society For Cybernetics), 917–922.

Heylighen, F. (2007). The Global Superorganism: an evolutionary-cybernetic model of the emerging network society. Soc. Evol. Hist. 6, 58–119.

Heylighen, F., Busseniers, E., Veitas, V., Vidal, C., and Weinbaum, D. R. (2012). Foundations for a Mathematical Model of the Global Brain: architecture, components, and specifications (No. 2012-05). GBI Working Papers. Available online at: (Accessed November 6, 2014).

Heylighen, F. (in press). “Return to Eden? promises and perils on the road to a global superintelligence,” in The End of the Beginning: Life, Society and Economy on the Brink of the Singularity, eds B. Goertzel and T. Goertzel.

Johnson, N. L., Rasmussen, S., Joslyn, C., Rocha, L., Smith, S., and Kantor, M. (1998). “Symbiotic Intelligence: self-organizing knowledge on distributed networks driven by human interaction,” in Artificial Life VI, Proceedings of the Sixth International Conference on Artificial Life (Los Angeles, CA), 403–407.

Iandoli, L., Klein, M., and Zollo, G. (2009). Enabling on-line deliberation and collective decision-making through large-scale argumentation: a new approach to the design of an Internet-based mass collaboration platform. Int. J. Decis. Supp. Syst. Technol. 1, 69–92. doi: 10.4018/jdsst.2009010105

Kamilaris, A., Pitsillides, A., and Trifa, A. (2011). The Smart Home meets the Web of Things. Int. J. Ad Hoc Ubiquit. Comput. 7, 145–154. doi: 10.1504/IJAHUC.2011.040115

Kramer, A. D. I., Guillory, J. E., and Hancock, J. T. (2014). Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks. Available online at: (Accessed October 10, 2014).

Kumar, G. P., and Khanum, F. (2012). Neuroprotective potential of phytochemicals. Pharmacogn Rev. 6, 81–90. doi: 10.4103/0973-7847.99898

Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology. New York, NY: Penguin books-Viking Publisher. ISBN: 978-0-670-03384-3.

Kurzweil, R. (2009). “The coming merging of mind and machine,” in Scientific American. Available online at: (Accessed November 5, 2014).

Kyriazis, M. (2014a). Technological integration and hyper-connectivity: tools for promoting extreme human lifespans. Complexity. doi: 10.1002/cplx.21626

Kyriazis, M. (2014b). Reversal of informational entropy and the acquisition of germ-like immortality by somatic cells. Curr. Aging Sci. 7, 9–16. doi: 10.2174/1874609807666140521101102

Kyriazis, M. (2014c). Information-Sharing, Adaptive Epigenetics and Human Longevity. Available online at: (Accessed October 8, 2014).

Last, C. (2014a). Global Brain and the future of human society. World Fut. Rev. 6, 143–150. doi: 10.1177/1946756714533207

Last, C. (2014b). Human evolution, life history theory and the end of biological reproduction. Curr. Aging Sci. 7, 17–24. doi: 10.2174/1874609807666140521101610

Long, S. M. (2010). Exploring Web 2.0: The Impact of Digital Communications Technologies on Youth Relationships and Sociability. Available online at: (Accessed November 5, 2014).

Mayer-Kress, G., and Barczys, C. (1995). The global brain as an emergent structure from the Worldwide Computing Network, and its implications for modeling. Inform. Soc. 11, 1–27. doi: 10.1080/01972243.1995.9960177

Maynard Smith, J., and Szathmáry, E. (1997). The Major Transitions in Evolution. Oxford: Oxford University Press.

Nicolelis, M., and Laporta, A. (2011). Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. New York, NY: Times Books, Henry Holt. ISBN: 0-80509052-5.

Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., and Nicolelis, M. (2013). A brain-to-brain interface for real-time sharing of sensorimotor information. Sci. Rep. 3:1319. doi: 10.1038/srep01319

Pfurtscheller, G., and Neuper, C. (2002). Motor imagery and direct brain-computer communication. Proc. IEEE 89, 1123–1134. doi: 10.1109/5.939829

Rao, R. P. N., Stocco, A., Bryan, M., Sarma, D., and Youngquist, T. M. (2014). A direct brain-to-brain interface in humans. PLoS ONE 9:e111332. doi: 10.1371/journal.pone.0111332

Rodriguez, M. A., Steinbock, D. J., Watkins, J. H., Gershenson, C., Bollen, J., Grey, V., et al. (2007). Smartocracy: Social Networks for Collective Decision Making (p. 90b). Los Alamitos, CA: IEEE Computer Society.

Solman, P. (2012). As Humans and Computers Merge… Immortality? Interview with Ray Kurzweil. PBS. 2012-07-03. Available online at: (Retrieved November 5, 2014).

Steward, J. E. (2014). The direction of evolution: the rise of cooperative organization. Biosystems 123, 27–36. doi: 10.1016/j.biosystems.2014.05.006

Trimper, J. B., Wolpe, P. R., and Rommelfanger, K. S. (2014). When “I” becomes “We”: ethical implications of emerging brain-to-brain interfacing technologies. Front. Neuroeng. 7:4. doi: 10.3389/fneng.2014.00004

Veletsianos, G. (Ed.). (2010). Emerging Technologies in Distance Education. Edmonton, AB: AU Publisher.

Vidal, C. (in press). “Distributing cognition: from local brains to the global brain,” in The End of the Beginning: Life, Society and Economy on the Brink of the Singularity, eds B. Goertzel and T. Goertzel.

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., and Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science 330, 686–688. doi: 10.1126/science.1193147

Keywords: global brain, complex adaptive systems, human longevity, techno-cultural society, noeme, systems neuroscience

Citation: Kyriazis M (2015) Systems neuroscience in focus: from the human brain to the global brain? Front. Syst. Neurosci. 9:7. doi: 10.3389/fnsys.2015.00007

Received: 14 October 2014; Accepted: 14 January 2015;
Published online: 06 February 2015.

Edited by:

Manuel Fernando Casanova, University of Louisville, USA

Reviewed by:

Mikhail Lebedev, Duke University, USA
Andrea Stocco, University of Washington, USA

Copyright © 2015 Kyriazis. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.



Quantified Self Revolution

The quantified self revolution is the idea that as the data we accumulate on a daily basis grows and becomes more complete, our understanding of ourselves deepens, and we can use that data to create better internal and external environments for ourselves.  The article is called Quantified Self revolution: Hello Human 2.0 and features a video in the Shots of Awe series with Jason Silva called Explore The “Quantified Self” Revolution with Jason Silva.

Quantified Self revolution : Hello Human 2.0

This week’s Quantified Self roundup features a filmmaker’s perception of the Quantified Self revolution, a platform that tracks everything in your life, and a design-savvy fitness tracker.

Human 2.0

Renowned filmmaker Jason Silva recently released a new video on his YouTube channel Shots of Awe in which he talks about how amazing the Quantified Self revolution is.  Silva talks about how all this data, obtained by sensors from millions of people, can be used to better analyze a person and could well be the beginning of Human 2.0.

It’s certainly an interesting concept to ponder, and that’s what Silva does best.  The filmmaker has been a guest on theCUBE on at least one occasion, and we revisit his most recent appearance here, where he discusses Big Data and its impact on the consumer.



The biggest challenge right now in the Quantified Self revolution may be app fatigue, and it certainly doesn’t help that there are so many apps and gadgets available today.  To keep you focused and on task, there’s a new platform that will help you keep track of all the tracking that matters.

Tictrac allows you to sync all the apps and gadgets you are currently using so everything you need is in one place.  It tracks anything from your email, blood pressure, supermarket foods, food intake, and your baby to the calories you burned during your workout.  And if you have just started with the quantified self revolution and your fitness tracker or app doesn’t offer a particular tracker, you can use Tictrac to track anything you please.

It is available on both mobile and web platforms so you can check your progress anytime.

Via Heartbeat Bracelet


If you’re still looking for a fashionable fitness tracker, you might want to check out the Via Heartbeat Bracelet.  The Via Heartbeat is still a Kickstarter project and has a long way to go to achieve its $300,000 funding goal.

What makes this fitness tracker different is that it looks simple but elegant; if people aren’t familiar with it, no one would guess that it’s a fitness tracker.  It is designed to fit comfortably and stay in place no matter how rigorous your workout routine is.  Through its web app, you can set your goals.  These will automatically sync with your bracelet, which will glow various colors depending on which of your goals is being achieved.

If you are interested in this project, make sure to check it out and help fund it.

This article can also be found on the SiliconANGLE website at

The first video can also be found at

The second video can also be found at

The third video can also be found on the Kickstarter website at

Video Info:

Published on Nov 5, 2013

“We will measure everything… and feed that information back into the system.”

The Quantified Self Revolution. You’ve heard the buzz term, but it’s the idea that as we extend computation into everything, and as we extend sensors into everything, we’re increasingly extending those sensors into ourselves – creating a data rich, always on stream of information about our biological functioning.

Join Jason Silva every week as he freestyles his way into the complex systems of society, technology and human existence and discusses the truth and beauty of science in a form of existential jazz. New episodes every Tuesday.

Watch More Shots of Awe on TestTube

Subscribe now!…

Jason Silva on Twitter

Jason Silva on Facebook

Jason Silva on Google+…

Tictrac is a Lifestyle Design Platform that empowers people through their own data. Users can connect with health and fitness apps and devices they may already use, from blood pressure monitors, wireless weight scales, sleep/stress trackers, diet and activity monitors to email, calendar, weather and much more. We currently sync with over 50 services/devices, from fitness (MyFitnessPal, Runkeeper, Endomondo) to medical (Withings, VitaDock), personal (Fitbit, BodyMedia, Garmin), and even social (Facebook, Twitter, Klout), adding new API integrations every week.

Users can visualise their data in personal dashboards that give them insights into how to improve their lifestyle. Users can also cross-reference disparate sets of data to see how one aspect of their lives may affect another. They can then share their dashboards with professionals like their physician, personal trainer or coach, who can interpret that data and tailor their programmes accordingly.

External Links

Transhumanism and Money by Zeev Kirsch

Other than the Star Wars/Star Trek mixup at the beginning, this was a pretty good read about transhumanity and money by Zeev Kirsch at  (Just messing with you, Zeev.  I know it was just a typo.)

Transhumanism and Money

Money is at the very center of how human beings communicate with one another in complex societies, and yet it is almost completely ignored in all private K-12 education in the United States and most nations. Money isn’t economics; Money is human behavior: it is social and individual psychology. Particularly now, as the world body of nations and central banks escalate currency wars (and real wars), more people are turning their attention to money. As a long-time reader of futurism, science fiction, and, for the last decade, transhumanist literature, I’ve wondered why these genres have all generally ignored money as a question. If future technological development inevitably depends on the productivity of complex societies comprised of many individuals operating at arm’s length, then why do transhumanists and futurists ignore money? This is true even in the popular culture of futurism. In Star Wars First Contact, Captain Picard travels back in time and must explain to a compatriot that ‘in the future’ there is no money because society does not need it any more. The future of money needs more than a pop-culture non-explanation. Practical futurism, which seeks to actually create the future instead of merely hoping for change, must embrace all horizons of where our present transitions toward our future. ALL transhumanist visions require complex human coordination to be achieved, and thus, going forward from here to utopia(s), they require (m)oneys. Therefore, deliberate ignorance of the money question threatens to retard transhumanist progress from actualization. Tim Collins, a notable mind in the transhumanist movement, presents his own views on practical futurism in what he calls the ‘grinder way’; I applaud him for his deep thinking and subsequent action on the subject when it comes to human device augmentation. I’m certain he would extend the philosophy of the grinder way to include a renewed transhumanist focus upon the money question.

Let us begin at the beginning, the very beginning of humanity: primates. Research on the social behaviors and psychology of primates has been escalating in the past decade. Chimpanzees, it turns out, don’t like to share, but they will share with fellow chimpanzees under certain circumstances. In one particular repeated observation in the wild, they will trade food for sex. This means the male must obtain the food, transport the food before it spoils, and then tender the food to a female in expectation of a sexual encounter. Trading bananas for sex isn’t money, of course, but it is a primitive form of prostitution transaction, distinguished from the other, more prevalent chimpanzee sexual relationships lacking the food-trading component. This clearly doesn’t tell us anything about Money in human society, but it does tell us that the human behavior underlying the creation of money far predates the evolution of Homo sapiens. The chimps are bartering for sex, and bartering is one of the behaviors that underlies the early creation of money. Barter encapsulates the use value of money.

Millions of moons later, at some point in the evolution of Mankind between chimposapiens and homosapiens, primitive Mankind began squirreling away consumables that did not rot as quickly. From this we can assume Mankind began exchanging objects and services not only for their instant utility, but for their future utility, as a set of long-term promises and expectations. This encapsulates the savings value of money: you can save something for its use value at a later date. Less than ten thousand years ago man began working advanced stone tools and metals, the sledge and roller turned into the wheel, and yes, one day (m)oneys arrived in many, many forms and evolved alongside the societies that were creating and advancing their use.  It is a long, long story that we will never truly know, but we know that at various points things like shells, goats, shaped stones, even human beings, were used for trade, and that within the last few thousand years coined precious metals became the most popular money substrate. Since then the rise of paper notes has taken over as the predominant substrate of (m)oneys in the world. Money has evolved through various (m)oneys.

An essential jump in the modern era of money is that modern forms of (m)oney, whether metal or paper, have abandoned their ‘use’ value and transitioned to being valued exclusively as a medium of exchange (hence the saying that ‘you cannot eat gold’). While precious metals generally don’t oxidize or burn easily, paper is more vulnerable and easier to replicate (creating widespread counterfeiting problems relative to the counterfeiting of coined metal). Further along the line of money history, these specialized forms of money started to be lent out to other people in return for sets of promises, or sometimes for something called ‘interest’, which was an expectation of more Money in return than the amount lent out. Thus was born the ‘time value’ of money, which helped precipitate the growth of the massive networks of promises and expectations that have come to define the modern world. The lending value of money turned humanity away from the fight against a history of rotting warehoused forms of money, away from a history of heavy and difficult physical coinage for transport, and slowly toward simpler and cheaper methods of structuring promises and expectations. This transition could not have happened without the growth of the predictable and stable institutions that have come to define complex societies.

How do we understand Money now?

Now that we know the history of money, what is the present of money? I am not here to describe what is going on in current events, nor what the various intellectual giants of Money would explain to you about how various forms of credit comprise or don’t comprise different tiers of our modern Money system.  There are literally thousands of articles a day you can read about that. However, after years of trying to understand what Money is by studying its past, I would like to offer my own personal definition of money, one best suited to our present understanding. I believe that ‘Money’ comprises the fungible parts of the dynamic network of promises and expectations held between individuals and groups in a society:

[Fungible] because there are many commitments, promises and expectations in society that are not fungible. Some of those promises and expectations are interpersonal and even ideological in nature. For example, some are commitments and expectations based on loving relationships, or on strict hierarchical positions in secular and non-secular institutions, that cannot be exchanged in a more or less fungible manner. Interestingly, the definition of which relationships and objects are fungible changes with the values of societies and individuals themselves. Money is thus intimately connected with our personal and social value systems.

[Dynamic] because promises and expectations are not discrete platonic quantities to be metered out in units, but fuzzy neurological outputs based on our common understanding of the persons’ and groups’ behaviors and communications.

[Parts] because many things, in addition to legal tender, serve as a money in any given society. The aggregate of all these (m)oneys simultaneously represents all nodes on the infinite network of promises and expectations comprising Money in society.  Anything people find highly liquid for the purposes of trading goods and services can and does function as a (m)oney in our society. For example, in the American neo-gulag, cigarettes, candy, and bagged instant coffee serve as a money for millions of people. Yet as we all know, the major component of Money in the U.S. is the legal tender currency titled the Federal Reserve Note, or colloquially referred to as the Dollar.

The future of (m)oney and Money.

What stops people from abandoning a (m)oney, and what stops people from abandoning Money altogether? When you consider the notion of ‘abandoning’ money, it generally means currency collapse: the failure of a (m)oney to be used for the purposes of exchange, lending and savings. A (m)oney may fail slowly through time, or all at once. As observed in modern monetary history, the typical path of a modern (m)oney [money now almost always being centrally issued in the form of ‘Notes’] is that its failure begins in a predictable manner, slowly accelerating up a curve until a convexity point is reached where other (m)oneys, or no (m)oney at all, become more popular than the failing (m)oney. This tipping point is reached when a discrete change occurs in the willingness of various institutions and persons to lend the money at interest to one another (the time value of money), to possess the money over time (the savings value of money), and to use that money for payments and sales of services and products and investments (the exchange value of the money), all in that order. For example, if people started using currencies other than the Euro, the Euro would be abandoned in favor of other currencies and eventually fall out of use, its value destroyed. On its way there, people would stop lending to each other in loans denominated in Euros, people would dump their savings of reserve Euros, and last but not least people would finally stop exchanging Euros altogether.  This is how any number of currencies around the world have failed multiple times over the past decades. Luckily, we in the West believe our system to be far away from any tipping points. But not everyone who is looking toward the future agrees with this outlook. I am not going to give prognostications about the future of the dollar. Needless to say, if you knew what would happen to the dollar, you wouldn’t be telling people about it.

Big ‘M’ Money, however, is another story altogether. The question of Money goes beyond any one (m)oney, let alone the dollar. How is it that a transhumanist or other futurist could conceive of a Star Trek future where a collectivist society of enlightened humans stopped needing to use tokens to represent a network of trusts and promises? In such a society, how would individual desires be expressed in the collective framework? If I wanted to eat ALL the apples, what would stop me? When would the collective apple limit be reached for me, as opposed to for my best friend who is allergic to apples? Clearly any organized network of human beings must have a rule system. Rules implicate allowances, limits, credits, or whatever you would call them. The more collectivized a society is, the more the network of promises and expectations between individuals and groups needs to be mediated by the uber-collective, what we normally call government. Would a system of ‘credits, limits, or allowances’ registered as digital entries be anything other than a digital form of centrally planned money (which appears to be happening in Sweden)? What other system could there be?

I do not think society can operate without the fungible, dynamic set of promises and expectations we understand to be MONEY. Whether those promises and expectations can be traded more freely by individuals, or are more carefully ordered by the governing systems of that society, is another question. So, what other systems could be out there? Bitcoin is selling itself as a very powerful tool for avoiding government control over money by expediting digital exchange. Many transhumanists and futurists seem very quick to take up the Bitcoin mantle. Precious metals, while far more secure in their non-digital existence, are far more difficult to coin and trade with (especially across the distances allowed by the digital internet). And yet, many anti-futurists believe that trading their not-so-precious dollars for precious metals is wiser than trading them for digital registries in a relatively new digital system that relies upon telecommunications networks for maintaining, if not expanding, its value. The question people are asking about the future of Money is what will happen to the most popular (m)oneys out there, such as the dollar, euro, yen and yuan. I’m not sure. But the increasing popularity of precious metals and alternatives like Bitcoin (not to mention all sorts of trading syndicates, some even using the internet) is a sure sign that people with excess savings are looking to get out of those currencies.

My question for the Transhumanist community is: what do you think about the future of Money and money? Over the years, I’ve perceived that Transhumanism is splitting into two camps which would offer separate perspectives on this question. One camp embraces futurism as necessarily collectivist at the highest level. The other camp embraces a future more focused on pockets of individualism relying upon a deep commitment to technophilia and an individualist interest in science; call this the Individualist camp of Transhumanism. They are the camp more attuned to the dangers of central planning and tyrannical collectivist decision paradigms (fascism, communism, whatever…). I am positing the classic juxtaposition of the Orwellian versus the Huxleyan fears for the collective. It seems to me that Transhumanists trending toward the Individualist camp would emphasize the importance of developing robust Money systems, whereas the Collectivists would emphasize the overall strength of the entire network of promises and expectations. The former fear that the Transhuman social aggregate will suffer excessively under capricious, powerful collectives (namely governments and central banks), while the latter submit their faith that collective leadership will provide a network of promises and expectations in the overall best interests of the Transhuman social aggregate.

The Collectivist and Individualist camps are not entirely mutually exclusive, and like a yin/yang they seem to define each other in a relativistic sense. However, both camps seem to be taking note of Bitcoin’s recent success. I’ve learned a lot about Bitcoin, and while there have been many interesting developments with it as of late, I think the transhumanist community is overlooking the actions of a nation that many consider ‘the future’: China. China is buying gold coins, not bitcoins. So please, transhumanists of the Collectivist and Individualist persuasions, or both or neither, I am asking you to help me reconcile why one of the most forward-thinking, futurist, and seemingly Collectivist nations on earth has been busy hoarding gold for a number of years. I am not asking you to ignore Bitcoin or embrace gold; I am simply asking you for a little more help and a little more attention to the great money question.

[Disclosure: I am not endorsing or dismissing Bitcoin. I do not use Bitcoin, nor have I ever used it.]

Zeev Kirsch has also predicted, at the Long Now, the following scenario:

“By the end of Obama’s second term as President, The Central Bank of China will publicly announce that they have an amount of gold in reserve that is greater than Germany’s.”

The link to that “LongBet” is HERE.

hero image from here:


This article can also be found at

National Intelligence Council Predicts a “Very Transhuman Future by 2030”

The U.S. government agency – the National Intelligence Council (NIC) – has released “a 140-page document that outlines major trends and technological developments we should expect in the next 20 years.”  The entire 140-page document can be read or downloaded at

U.S. spy agency predicts a very transhuman future by 2030


The National Intelligence Council has just released its much anticipated forecasting report, a 140-page document that outlines major trends and technological developments we should expect in the next 20 years. Among their many predictions, the NIC foresees the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing shortages in water, food and energy. But they also envision a future in which humans have been significantly modified by their technologies, heralding the dawn of the transhuman era.

This work brings to mind the National Science Foundation’s groundbreaking 2003 report, Converging Technologies for Improving Human Performance, a relatively early attempt to understand and predict how advanced biotechnologies would impact the human experience. The NIC’s new report, Global Trends 2030: Alternative Worlds, follows in the same tradition, namely one that doesn’t ignore the potential for enhancement technologies.


In the new report, the NIC describes how implants, prosthetics, and powered exoskeletons will become regular fixtures of human life, which could result in substantial improvements to innate human capacities. By 2030, the authors predict, prosthetics should reach the point where they’re as good as organic limbs, or even better. By this stage, the military will increasingly rely on exoskeletons to help soldiers carry heavy loads. Servicemen will also be administered psychostimulants to help them remain active for longer periods.

Many of these same technologies will also be used by the elderly, both as a way to maintain more youthful levels of strength and energy, and as a part of their life extension strategies.

Brain implants will also allow for advanced neural interface devices that will bridge the gap between minds and machines. These technologies will allow for brain-controlled prosthetics, some of which may be able to provide “superhuman” abilities like enhanced strength and speed, and even completely new functionality altogether.

Other mods will include retinal implants that enable night vision and access to previously invisible parts of the light spectrum. Advanced neuropharmaceuticals will allow for vastly improved working memory, attention, and speed of thought.

“Augmented reality systems can provide enhanced experiences of real-world situations,” the report notes. “Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator.”

But as the report notes, many of these technologies will only be available to those who can afford them. The authors warn that this could result in a two-tiered society comprising enhanced and non-enhanced persons, a dynamic that would likely require government oversight and regulation.

Smartly, the report also cautions that these technologies will need to be secure. Developers will be increasingly challenged to prevent hackers from interfering with these devices.

Lastly, other technologies and scientific disciplines will have to keep pace to make much of this work. For example, longer-lasting batteries will improve the practicality of exoskeletons. Progress in the neurosciences will be critical for the development of future brain-machine interfaces. And advances in flexible biocompatible electronics will enable improved integration with cybernetic implants.

The entire report can be read here.

Image: Bruce Rolff/shutterstock.

This article can also be found on io9 at

Transhumanism, medical technology and slippery slopes from the NCBI

This article (Transhumanism, medical technology and slippery slopes, from the NCBI) explores transhumanism in the medical industry.  I thought it was a bit negatively biased, but the sources are good, and disagreement doesn’t equate to invalidation in my book, so here it is…


In this article, transhumanism is considered to be a quasi‐medical ideology that seeks to promote a variety of therapeutic and human‐enhancing aims. Moderate conceptions are distinguished from strong conceptions of transhumanism and the strong conceptions were found to be more problematic than the moderate ones. A particular critique of Boström’s defence of transhumanism is presented. Various forms of slippery slope arguments that may be used for and against transhumanism are discussed and one particular criticism, moral arbitrariness, that undermines both weak and strong transhumanism is highlighted.

No less a figure than Francis Fukuyama1 recently labelled transhumanism as “the world’s most dangerous idea”. Such an eye‐catching condemnation almost certainly denotes an issue worthy of serious consideration, especially given the centrality of biomedical technology to its aims. In this article, we consider transhumanism as an ideology that seeks to evangelise its human‐enhancing aims. Given that transhumanism covers a broad range of ideas, we distinguish moderate conceptions from strong ones and find the strong conceptions more problematic than the moderate ones. We also offer a critique of Boström’s2 position published in this journal. We discuss various forms of slippery slope arguments that may be used for and against transhumanism and highlight one particular criticism, moral arbitrariness, which undermines both forms of transhumanism.

What is transhumanism?

At the beginning of the 21st century, we find ourselves in strange times; facts and fantasy find their way together in ethics, medicine and philosophy journals and websites.2,3,4 Key sites of contestation include the very idea of human nature, the place of embodiment within medical ethics and, more specifically, the systematic reflections on the place of medical and other technologies in conceptions of the good life. A reflection of this situation is captured by Dyens5 who writes,

What we are witnessing today is the very convergence of environments, systems, bodies, and ontology toward and into the intelligent matter. We can no longer speak of the human condition or even of the posthuman condition. We must now refer to the intelligent condition.

We wish to evaluate the contents of such dialogue and to discuss, if not the death of human nature, then at least its dislocation and derogation in the thinkers who label themselves transhumanists.

One difficulty for critics of transhumanism is that a wide range of views fall under its label.6 Not merely are there idiosyncrasies of individual academics, but there does not seem to exist an absolutely agreed on definition of transhumanism. One can find not only substantial differences between key authors2,3,4,7,8 and the disparate disciplinary nuances of their exhortations, but also subtle variations in the offerings of its chief representatives. It is to be expected that any ideology transforms over time, not least of all in response to internal and external criticism. Yet the transhumanism critic faces a further problem: identifying a target robust enough to stay still sufficiently long to locate it properly in these web‐driven days, without constructing a “straw man” to knock over with the slightest philosophical breeze. For the purposes of targeting a sufficiently substantial position, we identify the writings of one of its clearest and most intellectually robust proponents, the Oxford philosopher and cofounder of the World Transhumanist Association, Nick Boström,2 who has written recently in these pages of transhumanism’s desire to make good the “half‐baked” project3 that is human nature.

Before specifically evaluating Boström’s position, it is best first to offer a global definition for transhumanism and then to locate it among the range of views that fall under the heading. One of the most celebrated advocates of transhumanism is Max More, whose website reads “no more gods, no more faith, no more timid holding back. The future belongs to posthumanity”.8 We will then have a clearer idea of the kinds of position transhumanism stands in direct opposition to. Specifically, More8 asserts,

“Transhumanism” is a blanket term given to the school of thought that refuses to accept traditional human limitations such as death, disease and other biological frailties. Transhumans are typically interested in a variety of futurist topics, including space migration, mind uploading and cryonic suspension. Transhumans are also extremely interested in more immediate subjects such as bio‐ and nano‐technology, computers and neurology. Transhumans deplore the standard paradigms that attempt to render our world comfortable at the sake of human fulfilment.8

Strong transhumanism advocates see themselves engaged in a project, the purpose of which is to overcome the limits of human nature. Whether this is the foundational claim, or merely the central claim, is not clear. These limitations—one may describe them simply as features of human nature, as the idea of labelling them as limitations is itself to take up a negative stance towards them—concern appearance, human sensory capacities, intelligence, lifespan and vulnerability to harm. According to the extreme transhumanism programme, technology can be used to vastly enhance a person’s intelligence; to tailor their appearance to what they desire; to lengthen their lifespan, perhaps to immortality; and to reduce vastly their vulnerability to harm. This can be done by exploitation of various kinds of technology, including genetic engineering, cybernetics, computation and nanotechnology. Whether technology will continue to progress sufficiently, and sufficiently predictably, is of course quite another matter.

Advocates of transhumanism argue that recruitment or deployment of these various types of technology can produce people who are intelligent and immortal, but who are not members of the species Homo sapiens. Their species type will be ambiguous—for example, if they are cyborgs (part human, part machine)—or, if they are wholly machines, they will lack any common genetic features with human beings. A legion of labels covers this possibility; we find in Dyens’5 recently translated book a variety of cultural bodies, perhaps the most extreme being cyberpunks:

…a profound misalignment between existence and its manifestation. This misalignment produces bodies so transformed, so dissociated, and so asynchronized, that their only outcome is gross mutation. Cyberpunk bodies are horrible, strange and mysterious (think of Alien, Robocop, Terminator, etc.), for they have no real attachment to any biological structure. (p 75)

Perhaps a reasonable claim is encapsulated in the idea that such entities will be posthuman. The extent to which posthuman might be synonymous with transhumanism is not clear. Extreme transhumanists strongly support such developments.

At the other end of transhumanism is a much less radical project, which is simply the project to use technology to enhance human characteristics—for example, beauty, lifespan and resistance to disease. In this less extreme project, there is no necessary aspiration to shed human nature or human genetic constitution, just to augment it with technology where possible and where desired by the person.

Who is for transhumanism?

At present it seems to be a movement based mostly in North America, although there are some adherents from the UK. Among its most intellectually sophisticated proponents is Nick Boström. Perhaps the most outspoken supporters of transhumanism are people who see it simply as an issue of free choice. It may simply be the case that moderate transhumanists are libertarians at the core. In that case, transhumanism merely supplies an overt technological dimension to libertarianism. If certain technological developments are possible, which they as competent choosers desire, then they should not be prevented from acquiring the technologically driven enhancements they desire. One obvious line of criticism here may be in relation to the inequality that necessarily arises with respect to scarce goods and services distributed by market mechanisms.9 We will elaborate this point in the Transhumanism and slippery slopes section.

So, one group of people for the transhumanism project sees it simply as a way of improving their own life by their own standards of what counts as an improvement. For example, they may choose to purchase an intervention, which will make them more intelligent or even extend their life by 200 years. (Of course it is not self‐evident that everyone would regard this as an improvement.) A less vociferous group sees the transhumanism project as not so much bound to the expansion of autonomy (notwithstanding our criticism that will necessarily be effected only in the sphere of economic consumer choice) as one that has the potential to improve the quality of life for humans in general. For this group, the relationship between transhumanism and the general good is what makes transhumanism worthy of support. For the other group, the worth of transhumanism is in its connection with their own conception of what is good for them, with the extension of their personal life choices.

What can be said in its favour?

Of the many points for transhumanism, we note three. Firstly, transhumanism seems to facilitate two aims that have commanded much support. The use of technology to improve humans is something we pretty much take for granted. Much good has been achieved with low‐level technology in the promotion of public health. The construction of sewage systems, clean water supplies, etc, is all work to facilitate this aim and is surely good work, work which aims at, and in this case achieves, a good. Moreover, a large portion of the modern biomedical enterprise is another example of a project that aims at generating this good too.

Secondly, proponents of transhumanism say it presents an opportunity to plan the future development of human beings, the species Homo sapiens. Instead of this being left to the evolutionary process and its exploitation of random mutations, transhumanism presents a hitherto unavailable option: tailoring the development of human beings to an ideal blueprint. Precisely whose ideal gets blueprinted is a point that we deal with later.

Thirdly, in the spirit of work in ethics that makes use of a technical idea of personhood, namely the view that moral status is independent of membership of a particular species (or indeed any biological species), transhumanism presents a way in which moral status can be shown to be bound to intellectual capacity rather than to human embodiment as such, or to the vulnerability that accompanies embodiment (Harris, 1985).9a

What can be said against it?

Critics point to consequences of transhumanism which they find unpalatable. One possible consequence feared by some commentators is that, in effect, transhumanism will lead to the existence of two distinct types of being, the human and the posthuman. The human may be incapable of breeding with the posthuman and may be seen as having a much lower moral standing. Given that, as Buchanan et al9 note, much moral progress, in the West at least, is founded on the category of the human in terms of rights claims, if we no longer have a common humanity, what rights, if any, ought to be enjoyed by transhumans? This can be viewed either as a straightforward criticism (we poor humans would no longer be at the top of the evolutionary tree) or simply as a critical concern that invites further argumentation. We shall return to this idea in the final section, by way of identifying a deeper problem with the open‐endedness of transhumanism that builds on this recognition.

In the same vein, critics may argue that transhumanism will increase inequalities between the rich and the poor. The rich can afford to make use of transhumanism, but the poor will not be able to. Indeed, we may come to think of those who cannot afford it as deficient, as failing to achieve a new, heightened level of normal functioning.9 In the opposing direction, critical observers may say that transhumanism is in reality an irrelevance, as very few will be able to use the technological developments even if they ever manifest themselves. A further possibility is that transhumanism could lead to the extinction of humans and posthumans, for things are just as likely to turn out for the worse as for the better (eg, as proponents of the precautionary principle argue).

One of the deeper philosophical objections comes from a very traditional source. Like all such utopian visions, transhumanism rests on some conception of the good. So just as humanism is founded on the idea that humans are the measure of all things and that their fulfilment is to be found in the powers of reason extolled and extended in culture and education, so too transhumanism has a vision of the good, albeit one loosely shared. For one group of transhumanists, the good is the expansion of personal choice. Given that autonomy is so widely valued, why not remove the barriers to enhanced autonomy by various technological interventions? Theological critics especially, but not exclusively, object to what they see as the imperialising of autonomy. Elshtain10 lists the three c's: choice, consent and control. These, she asserts, are the dominant motifs of modern American culture. And there is, of course, an army of communitarians (Bellah et al,10a MacIntyre,10b Sandel,10c Taylor10d and Walzer10e) ready to provide support in general moral and political matters to this line of criticism. One extension of this line of transhumanist thinking is to align the valorisation of autonomy with economic rationality, for we may as well be motivated by economic concerns as by moral ones where the market is concerned. As noted earlier, only a small minority may be able to access this technology (despite Boström's naive disclaimer for democratic transhumanism), so the technology necessary for transhumanist transformations is unlikely to be prioritised in the context of artificially scarce public health resources. One other population attracted to transhumanism will be the elite sports world, fuelled by the media commercialisation complex—where mere mortals will get no more than a glimpse of the transhuman in competitive physical contexts. There may be something of a double‐binding character to this consumerism: the poor, at once removed from the possibility of such augmentation, pay (per view) for the pleasure of their envy.

Even if we argue that the good cannot be equated with what people choose simpliciter, it does not follow that we need to reject the requisite medical technology outright. Against the more moderate transhumanists, who see transhumanism as an opportunity to enhance the general quality of life for humans, it is nevertheless true that their position presupposes some conception of the good. What kinds of traits are best engineered into humans: disease resistance or parabolic hearing? And unsurprisingly, transhumanists disagree about precisely which "objective goods" to select for installation into humans or posthumans.

Some radical critics of transhumanism see it as a threat to morality itself.1,11 This is because they see morality as necessarily connected to the kind of vulnerability that accompanies human nature. Think of the idea of human rights and the power this has had in voicing concern about the plight of especially vulnerable human beings. As noted earlier, a transhuman may be thought to be beyond humanity, neither enjoying its rights nor bearing its obligations. Why would a transhuman be moved by appeals to human solidarity? Once the prospect of posthumanism emerges, the whole of morality is threatened, because the existence of human nature itself is under threat.

One further objection voiced by Habermas11 is that interfering with the process of human conception, and by implication human constitution, deprives humans of the “naturalness which so far has been a part of the taken‐for‐granted background of our self‐understanding as a species” and “Getting used to having human life biotechnologically at the disposal of our contingent preferences cannot help but change our normative self‐understanding” (p 72).

On this account, our self‐understanding would include, for example, our essential vulnerability to disease, ageing and death. Suppose the strong transhumanism project is realised: we are no longer thus vulnerable, and immortality is a real prospect. Nevertheless, conceptual caution must be exercised here—even transhumanists will be susceptible in the manner that Hobbes12 noted: even the strongest are vulnerable in their sleep. But the kind of vulnerability transhumanism seeks to overcome is of the internal kind (not Hobbes's external threats). We are reminded of Woody Allen's famous remark that he wanted to become immortal not by doing great deeds but simply by not dying. This would result in a radical change in our self‐understanding, one with inescapably normative elements that need to be examined. Most radically, this change in self‐understanding may take the form of a change in what we view as a good life. Hitherto, a human life would have been assumed to be finite. Transhumanists suggest that even now this may change with appropriate technology and the "right" motivation.

Do the changes in self‐understanding presented by transhumanists (and genetic manipulation) necessarily have to represent a change for the worse? As discussed earlier, it may be that the technology that generates the possibility of transhumanism can be used for the good of humans—for example, to promote immunity to disease or to increase quality of life. Is there really an intrinsic connection between acquisition of the capacity to bring about transhumanism and moral decline? Perhaps Habermas's point is that moral decline is simply more likely to occur once radical enhancement technologies are adopted as a practice, even if that practice is not intrinsically evil or morally objectionable. But how can this be known in advance? This raises the spectre of slippery slope arguments.

But before we discuss such slopes, let us note that the kind of approach (whether characterised as closed‐minded or sceptical) Boström seems to dislike is one he calls speculative. He dismisses as speculative the idea that offspring may think themselves lesser beings, commodifications of their parents’ egoistic desires (or some such). None the less, having pointed out the lack of epistemological standing of such speculation, he invites us to his own apparently more congenial position:

We might speculate, instead, that germ‐line enhancements will lead to more love and parental dedication. Some mothers and fathers might find it easier to love a child who, thanks to enhancements, is bright, beautiful, healthy, and happy. The practice of germ‐line enhancement might lead to better treatment of people with disabilities, because a general demystification of the genetic contributions to human traits could make it clearer that people with disabilities are not to blame for their disabilities and a decreased incidence of some disabilities could lead to more assistance being available for the remaining affected people to enable them to live full, unrestricted lives through various technological and social supports. Speculating about possible psychological or cultural effects of germ‐line engineering can therefore cut both ways. Good consequences no less than bad ones are possible. In the absence of sound arguments for the view that the negative consequences would predominate, such speculations provide no reason against moving forward with the technology. Ruminations over hypothetical side effects may serve to make us aware of things that could go wrong so that we can be on the lookout for untoward developments. By being aware of the perils in advance, we will be in a better position to take preventive countermeasures. (Boström, 2003, p 498)

Following Boström's3 speculation, then, what grounds for hope exist? Beyond speculation, what kinds of arguments does Boström offer? Well, most people may think that the burden of proof should fall to the transhumanists. Not so, according to Boström. Assuming the likely enormous benefits, he turns the tables on this intuition—not by argument but by skilful rhetorical speculation. We quote at length for accuracy of representation (emphasis added):

Only after a fair comparison of the risks with the likely positive consequences can any conclusion based on a cost‐benefit analysis be reached. In the case of germ‐line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. By contrast, uncovering subtle and non‐trivial ways in which manipulating our genome could undermine deep values is philosophically a lot more challenging. But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. To achieve a significant enhancement of human capacities would be to embark on the transhuman journey of exploration of some of the modes of being that are not accessible to us as we are currently constituted, possibly to discover and to instantiate important new values. On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor. (Boström,3 pp 498–9)

Now one way in which such a balance of reason may be had is in the idea of a slippery slope argument. We now turn to that.

Transhumanism and slippery slopes

A proper assessment of transhumanism requires consideration of the objection that acceptance of the main claims of transhumanism will place us on a slippery slope. Yet, paradoxically, both proponents and detractors of transhumanism may exploit slippery slope arguments in support of their position. It is necessary, therefore, to set out the various arguments that fall under this title so that we can better characterise arguments for and against transhumanism. We shall examine three such attempts13,14,15 but argue that the arbitrary slippery slope15 may undermine all versions of transhumanism, although not every enhancement proposed by its proponents.

Schauer13 offers the following essentialist analysis of slippery slope arguments. A “pure” slippery slope is one where a “particular act, seemingly innocuous when taken in isolation, may yet lead to a future host of similar but increasingly pernicious events”. Abortion and euthanasia are classic candidates for slippery slope arguments in public discussion and policy making. Against this, however, there is no reason to suppose that the future events (acts or policies) down the slope need to display similarities—indeed we may propose that they will lead to a whole range of different, although equally unwished for, consequences. The vast array of enhancements proposed by transhumanists would not be captured under this conception of a slippery slope because of their heterogeneity. Moreover, as Sternglantz16 notes, Schauer undermines his case when arguing that greater linguistic precision undermines the slippery slope and that indirect consequences often bolster slippery slope arguments. It is as if the slippery slopes would cease in a world with greater linguistic precision or when applied only to direct consequences. These views do not find support in the later literature. Schauer does, however, identify three non‐slippery slope arguments where the advocate’s aim is (a) to show that the bottom of a proposed slope has been arrived at; (b) to show that a principle is excessively broad; (c) to highlight how granting authority to X will make it more likely that an undesirable outcome will be achieved. Clearly (a) cannot properly be called a slippery slope argument in itself, while (b) and (c) often have some role in slippery slope arguments.

The excessive breadth principle can be subsumed under Bernard Williams’s distinction between slippery slope arguments with (a) horrible results and (b) arbitrary results. According to Williams, the nature of the bottom of the slope allows us to determine which category a particular argument falls under. Clearly, the most common form is the slippery slope to a horrible result argument. Walton14 goes further in distinguishing three types: (a) thin end of the wedge or precedent arguments; (b) Sorites arguments; and (c) domino‐effect arguments. Importantly, these arguments may be used both by antagonists and also by advocates of transhumanism. We shall consider the advocates of transhumanism first.

In thin end of the wedge slippery slopes, allowing P will set a precedent that will allow further precedents (Pn) taken to an unspecified problematic terminus. Is it necessary that the end point be bad? Of course, this is the typical linguistic meaning of the phrase "slippery slope". Nevertheless, we may turn the tables here and argue that slopes may be viewed positively too.17 Perhaps a new phrase will be required to capture ineluctable slides (ascents?) to such end points. This would be somewhat analogous to the ideas of vicious and virtuous cycles. So transhumanists could argue that, once the artificial generation of life through technologies of in vitro fertilisation was thought permissible, the slope was foreseeable, and transhumanists are doing no more than extending that life‐creating and fashioning impulse.

In Sorites arguments, the inability to draw clear distinctions has the effect that allowing P will not allow us to consistently deny Pn. This slope follows the form of the Sorites paradox, where taking a grain of sand from a heap does not prevent our recognising or describing the heap as such, even though it is not identical with its former state. At the heart of the problem with such arguments is the idea of conceptual vagueness. Yet the logical distinctions used by philosophers are often inapplicable in the real world.15,18 Transhumanists may well seize on this vagueness and apply a Sorites argument as follows: as therapeutic interventions are currently morally permissible, and there is no clear distinction between treatment and enhancement, enhancement interventions are morally permissible too. They may ask whether we can really distinguish categorically between the added functionality of certain prosthetic devices and sonar senses.
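The Sorites‐type slope just described can be set out schematically as follows. The notation is ours, not the authors': Perm(P) abbreviates "intervention P is morally permissible", and the subscripted sequence runs from clearly therapeutic interventions towards transhumanist enhancements.

```latex
% Sorites-style slippery slope schema (our reconstruction, not the authors' own formulation)
\begin{align*}
&\text{(1)}\quad \mathrm{Perm}(P_1)
  && \text{(the initial, therapeutic intervention is permissible)} \\
&\text{(2)}\quad \forall n\,\bigl(\mathrm{Perm}(P_n) \rightarrow \mathrm{Perm}(P_{n+1})\bigr)
  && \text{(no clear treatment/enhancement boundary between adjacent cases)} \\
&\text{(3)}\quad \therefore\ \mathrm{Perm}(P_k)\ \text{for arbitrarily large } k
  && \text{(by repeated modus ponens)}
\end{align*}
```

The reply sketched later in the article amounts to rejecting premise (2): "therapy" and "enhancement" are nowhere near as open‐textured as "heap", so some step in the sequence may mark a genuine moral distinction even if we cannot say precisely which.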

In domino‐effect arguments, the domino conception of the slippery slope, we have what others often refer to as a causal slippery slope.19 Once P is allowed, a causal chain will be effected allowing Pn and so on to follow, which will precipitate increasingly bad consequences.

In what ways can slippery slope arguments be used against transhumanism? What is wrong with transhumanism? Or, better, is there a point at which we can say transhumanism is objectionable? One particular strategy adopted by proponents of transhumanism falls clearly under the thin end of the wedge conception of the slippery slope. Although some aspects of their ideology seem aimed at unqualified goods, there seems to be no limit to the aspirations of transhumanism, as its proponents cite the powers of other animals and substances as potential modifications for the transhumanist. Although we can admire the sonic capacities of the bat, the elastic strength of lizards' tongues and the durability of Kevlar in contrast with traditional construction materials used in the body, their transplantation into humans is, to use Kass's celebrated label, "repugnant" (Kass, 1997).19a

Although not all transhumanists would support such extreme enhancements (if that is indeed what they are), less radical advocates put justifications along therapeutic lines up front, with the more Promethean aims less explicitly advertised. We can find many examples of this manoeuvre. Take, for example, the Cognitive Enhancement Research Institute in California. Prominently displayed on its website front page we read, "Do you know somebody with Alzheimer's disease? Click to see the latest research breakthrough." The mode is simple: treatment by the front entrance, enhancement by the back door. Borgmann,20 in his discussion of the uses of technology in modern society, observed precisely this argumentative strategy more than 20 years ago:

The main goal of these programs seems to be the domination of nature. But we must be more precise. The desire to dominate does not just spring from a lust of power, from sheer human imperialism. It is from the start connected with the aim of liberating humanity from disease, hunger, and toil and enriching life with learning, art and athletics.

Who would want to deny people treatment for viral diseases that can be genetically addressed? Would we want to draw the line at the transplantation of non‐human capacities (sonar pathfinding)? Or at an in vivo fibre optic communications backbone, or anti‐degeneration powers? (These would have to be non‐human by hypothesis.) Or should we consider the scope of technological enhancements that one chief transhumanist, Natasha Vita‐More,21 propounds:

A transhuman is an evolutionary stage from being exclusively biological to becoming post‐biological. Post‐biological means a continuous shedding of our biology and merging with machines. (…) The body, as we transform ourselves over time, will take on different types of appearances and designs and materials. (…)

For hiking a mountain, I’d like extended leg strength, stamina, a skin‐sheath to protect me from damaging environmental aspects, self‐moisturizing, cool‐down capability, extended hearing and augmented vision (Network of sonar sensors depicts data through solid mass and map images onto visual field. Overlay window shifts spectrum frequencies. Visual scratch pad relays mental ideas to visual recognition bots. Global Satellite interface at micro‐zoom range).

For a party, I’d like an eclectic look ‐ a glistening bronze skin with emerald green highlights, enhanced height to tower above other people, a sophisticated internal sound system so that I could alter the music to suit my own taste, memory enhance device, emotional‐select for feel‐good people so I wouldn’t get dragged into anyone’s inappropriate conversations. And parabolic hearing so that I could listen in on conversations across the room if the one I was currently in started winding down.

Notwithstanding the difficulty of bringing transhumanism together under one movement, the sheer variety of proposals contained within Vita‐More's catalogue alone means that we cannot determinately point to a precise station at which we can say, "Here, this is the end we said things would naturally progress to." But does this pose a problem? Well, it certainly makes it difficult to specify exactly a "horrible result" that is supposed to lie at the bottom of the slope. Equally, it is extremely difficult to say that if we allow precedent X, it will allow practices Y or Z to follow, as it is not clear how these practices Y or Z are (if at all) connected with the precedent X. So it is not clear that a precedent‐setting slippery slope can strictly be used in every case against transhumanism, although it may be applicable in some.

Nevertheless, we contend, in contrast with Boström, that the burden of proof falls to the transhumanist. Consider, in this light, a Sorites‐type slope. The transhumanist would have to show that the relationship between therapeutic practices and enhancements is indeed transitive. We know night from day without being able to specify exactly when one becomes the other. So simply because we cannot determine a precise distinction between, say, genetic treatments G1, G2 and G3, and transhumanist enhancements T1, T2 and so on, it does not follow that there are no important moral distinctions between G1 and T20. According to Williams,15 this kind of indeterminacy arises because of the conceptual vagueness of certain terms. Yet the indeterminacy of so open a predicate as "heap" is not equally true of "therapy" or "enhancement". The latitude they permit is nowhere near so wide.

Instead of objecting to Pn on the grounds that Pn is morally objectionable (ie, by depicting a horrible result), we may, after Williams, object that the slide from P to Pn is simply morally arbitrary, when it ought not to be. Here we may say, without specifying a horrible result, that it would be difficult to know what, in principle, could ever be objected to. And this is, quite literally, what is troublesome. It seems to us that this criticism applies to all categories of transhumanism, although not necessarily to all enhancements proposed by them. Clearly, the somewhat loose identity of the movement—and the variations between strong and moderate versions—makes it difficult to sustain this argument unequivocally. Still, the transhumanist may be justified in asking, "What is wrong with arbitrariness?" Let us consider one brief example. In many aspects of our lives we share the intuition that, in the absence of good reasons, we ought not to discriminate among people arbitrarily. Healthcare may be considered precisely one such case. Given the ever‐increasing demand for public healthcare services and products, it may be argued that access to them ought typically to be governed by publicly disputable criteria such as clinical need or potential benefit, as opposed to individual choices of an arbitrary or subjective nature. And nothing in transhumanism seems to allow for such objective dispute, let alone prioritisation. Of course, transhumanists such as More feel no such disquietude; his phrase "No more timidity" is a typical transhumanist slogan. We applaud advances in therapeutic medical technologies, from new genetically based organ regeneration to more familiar prosthetic devices. Here the ends of the interventions are clearly medically defined and the means closely regulated. This is what prevents transhumanists from legitimately adopting a Sorites‐type slippery slope argument.
But in the absence of a telos, of clearly and substantively specified ends (beyond the mere banner of enhancement), we suggest that the public, medical professionals and bioethicists alike ought to resist the potentially open‐ended transformations of human nature. For if all transformations are in principle enhancements, then surely none are; the very application of the word may become redundant. Thus one strong argument against transhumanism generally—the arbitrary slippery slope—presents a challenge to transhumanism: to show that all of what are described as transhumanist enhancements are imbued with positive normative force and are not merely technological extensions of libertarianism, whose conception of the good is merely an extension of individual choice and consumption.

Limits of transhumanist arguments for medical technology and practice

Already we have seen the misuse of a host of therapeutically designed drugs by non‐therapeutic populations seeking enhancement. Consider the non‐therapeutic use of human growth hormone in non‐clinical populations. Such is the present perception of height as a positional good in society that Cuttler et al22 report that the proportion of doctors who recommended growth hormone treatment of short, non‐growth‐hormone‐deficient children ranged from 1% to 74%. This is despite contrary guidance in the professional literature, such as that of the Pediatric Endocrine Society, and considerable doubt about its efficacy.23,24 Moreover, evidence supports the view that recreational bodybuilders will use the technology, given the evidence of their use and misuse of steroids and other biotechnological products.25,26 Finally, in the sphere of elite sport, which so valorises embodied capacities that may be found in greater degree, precision and sophistication elsewhere in the animal kingdom or in the computer laboratory, biomedical enhancers may latch onto genetically determined capacities and adopt or adapt them for their own commercially driven ends.

The arguments and examples presented here do no more than warn us of enhancement ideologies, such as transhumanism, which seek to predicate their futuristic agendas on the bedrock of medical technological progress aimed at therapeutic ends, secondarily extending it to loosely defined enhancement ends. In discussion and in the bioethical literature, the future of genetic engineering is often challenged by slippery slope arguments that lead policy and practice to a horrible result. Instead of pointing to the undesirability of the ends to which transhumanism leads, we have pointed out its failure to specify a telos beyond the slogans of "overcoming timidity" or Boström's3 exhortation that the passive acceptance of ageing is an example of "reckless and dangerous barriers to urgently needed action in the biomedical sphere".

We propose that greater care be taken to distinguish the slippery slope arguments used in the emotionally loaded exhortations of transhumanism, so as to come to a more judicious perspective on the technologically driven agenda for biomedical enhancement. Perhaps we would do better to address those other all‐too‐human frailties, such as violent aggression and wanton self‐harming, before we turn too readily to the richer imaginations of biomedical technologists.


Competing interests: None.


1. Fukuyama F. Transhumanism. Foreign Policy 2004;124:42–4.
2. Boström N. The fable of the dragon tyrant. J Med Ethics 2005;31:231–7.
3. Boström N. Human genetic enhancements: a transhumanist perspective. J Value Inquiry 2004;37:493–506.
4. Boström N. Transhumanist values. http://www.nickbostrom.com/ethics/values.html (accessed 19 May 2005).
5. Dyens O. The evolution of man: technology takes over. In: Metal and flesh. Trans Bibbee EJ. London: MIT Press, 2001.
6. World Transhumanist Association (accessed 7 Apr 2006).
7. More M. Transhumanism: towards a futurist philosophy. 1996 (accessed 20 Jul 2005).
8. More M. 2005 (accessed 13 Jul 2005).
9. Buchanan A, Brock DW, Daniels N, et al. From chance to choice: genetics and justice. Cambridge: Cambridge University Press, 2000.
9a. Harris J. The value of life. London: Routledge, 1985.
10. Elshtain B. The body and the quest for control. In: Is human nature obsolete? Cambridge, MA: MIT Press, 2004:155–74.
10a. Bellah RN, et al. Habits of the heart: individualism and commitment in American life. Berkeley: University of California Press, 1996.
10b. MacIntyre AC. After virtue. 2nd edn. London: Duckworth, 1985.
10c. Sandel M. Liberalism and the limits of justice. Cambridge: Cambridge University Press, 1982.
10d. Taylor C. The ethics of authenticity. Boston: Harvard University Press, 1982.
10e. Walzer M. Spheres of justice. New York: Basic Books, 1983.
11. Habermas J. The future of human nature. Cambridge: Polity, 2003.
12. Hobbes T. Leviathan. Oakeshott M, ed. London: Macmillan, 1962.
13. Schauer F. Slippery slopes. Harvard Law Rev 1985;99:361–83.
14. Walton DN. Slippery slope arguments. Oxford: Clarendon, 1992.
15. Williams BAO. Which slopes are slippery? In: Lockwood M, ed. Making sense of humanity. Cambridge: Cambridge University Press, 1995:213–23.
16. Sternglantz R. Raining on the parade of horribles: of slippery slopes, faux slopes, and Justice Scalia's dissent in Lawrence v Texas. Univ Pa Law Rev 2005;153:1097–120.
17. Schubert L. Ethical implications of pharmacogenetics‐do slippery slope arguments matter? Bioethics 2004;18:361–78.
18. Lamb D. Down the slippery slope. London: Croom Helm, 1988.
19. Den Hartogh G. The slippery slope argument. In: Kuhse H, Singer P, eds. Companion to bioethics. Oxford: Blackwell, 2005:280–90.
19a. Kass L. The wisdom of repugnance. New Republic 1997 Jun 2:17–26.
20. Borgmann A. Technology and the character of everyday life. Chicago: University of Chicago Press, 1984.
21. Vita‐More N. Who are transhumans? 2000 (accessed 7 Apr 2006).
22. Cuttler L, Silvers JB, Singh J, et al. Short stature and growth hormone therapy: a national study of physician recommendation patterns. JAMA 1996;276:531–7.
23. Vance ML, Mauras N. Growth hormone therapy in adults and children. N Engl J Med 1999;341:1206–16.
24. Anon. Guidelines for the use of growth hormone in children with short stature: a report by the Drug and Therapeutics Committee of the Lawson Wilkins Pediatric Endocrine Society. J Pediatr 1995;127:857–67.
25. Grace F, Baker JS, Davies B. Anabolic androgenic steroid (AAS) use in recreational gym users. J Subst Use 2001;6:189–95.
26. Grace F, Baker JS, Davies B. Blood pressure and rate pressure product response in males using high‐dose anabolic androgenic steroids (AAS). J Sci Med Sport 2003;6:307–12.

Articles from Journal of Medical Ethics are provided here courtesy of BMJ Group

This article can also be found on the National Center for Biotechnology Information (NCBI) website at

Humans 2.0 with Jason Silva

This is one of the Shots of Awe videos created by Jason Silva.  It's called HUMAN 2.0.  I don't think a description is needed here, since all the Shots of Awe videos are short and sweet.

Runtime: 2:15

Video Info:

Published on Dec 2, 2014

“Your trivial-seeming self tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process.” – Ethan Zuckerman paraphrasing Kevin Kelly

Steven Johnson
“Chance favors the connected mind.”…

Additional footage courtesy of Monstro Design and

For more information on Norton security, please go here:

Join Jason Silva every week as he freestyles his way into the complex systems of society, technology and human existence and discusses the truth and beauty of science in a form of existential jazz. New episodes every Tuesday.

Watch More Shots of Awe on TestTube

Subscribe now!…

Jason Silva on Twitter

Jason Silva on Facebook

Jason Silva on Google+…

This video can also be found at

How Much Longer Before Our First AI Catastrophe? by George Dvorsky

This is an article called How Much Longer Before Our First AI Catastrophe?  Pretty pessimistic-sounding title, right?  It’s actually not a bad article.  The primary focus is not on strong AI, as you might be assuming, but on weak AI.  My philosophy is the same as always on this one: be aware and be smart; fear will only cause problems.

How Much Longer Before Our First AI Catastrophe?

What will happen in the days after the birth of the first true artificial intelligence? If things continue apace, this could prove to be the most dangerous time in human history. It will be an era of weak and narrow artificial intelligence, a highly dangerous combination that could wreak tremendous havoc on human civilization. Here’s why we’ll need to be ready.

First, let’s define some terms. The Technological Singularity, which you’ve probably heard of before, is the advent of recursively improving greater-than-human artificial general intelligence (or artificial superintelligence), or the development of strong AI (human-like artificial general intelligence).

But this particular concern has to do with the rise of weak AI — expert systems that match or exceed human intelligence in a narrowly defined area, but not in broader areas. As a consequence, many of these systems will work outside of human comprehension and control.

But don’t let the name fool you; there’s nothing weak about the kind of damage it could do.

Before the Singularity

The Singularity is often misunderstood as AI that’s simply smarter than humans, or the rise of human-like consciousness in a machine. Neither is the case. To a non-trivial degree, much of our AI already exceeds human capacities. It’s just not sophisticated and robust enough to do any significant damage to our infrastructure. The trouble will start to come when, in the case of the Singularity, a highly generalized AI starts to iteratively improve upon itself.

And indeed, when the Singularity hits, it’ll be, in the words of mathematician I. J. Good, an “intelligence explosion,” and it will indeed hit us like a bomb. Human control will forever be relegated to the sidelines, in whatever form that might take.

A pre-Singularity AI disaster or catastrophe, on the other hand, will be containable. But just barely. It’ll likely arise from an expert system or super-sophisticated algorithm run amok. And the worry is not so much its power — which is definitely a significant part of the equation — but the speed at which it will inflict the damage. By the time we have a grasp on what’s going on, something terrible may have happened.

Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots, take control of a factory or military installation, or unleash some kind of propagating blight that will be difficult to get rid of (whether in the digital realm or the real world). The possibilities are frighteningly endless.

Our infrastructure is becoming increasingly digital and interconnected, and by consequence, increasingly vulnerable. In a few decades, it will be as brittle as glass, with the bulk of human activity dependent upon it.

And it is indeed a possibility. The signs are all there.

Accidents Will Happen

Back in 1988, a Cornell University student named Robert Morris scripted a software program that could measure the size of the Internet. To make it work, he equipped it with a few clever tricks to help it along its way, including an ability to exploit known vulnerabilities in popular utility programs running on UNIX. This allowed the program to break into those machines and copy itself, thus infecting those systems.

On November 2, 1988, Morris introduced his program to the world. It quickly spread to thousands of computers, disrupting normal activities and Internet connectivity for days. Estimates put the cost of the damage anywhere between $10,000 and $100,000. Dubbed the “Morris Worm,” it’s considered the first worm in human history — one that prompted DARPA to fund the establishment of the CERT/CC at Carnegie Mellon University to anticipate and respond to this new kind of threat.

As for Morris, he was charged under the Computer Fraud and Abuse Act and given a $10,000 fine.

But the takeaway from the incident was clear: Despite our best intentions, accidents will happen. And as we continue to develop and push our technologies forward, there’s always the chance that they will operate outside our expectations, and even outside our control.

Down to the Millisecond

Indeed, unintended consequences are one thing; containability is quite another. Our technologies are increasingly operating at levels beyond our real-time capacities. The best example of this comes from the world of high-frequency stock trading (HFT).

In HFT, securities are traded on a rapid-fire basis through the use of powerful computers and algorithms. A single investment position can last for a few minutes, or a few milliseconds; there can be as many as 500 transactions made in a single second. This type of computer trading can result in thousands upon thousands of transactions a day, each and every one of them decided by super-sophisticated scripts. The human traders involved (such as they are) just sit back and watch, incredulous at the machinations happening at breakneck speed.

“Back in the day, I used to be able to explain to a client how their trade was executed. Technology has made the trade process so convoluted and complex that I can’t do that any more,” noted PNC Wealth Management’s Jim Dunigan in a Markets Media article.

Clearly, the ability to assess market conditions and react quickly is a valuable asset to have. According to a 2009 study, HFT firms accounted for 60 to 73% of all U.S. equity trading volume; as of last year that number had dropped to about 50%, though HFT is still considered a highly profitable form of trading.

To date, the most significant single incident involving HFT came at 2:45 p.m. on May 6, 2010. For a period of about five minutes, the Dow Jones Industrial Average plummeted over 1,000 points (approximately 9%); for a few minutes, $1 trillion in market value vanished. About 600 points were recovered 20 minutes later. Now known as the 2010 Flash Crash, it was the second-largest point swing and the biggest one-day point decline in the index’s history up to that time.

The incident prompted an investigation led by Gregg E. Berman of the U.S. Securities and Exchange Commission (SEC), conducted jointly with the Commodity Futures Trading Commission (CFTC). The investigators considered a number of theories (of which there are many, some quite complex), but their primary concern was the impact of HFT. They determined that the collective efforts of the algorithms exacerbated price declines: by selling aggressively, the trader-bots worked to eliminate their positions and withdraw from the market in the face of uncertainty.

The following year, an independent study concluded that technology played an important role, but that it wasn’t the entire story. Looking at the Flash Crash in detail, the authors argued that it was “the result of the new dynamics at play in the current market structure,” and the role played by “order toxicity.” At the same time, however, they noted that HFT traders exhibited trading patterns inconsistent with the traditional definition of market making, and that they were “aggressively [trading] in the direction of price changes.”

HFT is also playing an increasing role in currencies and commodities, making up about 28% of the total volume in futures markets. Not surprisingly, this area has become vulnerable to mini crashes. Following incidents involving the trading of cocoa and sugar, the Wall Street Journal highlighted the growing concerns:

“The electronic platform is too fast; it doesn’t slow things down” like humans would, said Nick Gentile, a former cocoa floor trader. “It’s very frustrating” to go through these flash crashes, he said…

…The same is happening in the sugar market, provoking outrage within the industry. In a February letter to ICE, the World Sugar Committee, which represents large sugar users and producers, called algorithmic and high-speed traders “parasitic.”

Just how culpable HFT is in the phenomenon of flash crashes is an open question, but it’s clear that the trading environment is changing rapidly. Market analysts now speak in terms of “microstructures,” trading “circuit breakers,” and the “VPIN Flow Toxicity metric.” It’s also difficult to predict how serious future flash crashes could become. If sufficient measures aren’t put in place to halt these events when they happen, and assuming HFT is scaled up in terms of market breadth, scope, and speed, it’s not unreasonable to imagine events in which massive and irrecoverable losses occur. Indeed, some analysts are already predicting systems that can support 100,000 transactions per second.
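
The trading “circuit breakers” analysts speak of can be sketched as a rolling-window price monitor. The sketch below is a toy model with invented parameters (a 5% drawdown threshold over the last 300 observed prices); real mechanisms, such as exchange-mandated trading pauses, are far more involved:

```python
from collections import deque

class CircuitBreaker:
    """Halt trading when price falls more than `threshold` (a fraction)
    from the recent peak within a rolling window of the last `window`
    observed prices. A toy model of a trading pause, not a real exchange rule."""

    def __init__(self, threshold=0.05, window=300):
        self.threshold = threshold
        self.prices = deque(maxlen=window)  # old prices fall out automatically
        self.halted = False

    def observe(self, price):
        self.prices.append(price)
        peak = max(self.prices)
        drawdown = (peak - price) / peak
        if drawdown >= self.threshold:
            self.halted = True  # pause matching until humans intervene
        return self.halted

# Simulated tape: steady prices, then a flash-crash-style plunge.
breaker = CircuitBreaker(threshold=0.05, window=300)
tape = [100.0] * 50 + [99.0, 97.0, 94.5, 90.0]
halts = [breaker.observe(p) for p in tape]
print(halts[-1])  # True: the plunge trips the 5% breaker
```

A real venue would track drawdowns per security against regulator-defined reference prices, but the core idea is the same: a hard, automatic stop that operates at machine speed, since no human can react in milliseconds.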

More to the point, HFT and flash crashes may never create an economic disaster, but they are a potent example of how our other mission-critical systems may reach unprecedented tempos. As we defer critical decision-making to our technological artifacts, and as those artifacts increase in power and speed, we increasingly find ourselves outside the locus of control and comprehension.

When AI Screws Up, It Screws Up Badly

No doubt, we are already at the stage when computers exceed our ability to understand how and why they do the things they do. One of the best examples of this is IBM’s Watson, the expert computer system that trounced the world’s best Jeopardy players in 2011. To make it work, Watson’s developers scripted a series of programs that, when pieced together, created an overarching game-playing system. And they’re not entirely sure how it works.

David Ferrucci, the lead researcher on the project, put it this way:

Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.

Which is actually quite disturbing. Not so much because we don’t understand why it succeeds, but because we don’t necessarily understand why it fails. By extension, we can’t understand or anticipate the nature of its mistakes.

For example, Watson had one memorable gaffe that clearly demonstrated how, when an AI fails, it fails big time. During the Final Jeopardy portion, it was asked, “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Watson responded with, “What is Toronto?”

Given that Toronto’s Billy Bishop Airport is named after a war hero, that was not a terrible guess. But the reason it was such a blatant mistake is that the category was “U.S. Cities.” Toronto, not being a U.S. city, couldn’t possibly have been the correct answer.

Again, this is the important distinction that needs to be made when addressing the potential for a highly generalized AI. Weak, narrow systems are extremely powerful, but they’re also extremely stupid; they’re completely lacking in common sense. Given enough autonomy and responsibility, a failed answer or a wrong decision could be catastrophic.

As another example, take the recent initiative to give robots their very own Internet. The hope is that, by sharing information amongst themselves, these bots can learn without having to be explicitly programmed. A problem arises, however, when instructions for a task are mismatched — the result of an AI error. A stupid robot, acting without common sense, would simply execute the task even when the instructions are wrong. In another 30 to 40 years, one can only imagine the kind of damage that could be done, either accidentally, or by a malicious script kiddie.
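
The failure mode described above — a robot blindly executing instructions fetched from a shared knowledge base — can be made concrete with a short sketch. The task format, the `Robot` class, and the sanity checks are all invented for illustration:

```python
class Robot:
    """A minimal stand-in for a physical robot (hypothetical API)."""
    capabilities = {"grasp", "place"}
    max_safe_force = 10.0  # newtons; an arbitrary illustrative limit

    def do(self, action, target):
        print(f"{action} {target}")

def execute_task(robot, task):
    """Naive executor: runs whatever the shared network hands it."""
    robot.do(task["action"], task["target"])

def execute_task_checked(robot, task):
    """Same executor with minimal sanity checks bolted on.
    Real validation would be far richer; this only shows the principle."""
    if task["action"] not in robot.capabilities:
        raise ValueError(f"unknown action: {task['action']}")
    if task.get("force", 0) > robot.max_safe_force:
        raise ValueError("instruction exceeds safe force limit")
    robot.do(task["action"], task["target"])

robot = Robot()
# A corrupted or mismatched instruction arriving from the robot Internet:
bad_task = {"action": "grasp", "target": "egg", "force": 500.0}
try:
    execute_task_checked(robot, bad_task)
except ValueError as e:
    print("rejected:", e)
```

The naive executor would crush the egg without hesitation; the checked one refuses. The point of the article stands either way: every check has to be anticipated by a human, and the robot itself supplies no common sense.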

Moreover, because expert systems like Watson will soon be able to conjure answers to questions that are beyond our comprehension, we won’t always know when they’re wrong. And that is a frightening prospect.

The Shape of Things to Come

It’s difficult to know exactly how, when, or where the first true AI catastrophe will occur, but we’re still several decades off. Our infrastructure is still not integrated or robust enough to allow for something really terrible to happen. But by the 2040s (if not sooner), our highly digital and increasingly interconnected world will be susceptible to these sorts of problems.

By that time, our power systems (electric grids, nuclear plants, etc.) could be vulnerable to errors and deliberate attacks. Already, the U.S. has been able to infiltrate the control software running centrifuges in Iranian nuclear facilities with its Stuxnet program, an incredibly sophisticated computer virus (if you can call it that). This program represents the future of cyber-espionage and cyber-weaponry, and it’s a pale shadow of things to come.

In the future, more advanced versions will likely be able not just to infiltrate enemy or rival systems, but to reverse-engineer them, inflict terrible damage, or even take control. But as the Morris Worm incident showed, it may be difficult to predict the downstream effects of these actions, particularly when dealing with autonomous, self-replicating code. It could also result in an AI arms race, with each side developing programs and counter-programs to get an edge on the other side’s technologies.

And though it might seem like the premise of a scifi novel, an AI catastrophe could also involve the deliberate or accidental takeover of any system running off an AI. This could include integrated military equipment, self-driving vehicles (including airplanes), robots, and factories. Should something like this occur, the challenge will be to disable the malign script (or source program) as quickly as possible, which may not be easy.

More conceptually, and in the years immediately preceding the onset of uncontainable self-improving machine intelligence, a narrow AI could be used (again, either deliberately or unintentionally) to execute upon a poorly articulated goal. The powerful system could over-prioritize a certain aspect, or grossly under-prioritize another. And it could make sweeping changes in the blink of an eye.

Hopefully, if and when this does happen, it will be containable and relatively minor in scope. But it will likely serve as a call to action in anticipation of more catastrophic episodes. As for now, and in consideration of these possibilities, we need to ensure that our systems are secure, smart, and resilient.

Images: Shutterstock/agsandrew; Washington Times; TIME, Potapov Alexander/Shutterstock.

This article can also be found on the io9 website at

Just Another Definition of “The Singularity”

There are plenty of definitions of the singularity out there and I don’t plan to post any more of these, but I thought this one (from WhatIs) was worth having on Dawn of Giants.

Singularity (the)

Part of the Nanotechnology glossary:

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.
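
That distinction between incremental and exponential growth is easy to make concrete. Assuming (purely for illustration) that an externally improved system gains a fixed amount of capability per cycle while a self-improving system gains a fixed fraction of its current capability, the gap becomes astronomical within a hundred cycles:

```python
def incremental(start, gain, cycles):
    # Capability grows by a fixed amount each improvement cycle,
    # as when human engineers add features at a steady pace.
    c = start
    for _ in range(cycles):
        c += gain
    return c

def exponential(start, rate, cycles):
    # Capability grows by a fixed fraction of itself each cycle,
    # as when the system directs its own improvement.
    c = start
    for _ in range(cycles):
        c *= 1 + rate
    return c

# Same starting capability, same per-cycle "effort" of 50%, 100 cycles.
print(incremental(1.0, 0.5, 100))   # 51.0
print(exponential(1.0, 0.5, 100))   # roughly 4e17
```

The specific numbers are arbitrary; the shape of the curves is the point, and it is why Singularity theorists treat self-directed improvement as qualitatively different from ordinary engineering progress.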

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

“Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.”

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different.  A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality — which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term the Singularity in this context in the 1980s, when he used it in reference to the British mathematician I.J. Good’s concept of an “intelligence explosion” brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.


This article can also be found at