Don’t Fear Artificial Intelligence by Ray Kurzweil

This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence.  Basically, Kurzweil’s stance is that “technology is a double-edged sword” and that it always has been, but that’s no reason to abandon the research.  Kurzweil also states that “Virtually everyone’s mental capabilities will be enhanced by it within a decade.”  I hope it makes people smarter and not just more intelligent!

Don’t Fear Artificial Intelligence

Retro toy robot
Getty Images

Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller “How to Create a Mind.”

Two great thinkers see danger in AI. Here’s how to make it safe.

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.



This article can also be found here.


The Hedonistic Imperative – David Pearce

This is a video of David Pearce talking about the Hedonistic Imperative.  In the video (The Hedonistic Imperative – David Pearce), Pearce discusses what he calls “paradise engineering”. I like Pearce’s response to the old myth that we need suffering to appreciate pleasure (about 8 minutes in).  Have a look…

Runtime: 17:57

This video can also be found at

Video Info:

Published on Mar 25, 2014

Filmed at the Botanical Gardens in Melbourne Australia – The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life. The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and morally urgent. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture – a motivational system based on heritable gradients of bliss. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. It is predicted that the world’s last unpleasant experience will be a precisely dateable event. Two hundred years ago, powerful synthetic pain-killers and surgical anesthetics were unknown. The notion that physical pain could be banished from most people’s lives would have seemed absurd. Today most of us in the technically advanced nations take its routine absence for granted. The prospect that what we describe as psychological pain, too, could ever be banished is equally counter-intuitive. The feasibility of its abolition turns its deliberate retention into an issue of social policy and ethical choice.





Making Small Stuff Do Big Things: TEDxHouston 2011 – Wade Adams – Nanotechnology and Energy

This is Wade Adams delivering a TEDx presentation called TEDxHouston 2011 – Wade Adams – Nanotechnology and Energy.  I remember reading something at the MIT News website a few years ago about gold nanorods using gamma rays to destroy cancer cells (ok, just looked it up – I was close… kinda).  Let me just say that nanotech is finally becoming a reality.  Let’s just all agree not to make gray goo, yeah?


Runtime: 25:20

This video can also be found at

Video Info:

Uploaded on Aug 6, 2011

Dr. Wade Adams is the Director of the Smalley Institute for Nanoscale Science and Technology at Rice University. The Institute is devoted to the development of new innovations on the nanometer scale. Some of the institute’s current thrusts include research in carbon nanotubes, medical applications of nanoparticles, nanoporous membranes, molecular computing, and nanoshell diagnostic and therapeutic applications.

Wade was appointed a senior scientist (ST) in the Materials Directorate of the Wright Laboratory in 1995. Prior to that he was a research leader and in-house research scientist in the directorate. For the past 36 years he has conducted research in polymer physics, concentrating on structure-property relations in high-performance organic materials. He is internationally known for his research in high-performance rigid-rod polymer fibers, X-ray scattering studies of fibers and liquid crystalline films, polymer dispersed liquid crystals, and theoretical studies of ultimate polymer properties.

The coming transhuman era: Jason Sosa at TEDxGrandRapids [Transhumanism]

Dawn of Giants Favorite…

This video from TEDx Grand Rapids is probably one of the best introductions to transhumanism. The video is called The coming transhuman era: Jason Sosa at TEDxGrandRapids. Jason Sosa is a tech entrepreneur and I think it’s pretty safe to say that we’ll be hearing more about him in the near future. This one is an absolute must see!

Runtime: 15:37

This video can also be found at

Video Info:

Published on Jun 24, 2014

Sosa is the founder and CEO of IMRSV, a computer vision and artificial intelligence company and was named one of “10 Startups to Watch in NYC” by Time Inc., and one of “25 Hot and New Startups to Watch in NYC” by Business Insider. He has been featured by Forbes, CNN, New York Times, Fast Company, Bloomberg and Business Insider, among others.

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)

NASA and Singularity University

This isn’t an article so much as it is a memo posted on the NASA website.  Basically, the ‘article’ states that NASA supports the Singularity University endeavor.  This is actually kind of old news (from 2009), but part of the mission of Dawn of Giants is to convince people of the need to take transhumanism and the idea of the technological singularity seriously.  Maybe the support of government agencies like NASA and DARPA will help to this end.  

NASA Ames Becomes Home To Newly Launched Singularity University

Rachel Prucey – Ames Research Center, Moffett Field, Calif.

Denise Vardakas – Singularity University, Moffett Field, Calif.

Feb. 03, 2009

MOFFETT FIELD, Calif. — Technology experts and entrepreneurs with a passion for solving humanity’s grand challenges will soon have a new place to exchange ideas and facilitate the use of rapidly developing technologies.

NASA Ames Research Center today announced an Enhanced Use Lease Agreement with Singularity University (SU) to house a new academic program at Ames’ NASA Research Park. The university will open its doors this June and begin offering a nine-week graduate studies program, as well as three-day chief executive officer-level and 10-day management-level programs. The SU curriculum provides a broad, interdisciplinary exposure to ten fields of study: future studies and forecasting; networks and computing systems; biotechnology and bioinformatics; nanotechnology; medicine, neuroscience and human enhancement; artificial intelligence, robotics, and cognitive computing; energy and ecological systems; space and physical sciences; policy, law and ethics; and finance and entrepreneurship.

“The NASA Ames campus has a proud history of supporting ground-breaking innovation, and Singularity University fits into that tradition,” said S. Pete Worden, Ames Center Director and one of Singularity University’s founders. “We’re proud to help launch this unique graduate university program and are looking forward to the new ideas, technologies and social applications that result.”

Singularity University was founded Sept. 20, 2008 by a group of leaders, including Worden; Ray Kurzweil, author and futurist; Peter Diamandis, space entrepreneur and chairman of the X PRIZE Foundation; Robert Richards, co-founder of the International Space University; Michael Simpson, president of the International Space University; and a group of SU associate founders who have contributed time and capital.

“With its strong focus on interdisciplinary learning, Singularity University is poised to foster the leaders who will create a uniquely creative and productive future world,” said Kurzweil.


NASA Ames would like to eliminate confusion that might have arisen concerning NASA personnel as “Founders” of Singularity University in the Feb. 3, 2009 news release, “NASA Ames Becomes Home To Newly Launched Singularity University.”

NASA Ames Center Director S. Pete Worden hosted SU’s Founders Conference on Sept. 20, 2008 at NASA Ames. On NASA’s behalf he and other Ames personnel provided input to SU’s founders and encouraged the scientific and technical discussions. Neither Dr. Worden nor any other NASA employee is otherwise engaged in the University’s operation nor do any NASA Ames employees have personal or financial interests in Singularity University. As with other educational institutions, NASA employees may support educational activities of SU through lectures, discussions and interactions with students and staff. NASA employees may also attend SU as students.

For more information about Singularity University, visit:

For more information about NASA programs, visit:


This can also be found at

Ray Kurzweil: The Exponential Mind

This is an interview of Ray Kurzweil by Chris Raymond.  The article is called Ray Kurzweil: The Exponential Mind.  It follows the usual Kurzweilian interview parameters (a little background, explain exponential growth with examples, discuss where technology is taking us), but it also goes into some of the things his critics have to say and talks a bit about Kurzweil’s new role at Google.


Ray Kurzweil: The Exponential Mind

The inventor, scientist, author, futurist and director of engineering at Google aims to help mankind devise a better world by keeping tabs on technology, consumer behavior and more.

Chris Raymond

Ray Kurzweil is not big on small talk. At 3:30 on a glorious early summer afternoon, the kind that inspires idle daydreams, he strides into a glass-walled, fifth-floor conference room overlooking the leafy tech town of Waltham, Mass.

Lowering himself into a chair, he looks at his watch and says, “How much time do you need?”

It doesn’t quite qualify as rude. He’s got a plane to catch this evening, and he’s running nearly two hours behind schedule. But there is a hint of menace to the curtness, a subtle warning to keep things moving. And this is certainly in keeping with Kurzweil’s M.O.

“If you spend enough time with him, you’ll see that there’s very little waste in his day,” says director Barry Ptolemy, who tailed Kurzweil for more than two years while filming the documentary Transcendent Man. “His nose is always to the grindstone; he’s always applying himself to the next job, the next interview, the next book, the next little task.”

It would appear the 66-year-old maverick has operated this way since birth. He decided to become an inventor at age 5, combing his Queens, N.Y., neighborhood for discarded radios and bicycle parts to assemble his prototypes. In 1965, at age 17, he unveiled an early project, a computer capable of composing music, on the Steve Allen TV show I’ve Got a Secret. He made his first trip to the White House that same year, meeting with Lyndon Johnson, along with other young scientists uncovered in a Westinghouse talent search. As a sophomore at MIT, he launched a company that used a computer to help high school students find their ideal college. Then at 20, he sold the firm to a New York publisher for $100,000, plus royalties.

The man has been hustling since he learned how to tie his shoes.

Though he bears a slight resemblance to Woody Allen—beige slacks, open collar, reddish hair, glasses—he speaks with the baritone authority of Henry Kissinger. He brings an engineer’s sense of discipline to each new endeavor, pinpointing the problem, surveying the options, choosing the best course of action. “He’s very good at triage, very good at compartmentalizing,” says Ptolemy.

A bit ironically, Kurzweil describes his first great contribution to society—the technology that first gave computers an audible voice—as a solution he developed in the early 1970s for no problem in particular. After devising a program that allowed the machines to recognize letters in any font, he pursued market research to decide how his advancement could be useful. It wasn’t until he sat next to a blind man on an airplane that he realized his technology could shatter the inherent limitations of Braille; only a tiny sliver of books had been printed in Braille, and no topical sources—newspapers, magazines or office memos—were available in that format.

Kurzweil and a team that included engineers from the National Federation for the Blind built around his existing software to make text-to-speech reading machines a reality by 1976. “What really motivates an innovator is that leap from dry formulas on a blackboard to changes in people’s lives,” Kurzweil says. “It’s very gratifying for me when I get letters from blind people who say they were able to get a job or an education due to the reading technology that I helped create…. That’s really the thrill of being an innovator.”

The passion for helping humanity has pushed Kurzweil to establish double-digit companies over the years, pursuing all sorts of technological advancements. Along the way, his sleepy eyes have become astute at seeing into the future.

In The Age of Intelligent Machines, first published in 1990, Kurzweil started sharing his visions with the public. At the time they sounded a lot like science fiction, but a startling number of his predictions came true. He correctly predicted that by 1998 a computer would win the world chess championship, that new modes of communication would bring about the downfall of the Soviet Union, and that millions of people worldwide would plug into a web of knowledge. Today, he is the author of five best-selling books, including The Singularity Is Near and How to Create a Mind.

This wasn’t his original aim. In 1981, when he started collecting data on how rapidly computer technology was evolving, it was for purely practical reasons.

“Invariably people create technologies and business plans as if the world is never going to change,” Kurzweil says. As a result, their companies routinely fail, even though they successfully build the products they promise to produce. Visionaries see the potential, but they don’t plot it out correctly. “The inventors whose names you recognize were in the right place with the right idea at the right time,” he explains, pointing to his friend Larry Page, who launched Google with Sergey Brin in 1998, right about the time the founders of legendary dot-com busts discovered mankind wasn’t remotely ready for Internet commerce.

How do you master timing? You look ahead.

“My projects have to make sense not for the time I’m looking at, but the world that will exist when I finish,” Kurzweil says. “And that world is a very different place.”

In recent years, companies like Ford, Hallmark and Hershey’s have recognized the value in this way of thinking, hiring expert guides like Kurzweil to help them study the shifting sands and make sense of the road ahead. These so-called “futurists” keep a careful eye on scientific advances, consumer behavior, market trends and cultural leanings. According to Intel’s resident futurist, Brian David Johnson, the goal is not so much to predict the future as to invent it. “Too many people believe that the future is a fixed point that we’re powerless to change,” Johnson recently told Forbes. “But the reality is that the future is created every day by the actions of people.”

Kurzweil subscribes to this notion. He has boundless confidence in man’s ability to construct a better world. This isn’t some utopian dream. He has the data to back it up—and a team of 10 researchers who help him construct his mathematical models. They’ve been plotting the price and computing power of information technologies—processing speed, data storage, that sort of thing—for decades.

In his view, we are on the verge of a great leap forward, an age of unprecedented invention, the kinds of breakthroughs that can lead to peace and prosperity and make humans immortal. In other words, he has barely begun to bend time to his will.

Ray Kurzweil does not own a crystal ball. The secret to his forecasting success is “exponential thinking.”

Our minds are trained to see the world linearly. If you drive at this speed, you will reach your destination at this time. But technology evolves exponentially. Kurzweil calls this the Law of Accelerating Returns.

He leans back in his chair to retrieve his cellphone and holds it aloft between two fingers. “This is several billion times more powerful than the computer I used as an undergraduate,” he says, and goes on to point out that the device is also about 100,000 times smaller. Whereas computers once took up entire floors at university research halls, far more advanced models now fit in our pockets (and smaller spaces) and are becoming more minuscule all the time. This is a classic example of exponential change.

The Human Genome Project is another. Launched in 1990, it was billed from the start as an ambitious, 15-year venture. Estimated cost: $3 billion. When researchers neared the time line’s halfway point with only 3 percent of the DNA sequencing finished, critics were quick to pounce. What they did not see was the annual doubling in output. Thanks to increases in computing power and efficiency, 3 percent became 6 percent and then 12 percent and so on. With a few more doublings, the project was completed a full two years ahead of schedule.

That is the power of exponential change.

“If you take 30 steps linearly, you get to 30,” Kurzweil says. “If you take 30 steps exponentially, you’re at a billion.”
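Kurzweil’s arithmetic here is easy to verify. A quick back-of-the-envelope sketch in Python, assuming (as he does) a doubling at each exponential step, and one doubling per year for the genome project starting from 3 percent complete:

```python
# 30 linear steps vs. 30 exponential (doubling) steps
linear = 30                      # 1 + 1 + ... + 1, thirty times
exponential = 2 ** 30            # 1,073,741,824 -- "you're at a billion"

# Human Genome Project arithmetic: starting at 3% sequenced,
# how many annual doublings until the whole genome is covered?
percent_done, doublings = 3.0, 0
while percent_done < 100.0:
    percent_done *= 2            # 3 -> 6 -> 12 -> 24 -> 48 -> 96 -> 192
    doublings += 1

print(linear, exponential, doublings)   # 30 1073741824 6
```

Six more doublings from 3 percent finishes the job, which is why the project’s critics at the halfway point turned out to be wrong.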

The fruits of these accelerating returns are all around us. It took more than 15 years to sequence HIV beginning in the 1980s. Thirty-one days to sequence SARS in 2003. And today we can map a virus in a single day.

While thinking about the not-too-distant future, when virtual reality and self-driving cars, 3-D printing and Google Glass are norms, Kurzweil dreams of the next steps. In his vision, we’re rapidly approaching the point where human power becomes infinite.

Holding the phone upright, he swipes a finger across the glass.

“When I do this, my fingers are connected to my brain,” Kurzweil says. “The phone is an extension of my brain. Today a kid in Africa with a smartphone has access to all of human knowledge. He has more knowledge at his fingertips than the president of the United States did 15 years ago.” Multiplying by exponents of progress, Kurzweil projects continued shrinkage in computer size and growth in power over the next 25 years. He hypothesizes microscopic nanobots—inexpensive machines the size of blood cells—that will augment our intelligence and immune systems. These tiny technologies “will go into our neocortex, our brain, noninvasively through our capillaries and basically put our neocortex on the cloud.”

Imagine having Wikipedia linked directly to your brain cells. Imagine digital neurons that reverse the effects of Parkinson’s disease. Maybe we can live forever.

He smiles, letting the sweep of his statements sink in. Without question, it is an impressive bit of theater. He loves telling stories, loves dazzling people with his visions. But his zeal for showmanship has been known to backfire.

The biologist P.Z. Myers has called him “one of the greatest hucksters of the age.” Other critics have labeled him crazy and called his ideas hot air. Kurzweil’s public pursuit of immortality doesn’t help matters. In an effort to prolong his life, Kurzweil takes 150 supplements a day, washing them down with cup after cup of green tea and alkaline water. He monitors the effects of these chemistry experiments with weekly blood tests. It’s one of a few eccentricities.

“He’s extremely honest and direct,” Ptolemy says of his friend’s prickly personality. “He talks to people and if he doesn’t like what you’re saying, he’ll just say it. There’s no B.S. If he doesn’t like what he’s hearing, he’ll just say, ‘No. Got anything else?’”

But it’s hard to argue with the results. Kurzweil claims 86 percent of his predictions for the year 2009 came true. Others insist the figure is actually much lower. But that’s just part of the game. Predicting is hard work.

“He was considered extremely radical 15 years ago,” Ptolemy says. “That’s less the case now. People are seeing these technologies catch up—the iPhone, Google’s self-driving cars, Watson [the IBM computer that bested Jeopardy genius Ken Jennings in 2011]. All these things start happening, and people are like, ‘Oh, OK. I see what’s going on.’”

Ray Kurzweil was born into a family of artists. His mother was a painter; his father, a conductor and musician. Both moved to New York from Austria in the late 1930s, fleeing the horrors of Hitler’s Nazi regime. When Ray was 7 years old, his maternal grandfather returned to the land of his birth, where he was given the chance to hold in his hands documents that once belonged to the great Leonardo da Vinci—painter, sculptor, inventor, thinker. “He described the experience with reverence,” Kurzweil writes, “as if he had touched the work of God himself.”

Ray’s parents raised their son and daughter in the Unitarian Church, encouraging them to study the teachings of various religions to arrive at the truth. Ray is agnostic, in part, he says, because religions tend to rationalize death; but like Da Vinci, he firmly believes in the power of ideas—the ability to overcome pain and peril, to transcend life’s challenges with reason and thought. “He wants to change the world—impact it as much as possible,” Ptolemy says. “That’s what drives him.”

Despite what his critics say, Kurzweil is not blind to the threats posed by modern science. If nanotechnology could bring healing agents into our bodies, nano-hackers or nano-terrorists could spread viruses—the literal, deadly kind. “Technology has been a double-edged sword ever since fire,” he says. “It kept us warm, cooked our food, but also burned down our villages.” That doesn’t mean you keep it under lock and key.

In January of 2013, Kurzweil entered the next chapter of his life, dividing his time between Waltham and San Francisco, where he works with Google engineers to deepen computers’ understanding of human language. “It’s my first job with a company I didn’t start myself,” he deadpans. The idea is to move the company beyond keyword search, to teach computers how to grasp the meaning and ideas in the billions of documents at their disposal, to move them one more step forward on the journey to becoming sentient virtual assistants—picture Joaquin Phoenix’s sweet-talking laptop in 2013’s Kurzweil-influenced movie Her, a Best Picture nominee.

Kurzweil had pitched the idea of breaking computers’ language barrier to Page while searching for investors. Page offered him a full-time salary and Google-scale resources instead, promising to give Kurzweil the independence he needs to complete the project. “It’s a courageous company,” Kurzweil says. “It has a biz model that supports very widespread distribution of these technologies. It’s the only place I could do this project. I would not have the resources, even if I raised all the money I wanted in my own company. I wouldn’t be able to run algorithms on a million computers.”

That’s not to say Page will sit idle while Kurzweil toils away. In the last year, the Google CEO has snapped up eight robotics companies, including industry frontrunner Boston Dynamics. He paid $3.2 billion for Nest Labs, maker of learning thermostats and smoke alarms. He scooped up the artificial intelligence startup DeepMind and lured Geoffrey Hinton, the world’s foremost expert on neural networks—computer systems that function like a brain—into the Google fold.

Kurzweil’s ties to Page run deep. Google (and NASA) provided early funding for Singularity University, the education hub/startup accelerator Kurzweil launched with the XPRIZE’s Peter Diamandis to train young leaders to use cutting-edge technology to make life better for billions of people on Earth.

Kurzweil’s faith in entrepreneurship is so strong that he believes it should be taught in elementary school.

Because that kid with the cellphone now has a chance to change the world. If that seems far-fetched, consider the college sophomore who started Facebook because he wanted to meet girls or the 15-year-old who recently invented a simple new test for pancreatic cancer. This is one source of his optimism. Another? The most remarkable thing about the mathematical models Kurzweil has assembled, the breathtaking arcs that demonstrate his thinking, is that they don’t halt their climb for any reason—not for world wars, not for the Great Depression.

Once again, that’s the power of exponential growth.

“Things that seemed impossible at one point are now possible,” Kurzweil says. “That’s the fundamental difference between me and my critics.” Despite the thousands of years of evolution hard-wired into his brain, he resists the urge to see the world in linear fashion. That’s why he’s bullish on solar power, artificial intelligence, nanobots and 3-D printing. That’s why he believes the 2020s will be studded with one huge medical breakthrough after another.

“There’s a lot of pessimism in the world,” he laments. “If I believed progress was linear, I’d be pessimistic, too. Because we would not be able to solve these problems. But I’m optimistic—more than optimistic: I believe we will solve these problems because of the scale of these technologies.”

He looks down at his watch yet again. Mickey Mouse peeks out from behind the timepiece’s sweeping hands. “Just a bit of whimsy,” he says.

Nearly an hour has passed. The world has changed. It’s time to get on with his day.

Post date:

Oct 9, 2014

This article can also be found at

The BioBricks Foundation – Synthetic Biology and Modular DNA

The following is from the BioBricks Foundation where research into synthetic biology and biotechnology is taking place.  The article and video below are from the BioBricks Foundation About page.  I’ll be keeping an eye on their research and I will post anything interesting that arises from it here on Dawn of Giants.


About the BioBricks Foundation

The BioBricks Foundation (BBF) is a 501(c)(3) public-benefit organization founded in 2006 by scientists and engineers who recognized that synthetic biology had the potential to produce big impacts on people and the planet and who wanted to ensure that this emerging field would serve the public interest.

Our mission is to ensure that the engineering of biology is conducted in an open and ethical manner to benefit all people and the planet.

We envision a world in which scientists and engineers work together using freely available standardized biological parts that are safe, ethical, cost effective and publicly accessible to create solutions to the problems facing humanity.

We envision synthetic biology as a force for good in the world. We see a future in which architecture, medicine, environmental remediation, agriculture, and other fields use synthetic biology.

We believe biosecurity, biosafety, bioethics, environmental health, and sustainability must be integrated with scientific research and applied technology.

We bring together engineers, scientists, attorneys, innovators, teachers, students, policymakers, and ordinary citizens to make this vision a reality.

Decoding Synthetic Biology on KQED’s “Quest”


Video Info:

Uploaded on Jul 22, 2009

Imagine living cells acting as memory devices; biofuels brewing from yeast, or a light receptor taken from algae that makes photographs on a plate of bacteria. With the new science of synthetic biology, the goal is to make biology easier to engineer so that new functions can be derived from living systems. Find out the tools that Bay Area synthetic biologists are using and the exciting things they are building.


DARPA and Transhumanism – Biology is Technology

This is an article by Peter Rothman at H+ Magazine called Biology is Technology — DARPA is Back in the Game With A Big Vision and It Is H+.  DARPA, the world's most technologically advanced organization, is pursuing transhuman technologies and supporting the transhumanism/singularity movement.  Just a thought to keep in mind while reading this: DARPA doesn't do science fiction…


Biology is Technology — DARPA is Back in the Game With A Big Vision and It Is H+

Peter Rothman


DARPA, the Defense Advanced Research Projects Agency, is perhaps best known for its role as progenitor of computer networking and the Internet. Formed in the wake of the Soviet Union's surprise launch of Sputnik, DARPA's objective was to ensure that the United States would avoid technological surprises in the future. That role was later expanded to include causing technological surprises as well.

And although DARPA is and has been the leading source of funding for artificial intelligence and a number of other transhumanist projects, they’ve been missing in action for a while. Nothing DARPA has worked on since seems to have had the societal impact of the invention of the Internet. But that is about to change.

The current director of DARPA is Dr. Arati Prabhakar. She is the second female director of the organization, following the previous and controversial director Regina Dugan, who left the government to work at Google. Under Prabhakar, the return to big visions and big adventures is apparent, in stark contrast to Dugan's leadership of the organization.

Quoted in WIRED, Dugan had, for example, stated that “There is a time and a place for daydreaming. But it is not at DARPA,” and she told a congressional panel in March 2011, “Darpa is not the place of dreamlike musings or fantasies, not a place for self-indulging in wishes and hopes. DARPA is a place of doing.”

Those days are gone. DARPA’s new vision is simply to revolutionize the human situation and it is fully transhumanist in its approach.

The Biological Technologies Office or BTO was announced with little fanfare in the spring of 2014. This announcement didn’t get that much attention, perhaps because the press release announcing the BTO was published on April Fool’s Day.

But DARPA is determined to turn that around, and to help make that happen, they held a two-day event in the Silicon Valley area to communicate the radical changes ahead in the area of biotechnologies. Invitees included some of the top biotechnology scientists in the world, and the audience was a mixed group of scientists, engineers, inventors, investors and futurists, along with a handful of government contractors and military personnel.

Biology is Technology

I was lucky to be invited to this event, and although I spend a large amount of time researching technology and science as related to the future, nothing prepared me for the scope of the DARPA vision. The ostensible purpose of the two-day meeting was to introduce the DARPA Biological Technologies Office and to connect program managers with innovators, investors, and scientists working in biotechnology and related disciplines. But really they were there to shake things up.


Opening the Biology Is Technology (BiT) event was DARPA Director Dr. Arati Prabhakar. Dr. Prabhakar’s presence at this meeting demonstrates how serious DARPA is about this effort, and one imagines that she was also in California to support President Obama’s Cybersecurity Summit with top leaders of the computer industry.

Dr. Prabhakar interviewed GE’s Sue Siegel about innovation and GE’s role in creating the future. This was a freewheeling conversation in which Ms. Siegel turned the tables and interviewed Dr. Prabhakar instead. What followed was an outstanding introduction to the proactionary approach to research and development, or in DARPA’s language, preventing surprises by creating your own.

Dr. Prabhakar clearly set up DARPA's latest incarnation as a return to the big-vision, swing-for-the-fences approach. She discussed DARPA's approach to managing risks while creating high-impact technologies. In this vision, DARPA's role is to help scientists and innovators "remove early risk" that might prevent them from obtaining investment and bringing novel ideas to market. DARPA was described by one presenter as an "always friendly, but somewhat crazy rich uncle", and they made it clear that they were going to put a fair bit of money behind these ideas.


This meeting was focused on the launch of the new program office, the Biological Technologies Office, although other program managers were present. The BTO is headed by Dr. Geoff Ling, a practicing Army medical doctor. Dr. Ling is an energetic spokesman for the DARPA vision and the BTO. It is notable that an M.D. is in charge of this effort, because many of the developments being undertaken by the BTO are simply going to revolutionize the practice of medicine as we know it today. With the energetic Dr. Ling in charge, you can imagine it getting done.

Dr. Ling portrayed DARPA's ambitious goals and delivered one of the clearest presentations of the proactionary principle that I have heard. But that was just the opening volley; DARPA is going full-on H+.

Following the inspirational presentation by Dr. Ling, the individual program managers had a chance to present their projects.

The first program manager to present, Phillip Alvelda, opened the event with his mind-blowing project to develop a working "cortical modem". What is a cortical modem, you ask? Quite simply, it is a direct neural interface that will allow for the visual display of information without the use of glasses or goggles. I was largely at this event to learn about this project, and I wasn't disappointed.

Leveraging the work of Karl Deisseroth in the area of optogenetics, the cortical modem project aims to build a low-cost, neural-interface-based display device. The short-term goal of the project is a device about the size of two stacked nickels, with a cost of goods on the order of $10, which would enable a simple visual display via a direct interface to the visual cortex, with the visual fidelity of something like an early LED digital clock.

The implications of this project are astounding.

Consider a more advanced version of the device, capable of high-fidelity visual display. First, this technology could be used to restore sensory function to individuals who simply can't be treated with current approaches. Second, the device could replace all virtual reality and augmented reality displays. Bypassing the eyes entirely, a cortical modem could display directly into the visual cortex, enabling a sort of virtual overlay on the real world. Moreover, the optogenetics approach allows both reading and writing of information, so we can imagine, at the least, a device in which virtual objects appear well integrated into our perceived world. Beyond this, a working cortical modem would enable electronic telepathy and telekinesis. The cortical modem is a real-world version of the science fiction neural interfaces envisioned by writers such as William Gibson and, more recently, Ramez Naam.

To the extent that it is real, the cortical modem is still a crude device. This isn't going to give you a high-fidelity augmented reality display anytime soon. And since the current approach is based on optogenetics, it requires a genetic alteration of the DNA in your neurons. The health implications are unknown, and this research is currently limited to work with animal models. Specifically discussed was real-time imaging of the zebrafish brain, which has about 85,000 neurons.

Notably, while I was live-blogging the event, one h+ Magazine reader volunteered to undergo this possibly dangerous genetic procedure in exchange for early access to a cortical modem, a fact I later got to mention directly to Dr. Prabhakar at the reception afterwards.


Following the astounding cortical modem presentation, Dr. Dan Wattendorf presented DARPA's efforts to get in front of and prevent disease outbreaks such as the recent Ebola crisis in Africa. This was a repeated theme throughout the event: DARPA clearly recognizes the need to avoid "technological surprises" from nature as well as from nations. It is widely recognized that dealing with novel disease outbreaks, the so-called "post-antibiotic" era, and bioweapons requires entirely new strategies for detection of and rapid response to communicable illnesses. As an example, the Ebola vaccine currently being considered for use has been in development for decades, and only a small number of vaccines exist even for known diseases. A novel threat, however, might allow only weeks or months to respond. Clearly, new approaches are needed both in detecting disease outbreaks and in responding to them. Perhaps most interesting to me was the discussion of transient gene therapies: interventions that alter an organism's DNA but "turn off" after some time period or event.

Dr. Jack Newman, Chief Science Officer at Amyris and board member of the BioBricks Foundation, followed. Jack has recently joined DARPA as a program manager himself, and he talked about Amyris' work producing useful materials from bio-engineered yeast. This project, funded under DARPA's Living Foundries program, is just one of a number of efforts seeking to create novel materials and production processes. Dr. Newman presented a view into the programming of living systems using Amyris software that was quite interesting.

This provided a natural segue to program manager Alicia Jackson's presentation on the broader Living Foundries program, which promises to leverage the synthetic and functional capabilities of biology to create biologically based manufacturing platforms, providing access to new materials, capabilities and manufacturing paradigms rooted in biology and synthetic biology. Imagine materials that self-assemble, heal, and adapt to their changing environment, as biological systems do. The program currently focuses on compressing the biological design-build-test-learn cycle by at least 10 times in both time and cost, while simultaneously increasing the complexity of the systems that are created. The second phase of the program builds on these advancements and tools to create a scalable, integrated, rapid design and prototyping infrastructure for the engineering of biology.

Following this came a more casual presentation, a "fireside chat" between famed geneticist Dr. George Church and technology historian George Dyson. The chat rambled a bit and started off slowly, but once it got going, Church laid out his vision of engineering ecosystems using "gene drives" and threw out a variety of remarks that were of interest. For example, he expressed skepticism about "longevity" research as compared with "age reversal" techniques; GDF11 got a mention. He also discussed the observation of genetic changes in cells grown outside the body, for example in so-called "printed" organs, and described his alternative approach of growing human donor organs in transgenic pigs. He suggested the real possibility of enhancing human intelligence through genetic techniques and pointed to the complete molecular description of living systems as a goal.

This led into another amazing presentation from new DARPA program manager Justin Sanchez, who leads DARPA's human-machine symbiosis group, which is developing many of the groundbreaking prosthetics, such as mind-controlled limbs, that have recently been in the news. DARPA's investment in advanced limb prosthetics has already delivered an FDA-approved device, but "cognitive prosthetics" are next. DARPA is developing hardware and software to overcome the memory deficits and neuropsychiatric illnesses afflicting returning veterans, for example.


While there wasn't much shown about applying these ideas to healthy individuals or to combat systems, we can assume that this work is underway. One patient was shown employing a neural interface to fly a simulated aircraft, for example, and DARPA is supposedly working towards a system that would allow one person to pilot multiple vehicles by thought alone. The approach is bigger than just thought-controlled drones, however, because it focuses on creating symbiosis, ensuring a mutual benefit to both partners in a relationship. The potential of this idea is often overlooked and misunderstood in conversations about machine intelligence, for example.

Together with the cortical modem, these devices promise to revolutionize our ability to repair ourselves, extend ourselves, and communicate, and indeed they will eventually and inevitably alter what it means to be human. Where is the boundary between self and other if we can directly share thoughts, dreams, emotions, and ideas? When we can experience not only the thoughts but also the feelings of someone else? How will direct neural access to knowledge change education and work? These technologies raise many questions for which we do not yet have answers.

Dr. Sanchez closed by calling on members of the audience to "come to DARPA and change the world", a call which didn't ring hollow by this point. And things were just getting started.

This statement was made repeatedly. DARPA is open for business and looking for collaborators to work with. They’re building teams that work across subjects, disciplines and communities. They seek to build a community of interest aimed at tackling some of mankind’s greatest challenges, including things like curing communicable diseases and reversing ecosystem collapse. DARPA has some unique instruments and capabilities to offer anyone developing radical technological ideas and they want you to know about them. They openly invited the audience to submit abstracts for research ideas and promised that every email they receive would be answered “at least once”.

Several DARPA performers also gave presentations. These are the people DARPA has hired under contract to actually do the work, and the presentations were a pretty heady and eclectic mix, ranging from deep science to the unusual and on to the profound. Dr. Michel M. Maharbiz of UC Berkeley is developing "neural dust" and has done controversial work with insect cyborgs. Saul Griffith of Otherlab presented the farthest-ranging talk, covering his work with computer-controlled inflatables, which includes exoskeleton concepts, pneumatic sun trackers for low-cost solar power applications, and a life-sized robotic inflatable elephant he made for his daughter. I was also intrigued by a toy Otherlab had designed that was a universal constructor. Griffith also presented some very interesting analysis of the world's energy production and utilization, showing areas where DARPA (and anyone else interested) could make the biggest difference in slowing climate change.

How about curing all known and even unknown communicable diseases? In exploring "post-pathogen medicine", DARPA is working to identify "unlikely heroes": individuals with surprising resilience or resistance to dangerous diseases. The idea is to apply big-data analytics to a large number of existing scientific studies whose data might conceal genetic markers for immunity or disease resistance in individuals.
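The "unlikely heroes" screen described above can be sketched, very loosely, as a cohort comparison: for each candidate genetic marker, ask whether it is over-represented among resistant individuals. Everything below (the marker IDs, the toy cohorts, the odds-ratio scoring) is invented for illustration and is not DARPA's actual method:

```python
# Hypothetical sketch: score candidate genetic markers by how strongly
# they are over-represented in a disease-resistant cohort.
from collections import Counter

def marker_odds_ratio(resistant, susceptible, marker):
    """Odds ratio for carrying `marker` in resistant vs susceptible cohorts."""
    r_carry = sum(marker in g for g in resistant)
    s_carry = sum(marker in g for g in susceptible)
    r_not = len(resistant) - r_carry
    s_not = len(susceptible) - s_carry
    # Add 0.5 to each cell (Haldane correction) to avoid division by zero.
    return ((r_carry + 0.5) * (s_not + 0.5)) / ((r_not + 0.5) * (s_carry + 0.5))

# Toy cohorts: each record is the set of variant IDs an individual carries.
resistant = [{"rs1", "rs9"}, {"rs1"}, {"rs1", "rs4"}, {"rs1"}]
susceptible = [{"rs4"}, {"rs9"}, {"rs4", "rs9"}, set()]

candidates = Counter()
for m in ("rs1", "rs4", "rs9"):
    candidates[m] = marker_odds_ratio(resistant, susceptible, m)

best = max(candidates, key=candidates.get)
print(best, round(candidates[best], 1))  # rs1 81.0
```

A real analysis would of course need far larger cohorts, multiple-testing correction, and controls for population structure; the point here is only the shape of the computation.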

Karl Deisseroth presented his work with optogenetics and his newer techniques for transforming neural tissue into a clear gel that can be imaged. He presented some impressive images from this work and his new unpublished imaging technique called “Swift 3D”. The resulting images are real-time maps of neural events. For example, Dr. Deisseroth presented visual representations of mouse thoughts from one controlled experiment.

Beyond reading minds, DARPA's BiT programs are also looking to revolutionize the practice of biology, and of science in general. Dr. Stephen Friend presented Sage Networks, a science-oriented social sharing and collaboration platform that radically realigns the practices of scientific publication and data sharing. Apart from providing a standardized platform for publishing annotated bioscience datasets, the system requires users to make their data available to other researchers while still preserving their ability to get credit for original ideas and work. This project is important and could see application outside the biosciences. One member of the audience was so impressed with the idea that she was compelled to comment.


More directly, DARPA seeks to revolutionize the day-to-day practice of biotechnology and drug development. A series of "organs on a chip" was presented. These devices allow cultures of cells from an individual's organs to be grown and treated with medications to assess effectiveness and possible side effects, without the need for an animal model or a live human subject. While they haven't replicated every human organ, they did have a "gut on a chip" on display; the little chips are flexible and kind of artistic, actually. The company Emulate had a representative explaining the technology at the reception after the first day of the event. This is just one of several projects in which DARPA is seeking to understand the effects of drugs, including adverse side effects, in novel ways. The eventual hope is to shorten time to market while also radically lowering the cost of new medications.

Microfluidics — making tiny droplets

Another impressive series of developments was presented in the area of microfluidics: a set of technologies for creating very small droplets, along with various mechanisms for manipulating and experimenting on those tiny drops. Currently, bioscience experimentation is largely performed by human postdocs who spend thousands of hours pipetting, mixing, and carefully measuring results. Using microfluidics and a series of intricate valves, nozzles, and so on, many of these procedures can be automated and radically sped up.

The audience got a chance to mix with the DARPA program managers after the event at a reception where some of DARPA’s projects were presented in a hands on environment. I had a brief conversation with Dr. Prabhakar who mentioned that she was aware of Humanity+ and transhumanism more generally. She was excited to have us involved, but also expressed some dismay at the political aspect of the transhumanist movement.

Well known Silicon Valley venture capitalist, rocketeer, transhumanist, and super guy Steve Jurvetson was spotted “high fiving” a DARPA funded telepresence robot developed at Johns Hopkins APL at the reception.

The robot operates via a head-mounted display, which places the wearer in the robot's "head", and two instrumented gloves, which give the wearer control over the robot's dexterous, human-like hands. The hands get a bit hot due to the motors that move them, however, so a fist bump is going to be preferred over a handshake with this guy.


DARPA’s Inner Buddha

[Photo: a child holding hands with a prosthetic hand]

At the two-day BiT event, it was revealed that DARPA hasn't just gone full-on transhumanist; they've gone full Buddha.

The goal of this project, as presented by one of the project investigators, Dr. Eddie Chang of the University of California at San Francisco, during day two's "Lightning Round", was nothing less than eliminating human suffering.

Curing communicable diseases and building prosthetics topped the list on day one.

But Dr. Chang was talking about curing a deeper inner injury, the sort of thing that causes mental illness, depression, and intractable PTSD, problems from which military veterans notably suffer disproportionately.

The first stage of the project is underway and working with patients who are already undergoing brain surgery for intractable epilepsy. Four individuals so far have had their detailed neural patterns recorded 24 hours a day for ten days using an implanted device. The resulting neural map is at the millimeter and millisecond level and is correlated with other information about the patient’s mood and physiological state.

In another program, ElectRX, DARPA is investigating the use of similar neural stimulation techniques to promote healing of the body from injuries and disease. In both cases the emphasis isn't on working around or bypassing damage, but on using electrical stimulation to promote healing and repair. DARPA wants to heal you. Dr. Chang stated, for example, that the success of his project wouldn't be marked by the date of the first implanted device, but rather by the date of the first removal.


Creating novel industrial processes to reduce climate change? DARPA had that covered too. So while Dr. Ling made sure to remind the audience up front that this was all about supporting warfighters, it was impossible not to consider the deeper implications of what was being presented as the event proceeded.

The reality is that the true DARPA mission isn't just about war. A happier, more secure and sustainable world is the best possible security for the United States, a fact that DARPA's leaders seemingly recognize at the moment. And so DARPA is developing technologies for rapid identification of communicable diseases, restoring lost biological functions, producing materials, and creating novel industrial processes to slow and reverse climate change, save ecosystems and more.

And DARPA’s next revolution, biology is technology, is something even bigger than the Internet. They’re out to revolutionize the practice and products of bio-science and along the way they are re-defining what it will mean to be human. Will we alter our biology to enable direct mind to mind communication? Can we extend our immune system into the world to cure all communicable diseases? Can we cure and repair the most damaging and persistent mental illnesses?

In this amazing two day event, DARPA opened the door to a wider public collaboration and conversation about these amazing ideas.

A second event is planned for New York City in June and video of the February presentations will be available online according to DARPA representatives at the event. I will update this story with videos when they are available.

Transhumanism, medical technology and slippery slopes from the NCBI

This article (Transhumanism, medical technology and slippery slopes from the NCBI) explores transhumanism in the medical industry.  I thought it was a bit negatively biased, but the sources are good, and disagreement doesn't equate to invalidation in my book, so here it is…


In this article, transhumanism is considered to be a quasi‐medical ideology that seeks to promote a variety of therapeutic and human‐enhancing aims. Moderate conceptions are distinguished from strong conceptions of transhumanism and the strong conceptions were found to be more problematic than the moderate ones. A particular critique of Boström’s defence of transhumanism is presented. Various forms of slippery slope arguments that may be used for and against transhumanism are discussed and one particular criticism, moral arbitrariness, that undermines both weak and strong transhumanism is highlighted.

No less a figure than Francis Fukuyama1 recently labelled transhumanism as “the world’s most dangerous idea”. Such an eye‐catching condemnation almost certainly denotes an issue worthy of serious consideration, especially given the centrality of biomedical technology to its aims. In this article, we consider transhumanism as an ideology that seeks to evangelise its human‐enhancing aims. Given that transhumanism covers a broad range of ideas, we distinguish moderate conceptions from strong ones and find the strong conceptions more problematic than the moderate ones. We also offer a critique of Boström’s2 position published in this journal. We discuss various forms of slippery slope arguments that may be used for and against transhumanism and highlight one particular criticism, moral arbitrariness, which undermines both forms of transhumanism.

What is transhumanism?

At the beginning of the 21st century, we find ourselves in strange times; facts and fantasy find their way together in ethics, medicine and philosophy journals and websites.2,3,4 Key sites of contestation include the very idea of human nature, the place of embodiment within medical ethics and, more specifically, the systematic reflections on the place of medical and other technologies in conceptions of the good life. A reflection of this situation is captured by Dyens5 who writes,

What we are witnessing today is the very convergence of environments, systems, bodies, and ontology toward and into the intelligent matter. We can no longer speak of the human condition or even of the posthuman condition. We must now refer to the intelligent condition.

We wish to evaluate the contents of such dialogue and to discuss, if not the death of human nature, then at least its dislocation and derogation in the thinkers who label themselves transhumanists.

One difficulty for critics of transhumanism is that a wide range of views fall under its label.6 Not only are there the idiosyncrasies of individual academics, but there does not seem to exist an absolutely agreed-on definition of transhumanism. One can find not only substantial differences between key authors2,3,4,7,8 and the disparate disciplinary nuances of their exhortations, but also subtle variations in the offerings of its chief representatives. It is to be expected that any ideology transforms over time, not least in response to internal and external criticism. Yet the critic of transhumanism faces a further problem: identifying a robust target that stays still sufficiently long to be located properly in these web-driven days, without constructing a "straw man" to knock over with the slightest philosophical breeze. For the purposes of targeting a sufficiently substantial position, we identify the writings of one of its clearest and most intellectually robust proponents, the Oxford philosopher and cofounder of the World Transhumanist Association, Nick Boström,2 who has written recently in these pages of transhumanism's desire to make good the "half-baked" project3 that is human nature.

Before specifically evaluating Boström's position, it is best first to offer a global definition of transhumanism and then to locate it among the range of views that fall under that heading. One of the most celebrated advocates of transhumanism is Max More, whose website reads "no more gods, no more faith, no more timid holding back. The future belongs to posthumanity".8 This gives a clearer idea of the kinds of position transhumanism stands in direct opposition to. Specifically, More8 asserts,

“Transhumanism” is a blanket term given to the school of thought that refuses to accept traditional human limitations such as death, disease and other biological frailties. Transhumans are typically interested in a variety of futurist topics, including space migration, mind uploading and cryonic suspension. Transhumans are also extremely interested in more immediate subjects such as bio‐ and nano‐technology, computers and neurology. Transhumans deplore the standard paradigms that attempt to render our world comfortable at the sake of human fulfilment.8

Strong transhumanism advocates see themselves engaged in a project, the purpose of which is to overcome the limits of human nature. Whether this is the foundational claim, or merely the central claim, is not clear. These limitations—one may describe them simply as features of human nature, as the idea of labelling them as limitations is itself to take up a negative stance towards them—concern appearance, human sensory capacities, intelligence, lifespan and vulnerability to harm. According to the extreme transhumanism programme, technology can be used to vastly enhance a person’s intelligence; to tailor their appearance to what they desire; to lengthen their lifespan, perhaps to immortality; and to reduce vastly their vulnerability to harm. This can be done by exploitation of various kinds of technology, including genetic engineering, cybernetics, computation and nanotechnology. Whether technology will continue to progress sufficiently, and sufficiently predictably, is of course quite another matter.

Advocates of transhumanism argue that recruitment or deployment of these various types of technology can produce people who are intelligent and immortal, but who are not members of the species Homo sapiens. Their species type will be ambiguous—for example, if they are cyborgs (part human, part machine)—or, if they are wholly machines, they will lack any common genetic features with human beings. A legion of labels covers this possibility; we find in Dyens'5 recently translated book a variety of cultural bodies, perhaps the most extreme being cyberpunks:

…a profound misalignment between existence and its manifestation. This misalignment produces bodies so transformed, so dissociated, and so asynchronized, that their only outcome is gross mutation. Cyberpunk bodies are horrible, strange and mysterious (think of Alien, Robocop, Terminator, etc.), for they have no real attachment to any biological structure. (p 75)

Perhaps a reasonable claim is encapsulated in the idea that such entities will be posthuman. The extent to which posthuman might be synonymous with transhumanism is not clear. Extreme transhumanists strongly support such developments.

At the other end of transhumanism is a much less radical project, which is simply the project to use technology to enhance human characteristics—for example, beauty, lifespan and resistance to disease. In this less extreme project, there is no necessary aspiration to shed human nature or human genetic constitution, just to augment it with technology where possible and where desired by the person.

Who is for transhumanism?

At present it seems to be a movement based mostly in North America, although there are some adherents from the UK. Among its most intellectually sophisticated proponents is Nick Boström. Perhaps the most outspoken supporters of transhumanism are people who see it simply as an issue of free choice. It may simply be the case that moderate transhumanists are libertarians at the core. In that case, transhumanism merely supplies an overt technological dimension to libertarianism. If certain technological developments are possible, which they as competent choosers desire, then they should not be prevented from acquiring the technologically driven enhancements they desire. One obvious line of criticism here may be in relation to the inequality that necessarily arises with respect to scarce goods and services distributed by market mechanisms.9 We will elaborate this point in the Transhumanism and slippery slopes section.

So, one group of people for the transhumanism project sees it simply as a way of improving their own life by their own standards of what counts as an improvement. For example, they may choose to purchase an intervention, which will make them more intelligent or even extend their life by 200 years. (Of course it is not self‐evident that everyone would regard this as an improvement.) A less vociferous group sees the transhumanism project as not so much bound to the expansion of autonomy (notwithstanding our criticism that will necessarily be effected only in the sphere of economic consumer choice) as one that has the potential to improve the quality of life for humans in general. For this group, the relationship between transhumanism and the general good is what makes transhumanism worthy of support. For the other group, the worth of transhumanism is in its connection with their own conception of what is good for them, with the extension of their personal life choices.

What can be said in its favour?

Of the many points in favour of transhumanism, we note three. Firstly, transhumanism seems to facilitate aims that have long commanded much support. The use of technology to improve humans is something we pretty much take for granted. Much good has been achieved with low‐level technology in the promotion of public health: the construction of sewage systems, clean water supplies and the like is surely good work, work which aims at, and in this case achieves, a good. Moreover, a large portion of the modern biomedical enterprise is a further example of a project that aims at generating this good.

Secondly, proponents of transhumanism say it presents an opportunity to plan the future development of human beings, the species Homo sapiens. Instead of this being left to the evolutionary process and its exploitation of random mutations, transhumanism presents a hitherto unavailable option: tailoring the development of human beings to an ideal blueprint. Precisely whose ideal gets blueprinted is a point that we deal with later.

Thirdly, in the spirit of work in ethics that makes use of a technical idea of personhood (the view that moral status is independent of membership of a particular species, or indeed of any biological species), transhumanism presents a way in which moral status can be shown to be bound to intellectual capacity rather than to human embodiment as such, or to the vulnerability that accompanies embodiment (Harris, 1985).9a

What can be said against it?

Critics point to consequences of transhumanism, which they find unpalatable. One possible consequence feared by some commentators is that, in effect, transhumanism will lead to the existence of two distinct types of being, the human and the posthuman. The human may be incapable of breeding with the posthuman and will be seen as having a much lower moral standing. Given that, as Buchanan et al9 note, much moral progress, in the West at least, is founded on the category of the human in terms of rights claims, if we no longer have a common humanity, what rights, if any, ought to be enjoyed by transhumans? This can be viewed either as a criticism (we poor humans are no longer at the top of the evolutionary tree) or simply as a critical concern that invites further argumentation. We shall return to this idea in the final section, by way of identifying a deeper problem with the open‐endedness of transhumanism that builds on this recognition.

In the same vein, critics may argue that transhumanism will increase inequalities between the rich and the poor. The rich can afford to make use of transhumanism, but the poor will not be able to. Indeed, we may come to think of such people as deficient, failing to achieve a new heightened level of normal functioning.9 In the opposing direction, critical observers may say that transhumanism is, in reality, an irrelevance, as very few will be able to use the technological developments even if they ever manifest themselves. A further possibility is that transhumanism could lead to the extinction of humans and posthumans alike, for things are just as likely to turn out for the worse as for the better (as advocates of the precautionary principle remind us).

One of the deeper philosophical objections comes from a very traditional source. Like all such utopian visions, transhumanism rests on some conception of the good. So just as humanism is founded on the idea that humans are the measure of all things and that their fulfilment is to be found in the powers of reason extolled and extended in culture and education, so too transhumanism has a vision of the good, albeit one loosely shared. For one group of transhumanists, the good is the expansion of personal choice. Given that autonomy is so widely valued, why not remove the barriers to enhanced autonomy by various technological interventions? Theological critics especially, but not exclusively, object to what they see as the imperialising of autonomy. Elshtain10 lists the three c’s: choice, consent and control. These, she asserts, are the dominant motifs of modern American culture. And there is, of course, an army of communitarians (Bellah et al,10a MacIntyre,10b Sandel,10c Taylor10d and Walzer10e) ready to provide support in general moral and political matters to this line of criticism. One extension of this line of transhumanist thinking is to align the valorisation of autonomy with economic rationality, for we may as well be motivated by economic concerns as by moral ones where the market is concerned. As noted earlier, only a small minority may be able to access this technology (despite Boström’s naive disclaimer for democratic transhumanism), so the technology necessary for transhumanist transformations is unlikely to be prioritised in the context of artificially scarce public health resources. One other population attracted to transhumanism will be the elite sports world, fuelled by the media commercialisation complex—where mere mortals will get no more than a glimpse of the transhuman in competitive physical contexts. There may be something of a double‐binding character to this consumerism. The poor, at once removed from the possibility of such augmentation, pay (per view) for the pleasure of their envy.

If we argue that the good cannot be equated with what people choose simpliciter, it does not follow that we need to reject the requisite medical technology outright. Against the more moderate transhumanists, who see transhumanism as an opportunity to enhance the general quality of life for humans, it is nevertheless true that their position presupposes some conception of the good. What kinds of traits are best engineered into humans: disease resistance or parabolic hearing? And unsurprisingly, transhumanists disagree about precisely what “objective goods” to select for installation into humans or posthumans.

Some radical critics of transhumanism see it as a threat to morality itself.1,11 This is because they see morality as necessarily connected to the kind of vulnerability that accompanies human nature. Think of the idea of human rights and the power this has had in voicing concern about the plight of especially vulnerable human beings. As noted earlier, a transhuman may be thought to be beyond humanity, neither enjoying its rights nor bearing its obligations. Why would a transhuman be moved by appeals to human solidarity? Once the prospect of posthumanism emerges, the whole of morality is threatened, because the existence of human nature itself is under threat.

One further objection voiced by Habermas11 is that interfering with the process of human conception, and by implication human constitution, deprives humans of the “naturalness which so far has been a part of the taken‐for‐granted background of our self‐understanding as a species” and “Getting used to having human life biotechnologically at the disposal of our contingent preferences cannot help but change our normative self‐understanding” (p 72).

On this account, our self‐understanding would include, for example, our essential vulnerability to disease, ageing and death. Suppose the strong transhumanism project is realised. We are no longer thus vulnerable: immortality is a real prospect. Nevertheless, conceptual caution must be exercised here—even transhumanists will be susceptible in the manner that Hobbes12 noted. Even the strongest are vulnerable in their sleep. But the kind of vulnerability transhumanism seeks to overcome is of the internal kind (not Hobbes’s external threats). We are reminded of Woody Allen’s famous remark that he wanted to become immortal, not by doing great deeds but simply by not dying. This will result in a radical change in our self‐understanding, which has inescapably normative elements that would themselves need to be examined. Most radically, this change in self‐understanding may take the form of a change in what we view as a good life. Hitherto, a human life would have been assumed to be finite. Transhumanists suggest that even now this may change with appropriate technology and the “right” motivation.

Do the changes in self‐understanding presented by transhumanists (and genetic manipulation) necessarily have to represent a change for the worse? As discussed earlier, it may be that the technology that generates the possibility of transhumanism can be used for the good of humans—for example, to promote immunity to disease or to increase quality of life. Is there really an intrinsic connection between acquisition of the capacity to bring about transhumanism and moral decline? Perhaps Habermas’s point is that moral decline is simply more likely to occur once radical enhancement technologies are adopted as a practice, even if that practice is not intrinsically evil or morally objectionable. But how can this be known in advance? This raises the spectre of slippery slope arguments.

But before we discuss such slopes, let us note that the kind of approach (whether characterised as closed‐minded or sceptical) Boström seems to dislike is one he calls speculative. He dismisses as speculative the idea that offspring may think themselves lesser beings, commodifications of their parents’ egoistic desires (or some such). None the less, having pointed out the lack of epistemological standing of such speculation, he invites us to his own apparently more congenial position:

We might speculate, instead, that germ‐line enhancements will lead to more love and parental dedication. Some mothers and fathers might find it easier to love a child who, thanks to enhancements, is bright, beautiful, healthy, and happy. The practice of germ‐line enhancement might lead to better treatment of people with disabilities, because a general demystification of the genetic contributions to human traits could make it clearer that people with disabilities are not to blame for their disabilities and a decreased incidence of some disabilities could lead to more assistance being available for the remaining affected people to enable them to live full, unrestricted lives through various technological and social supports. Speculating about possible psychological or cultural effects of germ‐line engineering can therefore cut both ways. Good consequences no less than bad ones are possible. In the absence of sound arguments for the view that the negative consequences would predominate, such speculations provide no reason against moving forward with the technology. Ruminations over hypothetical side effects may serve to make us aware of things that could go wrong so that we can be on the lookout for untoward developments. By being aware of the perils in advance, we will be in a better position to take preventive countermeasures. (Boström, 2003, p 498)

Following Boström’s3 speculation then, what grounds for hope exist? Beyond speculation, what kinds of arguments does Boström offer? Well, most people may think that the burden of proof should fall to the transhumanists. Not so, according to Boström. Assuming the likely enormous benefits, he turns the tables on this intuition—not by argument but by skilful rhetorical speculation. We quote for accuracy of representation (emphasis added):

Only after a fair comparison of the risks with the likely positive consequences can any conclusion based on a cost‐benefit analysis be reached. In the case of germ‐line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. By contrast, uncovering subtle and non‐trivial ways in which manipulating our genome could undermine deep values is philosophically a lot more challenging. But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. To achieve a significant enhancement of human capacities would be to embark on the transhuman journey of exploration of some of the modes of being that are not accessible to us as we are currently constituted, possibly to discover and to instantiate important new values. On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor. (Boström,3 pp 498–9).

Now one way in which such a balance of reason may be had is in the idea of a slippery slope argument. We now turn to that.

Transhumanism and slippery slopes

A proper assessment of transhumanism requires consideration of the objection that acceptance of the main claims of transhumanism will place us on a slippery slope. Yet, paradoxically, both proponents and detractors of transhumanism may exploit slippery slope arguments in support of their position. It is necessary therefore to set out the various arguments that fall under this title so that we can better characterise arguments for and against transhumanism. We shall therefore examine three such attempts13,14,15 but argue that the arbitrary slippery slope15 may undermine all versions of transhumanism, although not every enhancement proposed by its proponents.

Schauer13 offers the following essentialist analysis of slippery slope arguments. A “pure” slippery slope is one where a “particular act, seemingly innocuous when taken in isolation, may yet lead to a future host of similar but increasingly pernicious events”. Abortion and euthanasia are classic candidates for slippery slope arguments in public discussion and policy making. Against this, however, there is no reason to suppose that the future events (acts or policies) down the slope need to display similarities—indeed we may propose that they will lead to a whole range of different, although equally unwished for, consequences. The vast array of enhancements proposed by transhumanists would not be captured under this conception of a slippery slope because of their heterogeneity. Moreover, as Sternglantz16 notes, Schauer undermines his case when arguing that greater linguistic precision undermines the slippery slope and that indirect consequences often bolster slippery slope arguments. It is as if the slippery slopes would cease in a world with greater linguistic precision or when applied only to direct consequences. These views do not find support in the later literature. Schauer does, however, identify three non‐slippery slope arguments where the advocate’s aim is (a) to show that the bottom of a proposed slope has been arrived at; (b) to show that a principle is excessively broad; (c) to highlight how granting authority to X will make it more likely that an undesirable outcome will be achieved. Clearly (a) cannot properly be called a slippery slope argument in itself, while (b) and (c) often have some role in slippery slope arguments.

The excessive breadth principle can be subsumed under Bernard Williams’s distinction between slippery slope arguments with (a) horrible results and (b) arbitrary results. According to Williams, the nature of the bottom of the slope allows us to determine which category a particular argument falls under. Clearly, the most common form is the slippery slope to a horrible result argument. Walton14 goes further in distinguishing three types: (a) thin end of the wedge or precedent arguments; (b) Sorites arguments; and (c) domino‐effect arguments. Importantly, these arguments may be used both by antagonists and also by advocates of transhumanism. We shall consider the advocates of transhumanism first.

In the thin end of the wedge slippery slopes, allowing P will set a precedent that will allow further precedents (Pn) taken to an unspecified problematic terminus. Is it necessary that the end point has to be bad? Of course this is the typical linguistic meaning of the phrase “slippery slopes”. Nevertheless, we may turn the tables here and argue that slopes may be viewed positively too.17 Perhaps a new phrase will be required to capture ineluctable slides (ascents?) to such end points. This would be somewhat analogous to the ideas of vicious and virtuous cycles. So transhumanists could argue that, once the artificial generation of life through technologies of in vitro fertilisation was thought permissible, the slope was foreseeable, and transhumanists are doing no more than extending that life‐creating and fashioning impulse.

In Sorites arguments, the inability to draw clear distinctions has the effect that allowing P will not allow us to consistently deny Pn. This slope follows the form of the Sorites paradox, where taking a grain of sand from a heap does not prevent our recognising or describing the heap as such, even though it is not identical with its former state. At the heart of the problem with such arguments is the idea of conceptual vagueness. Yet the logical distinctions used by philosophers are often inapplicable in the real world.15,18 Transhumanists may well seize on this vagueness and apply a Sorites argument as follows: as therapeutic interventions are currently morally permissible, and there is no clear distinction between treatment and enhancement, enhancement interventions are morally permissible too. They may ask whether we can really distinguish categorically between the added functionality of certain prosthetic devices and sonar senses.
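The logical form of such a Sorites‐type slope can be set out schematically. The following is our own reconstruction, not notation used by Walton or the transhumanists themselves:

```latex
% Sorites-type slippery slope, schematically (our reconstruction).
% Perm(P_n) abbreviates "step P_n is morally permissible".
\begin{align*}
  &(1)\quad \mathrm{Perm}(P_1)
      && \text{(an accepted therapeutic intervention)}\\
  &(2)\quad \forall n\, \bigl(\mathrm{Perm}(P_n) \rightarrow \mathrm{Perm}(P_{n+1})\bigr)
      && \text{(adjacent steps are morally indistinguishable)}\\
  &\therefore\quad \mathrm{Perm}(P_k)\ \text{for all } k
      && \text{(including outright enhancements)}
\end{align*}
```

The argument's force rests entirely on premise (2): the inference from the moral indistinguishability of adjacent steps to the transfer of permissibility along the whole series.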

In domino‐effect arguments, the domino conception of the slippery slope, we have what others often refer to as a causal slippery slope.19 Once P is allowed, a causal chain will be effected allowing Pn and so on to follow, which will precipitate increasingly bad consequences.

In what ways can slippery slope arguments be used against transhumanism? What is wrong with transhumanism? Or, better, is there a point at which we can say transhumanism is objectionable? One particular strategy adopted by proponents of transhumanism falls clearly under the aspect of the thin end of the wedge conception of the slippery slope. Although some aspects of their ideology seem aimed at unqualified goods, there seems to be no limit to the aspirations of transhumanists, who cite the powers of other animals and substances as potential modifications for the transhumanist. Although we can admire the sonic capacities of the bat, the elastic strength of lizards’ tongues and the endurability of Kevlar in contrast with traditional construction materials used in the body, their transplantation into humans is, to borrow Kass’s celebrated label, “repugnant” (Kass, 1997).19a

Although not all transhumanists would support such extreme enhancements (if that is indeed what they are), less radical advocates use justifications that put therapeutic aims up front, with the more Promethean aims less explicitly advertised. We can find many examples of this manoeuvre. Take, for example, the Cognitive Enhancement Research Institute in California. Prominently displayed on the front page of its website, we read, “Do you know somebody with Alzheimer’s disease? Click to see the latest research breakthrough.” The move is simple: treatment by the front entrance, enhancement by the back door. Borgmann,20 in his discussion of the uses of technology in modern society, observed precisely this argumentative strategy more than 20 years ago:

The main goal of these programs seems to be the domination of nature. But we must be more precise. The desire to dominate does not just spring from a lust of power, from sheer human imperialism. It is from the start connected with the aim of liberating humanity from disease, hunger, and toil and enriching life with learning, art and athletics.

Who would want to deny genetic treatments for viral diseases? Would we want to draw the line at the transplantation of non‐human capacities (sonar path finding)? Or at an in vivo fibre optic communications backbone, or anti‐degeneration powers? (These would have to be non‐human, by hypothesis.) Or should we consider the scope of technological enhancements that one chief transhumanist, Natasha Vita More,21 propounds:

A transhuman is an evolutionary stage from being exclusively biological to becoming post‐biological. Post‐biological means a continuous shedding of our biology and merging with machines. (…) The body, as we transform ourselves over time, will take on different types of appearances and designs and materials. (…)

For hiking a mountain, I’d like extended leg strength, stamina, a skin‐sheath to protect me from damaging environmental aspects, self‐moisturizing, cool‐down capability, extended hearing and augmented vision (Network of sonar sensors depicts data through solid mass and map images onto visual field. Overlay window shifts spectrum frequencies. Visual scratch pad relays mental ideas to visual recognition bots. Global Satellite interface at micro‐zoom range).

For a party, I’d like an eclectic look ‐ a glistening bronze skin with emerald green highlights, enhanced height to tower above other people, a sophisticated internal sound system so that I could alter the music to suit my own taste, memory enhance device, emotional‐select for feel‐good people so I wouldn’t get dragged into anyone’s inappropriate conversations. And parabolic hearing so that I could listen in on conversations across the room if the one I was currently in started winding down.

Notwithstanding the difficulty of bringing together transhumanism under one movement, the sheer variety of proposals merely contained within Vita More’s catalogue means that we cannot determinately point to a precise station at which we can say, “Here, this is the end we said things would naturally progress to.” But does this pose a problem? Well, it certainly makes it difficult to specify exactly a “horrible result” that is supposed to be at the bottom of the slope. Equally, it is extremely difficult to say that if we allow precedent X, it will allow practices Y or Z to follow as it is not clear how these practices Y or Z are (if at all) connected with the precedent X. So it is not clear that a form of precedent‐setting slippery slope can be strictly used in every case against transhumanism, although it may be applicable in some.

Nevertheless, we contend, in contrast with Boström, that the burden of proof would fall to the transhumanist. Consider, in this light, a Sorites‐type slope. The transhumanist would have to show that the relationship between the therapeutic practices and the enhancements is indeed transitive. We know night from day without being able to specify exactly when one becomes the other. So simply because we cannot determine a precise distinction between, say, genetic treatments G1, G2 and G3, and transhumanist enhancements T1, T2 and so on, it does not follow that there are no important moral distinctions between G1 and T20. According to Williams,15 this kind of indeterminacy arises because of the conceptual vagueness of certain terms. Yet the indeterminacy of so open a predicate as “heap” is not equally true of “therapy” or “enhancement”. The latitude they permit is nowhere near so wide.

Instead of objecting to Pn on the grounds that Pn is morally objectionable (ie, depicting a horrible result), we may, after Williams, object that the slide from P to Pn is simply morally arbitrary, when it ought not to be. Here, we may say, without specifying a horrible result, that it would be difficult to know what, in principle, can ever be objected to. And this is, quite literally, what is troublesome. It seems to us that this criticism applies to all categories of transhumanism, although not necessarily to all enhancements proposed by them. Clearly, the somewhat loose identity of the movement—and the variations between strong and moderate versions—makes it difficult to sustain this argument unequivocally. Still the transhumanist may be justified in asking, “What is wrong with arbitrariness?” Let us consider one brief example. In many aspects of our lives there is a widely shared intuition that, in the absence of good reasons, we ought not to discriminate among people arbitrarily. Healthcare may be considered to be precisely one such case. Given the ever‐increasing demand for public healthcare services and products, it may be argued that access to them typically ought to be governed by publicly disputable criteria such as clinical need or potential benefit, as opposed to individual choices of an arbitrary or subjective nature. And nothing in transhumanism seems to allow for such objective dispute, let alone prioritisation. Of course, transhumanists such as More find no such disquietude. His phrase “No more timidity” is a typical token of transhumanist slogans. We applaud advances in therapeutic medical technologies, ranging from new genetically based organ regeneration to more familiar prosthetic devices. Here the ends of the interventions are clearly medically defined and the means regulated closely. This is what prevents transhumanists from adopting a Sorites‐type slippery slope. But in the absence of a telos, of clearly and substantively specified ends (beyond the mere banner of enhancement), we suggest that the public, medical professionals and bioethicists alike ought to resist the potentially open‐ended transformations of human nature. For if all transformations are in principle enhancements, then surely none are. The very application of the word may become redundant. Thus it seems that one strong argument against transhumanism generally—the arbitrary slippery slope—presents a challenge to transhumanism, to show that all of what are described as transhumanist enhancements are imbued with positive normative force and are not merely technological extensions of libertarianism, whose conception of the good is merely an extension of individual choice and consumption.

Limits of transhumanist arguments for medical technology and practice

Already we have seen a host of therapeutically designed drugs misused for enhancement. Consider the non‐therapeutic use of human growth hormone in non‐clinical populations. Such is the present perception of height as a positional good in society that Cuttler et al22 report that the proportion of doctors who recommended human growth hormone treatment of short non‐growth hormone deficient children ranged from 1% to 74%. This is despite guidance to the contrary in the professional literature, such as that of the Pediatric Endocrine Society, and considerable doubt about its efficacy.23,24 Moreover, recreational body builders are likely to adopt such technology, given the evidence of their existing use or misuse of steroids and other biotechnological products.25,26 Finally, in the sphere of elite sport, which so valorises embodied capacities that may be found elsewhere in greater degree, precision and sophistication in the animal kingdom or in the computer laboratory, biomedical enhancers may latch onto the genetically determined capacities and adopt or adapt them for their own commercially driven ends.

The arguments and examples presented here do no more than warn us of enhancement ideologies, such as transhumanism, which seek to predicate their futuristic agendas on the bedrock of medical technological progress aimed at therapeutic ends, secondarily extended to loosely defined enhancement ends. In discussion and in the bioethical literature, the future of genetic engineering is often challenged by slippery slope arguments that lead policy and practice to a horrible result. Instead of pointing to the undesirability of the ends to which transhumanism leads, we have pointed out its failure to specify a telos beyond the slogans of “overcoming timidity” or Boström’s3 exhortation that the passive acceptance of ageing is an example of “reckless and dangerous barriers to urgently needed action in the biomedical sphere”.

We propose that greater care be taken to distinguish the slippery slope arguments used in the emotionally loaded exhortations of transhumanism, so that we may come to a more judicious perspective on the technologically driven agenda for biomedical enhancement. Perhaps we would do better to consider those other all‐too‐human frailties, such as violent aggression and wanton self‐harming, before we turn too readily to the richer imaginations of biomedical technologists.


Competing interests: None.


1. Fukuyama F. Transhumanism. Foreign Policy 2004;124:42–44.
2. Boström N. The fable of the dragon tyrant. J Med Ethics 2005;31:231–237.
3. Boström N. Human genetic enhancements: a transhumanist perspective. J Value Inquiry 2004;37:493–506.
4. Boström N. Transhumanist values. http://www.nickbostrom.com/ethics/values.html (accessed 19 May 2005).
5. Dyens O. Metal and flesh: the evolution of man: technology takes over (trans Bibbee EJ). London: MIT Press, 2001.
6. World Transhumanist Association (accessed 7 Apr 2006).
7. More M. Transhumanism: towards a futurist philosophy. 1996 (accessed 20 Jul 2005).
8. More M. 2005 (accessed 13 Jul 2005).
9. Buchanan A, Brock DW, Daniels N, et al. From chance to choice: genetics and justice. Cambridge: Cambridge University Press, 2000.
9a. Harris J. The value of life. London: Routledge, 1985.
10. Elshtain B. The body and the quest for control. In: Is human nature obsolete? Cambridge, MA: MIT Press, 2004:155–174.
10a. Bellah RN, et al. Habits of the heart: individualism and commitment in American life. Berkeley: University of California Press, 1996.
10b. MacIntyre AC. After virtue. 2nd edn. London: Duckworth, 1985.
10c. Sandel M. Liberalism and the limits of justice. Cambridge: Cambridge University Press, 1982.
10d. Taylor C. The ethics of authenticity. Boston: Harvard University Press, 1982.
10e. Walzer M. Spheres of justice. New York: Basic Books, 1983.
11. Habermas J. The future of human nature. Cambridge: Polity, 2003.
12. Hobbes T. Leviathan (Oakeshott M, ed). London: Macmillan, 1962.
13. Schauer F. Slippery slopes. Harvard Law Rev 1985;99:361–383.
14. Walton DN. Slippery slope arguments. Oxford: Clarendon, 1992.
15. Williams BAO. Which slopes are slippery? In: Lockwood M, ed. Making sense of humanity. Cambridge: Cambridge University Press, 1995:213–223.
16. Sternglantz R. Raining on the parade of horribles: of slippery slopes, faux slopes, and Justice Scalia’s dissent in Lawrence v Texas. Univ Pa Law Rev 2005;153:1097–1120.
17. Schubert L. Ethical implications of pharmacogenetics: do slippery slope arguments matter? Bioethics 2004;18:361–378.
18. Lamb D. Down the slippery slope. London: Croom Helm, 1988.
19. Den Hartogh G. The slippery slope argument. In: Kuhse H, Singer P, eds. Companion to bioethics. Oxford: Blackwell, 2005:280–290.
19a. Kass L. The wisdom of repugnance. New Republic 1997 Jun 2:17–26.
20. Borgmann A. Technology and the character of contemporary life. Chicago: University of Chicago Press, 1984.
21. Vita More N. Who are transhumans? 2000 (accessed 7 Apr 2006).
22. Cuttler L, Silvers JB, Singh J, et al. Short stature and growth hormone therapy: a national study of physician recommendation patterns. JAMA 1996;276:531–537.
23. Vance ML, Mauras N. Growth hormone therapy in adults and children. N Engl J Med 1999;341:1206–1216.
24. Anon. Guidelines for the use of growth hormone in children with short stature: a report by the Drug and Therapeutics Committee of the Lawson Wilkins Pediatric Endocrine Society. J Pediatr 1995;127:857–867.
25. Grace F, Baker JS, Davies B. Anabolic androgenic steroid (AAS) use in recreational gym users. J Subst Use 2001;6:189–195.
26. Grace F, Baker JS, Davies B. Blood pressure and rate pressure product response in males using high‐dose anabolic androgenic steroids (AAS). J Sci Med Sport 2003;6:307–312.

Articles from Journal of Medical Ethics are provided here courtesy of BMJ Group
