
May 13, 2014

Magna Cortica

One of the projects I worked on for the Institute for the Future's 2014 Ten-Year Forecast was Magna Cortica, a proposal to create an overarching set of ethical guidelines and design principles to shape the ways in which we develop and deploy the technologies of brain enhancement over the coming years. The forecast seemed to strike a nerve for many people -- a combination of the topic and the surprisingly evocative name, I suspect. Alexis Madrigal at The Atlantic Monthly wrote a very good piece on the Ten-Year Forecast, focusing on Magna Cortica, and Popular Science subsequently picked up on the story. I thought I'd expand a bit on the idea here, pulling in some of the material I used for the TYF talk.

As you might have figured, the name Magna Cortica is a direct play on the Magna Carta, the so-called charter of liberties from nearly 800 years ago. The purpose of the Magna Carta was to clarify the rights that should be more broadly held, and the limits that should be placed on the rights of the king. All in all a good thing, and often cited as the founding document of a broader shift to democracy.

The Magna Cortica wouldn’t be a precise mirror of this, but it would follow a similar path: the Magna Cortica project would be an effort to make explicit the rights and restrictions that would apply to the rapidly-growing set of cognitive enhancement technologies. The parallel is inexact, but it is important: while the crafters of the Magna Carta feared what might happen should the royalty remain unrestrained, those of us who would work on the Magna Cortica project do so with a growing concern about what could happen in a world of unrestrained pursuit of cognitive enhancement. The closer we look at this path of development, the more reasons we see to be cautious.

Of course, we have to first acknowledge that the idea of cognitive enhancement isn’t a new one. Most of us regularly engage in the chemical augmentation of our neurological systems, typically through caffeinated beverages. And while the value of coffee and tea includes wonderful social and flavor-based components, it’s the way that consumption kicks our thinking into high gear that usually gets top billing. This, too, isn’t new: many scholars link the emergence of so-called “coffeehouse society” with the onset of the Enlightenment.

But if caffeine is our legacy cognitive technology, it has more recently been overshadowed by the development of a variety of brain-boosting drugs. What’s important to recognize is that these drugs were not created to make the otherwise-healthy person smarter; they were created to provide specific medical benefits.

Provigil and its variants, for example, were invented as a means of treating narcolepsy. Like coffee and tea, Provigil keeps you awake; unlike caffeine, however, it’s not technically a stimulant. Clear-headed wakefulness is itself a powerful boost. But for many users, Provigil also measurably improves a variety of cognitive processes, from pattern recognition to spatial thinking.

Much more commonly used (and, depending upon your perspective, abused) are the drugs devised to help people with attention-deficit disorder, from the now-ancient Adderall and Ritalin to more recent drugs like Vyvanse. These types of drugs are often a form of stimulant -- usually part of the amphetamine family, actually -- but have the useful result of giving users enhanced focus and greatly reduced distractibility.

These drugs are supposed to be prescribed solely for people who have particular medical conditions. The reality, however, is that the focus-enhancing, pattern-recognizing benefits don’t just go to people with disorders -- and these kinds of drugs have become commonplace on university campuses and in the research departments of high-tech companies around the world.

Over the next decade, we’re likely to see the continued emergence of a world of cognitive enhancement technologies, primarily but not exclusively pharmaceutical, increasingly intended for augmentation and not therapy. And as we travel this path, we’ll see even more radical steps, technologies that operate at the genetic level, digital artifacts mixing mind and machine, even the development of brain enhancements that could push us well beyond what’s thought to be the limits of “human normal.”


For many of us, this is both terrifying and exhilarating. Dystopian and utopian scenarios clash and combine. It’s a world of relentless competition to be the smartest person in the room, and of unprecedented abilities to solve complex global problems. A world where the use of cognitive-boosting drugs is considered as much of an economic and social necessity as a present-day smartphone, and one where the diversity of brain enhancements allows us to see and engage with social and political subtleties that would once have been completely invisible. It's the world I explored a bit in my 2009 article in The Atlantic Monthly, "Get Smarter."

And such diversity is already in play, from so-called “exocortical” augmentations like Google Glass to experimental brain implants to ongoing research into enhancing or altering forms of social and emotional expression, including anger, empathy, even religious feelings.

There’s enormous potential for chaos.

There are numerous questions that we’ll need to resolve, dilemmas that we'll be unable to avoid confronting. Since this project may also be seen as a cautious “design spec,” what would we want in an enhanced mind? What should an enhanced mind be able to do? Are there aspects of the mind or brain that we should only alter in case of significant mental illness or brain injury? Are there aspects of a mind or brain we should never alter, no matter what? (E.g., should we ever alter a person’s sense of individual self?)

What are the rights and responsibilities we would have to the non-human minds that would be enhanced, and potentially created, along the way to human cognitive enhancement? Animal testing would be unavoidable. What would we owe to rats, dogs, apes, and the like with potentially vastly increased intellect? Similarly, whole-brain neural network simulations, like the Blue Brain project, offer a very real possibility of the eventual creation of a system that behaves like -- possibly even believes itself to be -- a human mind. What responsibilities would we have towards such a system? Would it be ethical to reboot it, to turn it off, to erase the software?

The legal and political aspects cannot be ignored. We would need extensive discussion of how this research will be integrated into legal frameworks, especially with the creation of minds that don’t fall neatly into human categories. And as it’s highly likely that military and intelligence agencies will have a great deal of interest in this set of projects, the role that such groups should have will need to be addressed -- particularly once a “hostile actor” begins to undertake similar research.

Across all of this, we'd have to take into account practices and developments that are not currently thought to be near-term feasible, such as molecular nanotechnologies, as well as techniques not yet invented or conceived. How can we make rules that apply equally well to the known and the unknown?

All of these would be part of a Magna Cortica project. But for today, I’d like to start with five candidates for inclusion as basic Magna Cortica rights, as a way of… let’s say nailing some ideas to a door.

  1. The right to self-knowledge. Likely the least controversial, and arguably the most fundamental, this right would be the logical extension of the quantified self movement that's been growing for the last few years. As the ability to measure, analyze, even read the ongoing processes in our brains continues to expand, the argument here is that the right to know what’s going on inside our own heads should not be abridged.

    Of course, there’s the inescapably related question: who else would have the right to that knowledge?

  2. As the Maker movement says, if you can’t alter something, you don’t really own it. In that spirit, it’s possible that a Magna Cortica could enshrine the right to self-modification. This wouldn’t just apply to cognitive augmentation, of course; the same argument would apply to less practical, more entertainment-oriented alterations. And as we’ve seen around the world over the last year, the movement to make such things more legal is well underway.

  3. The flip side of the last right, and potentially of even greater sociopolitical importance, is a right to refuse modification. To just say no, as it were. But while this may seem a logical assertion to us now, as these technologies become more powerful, prevalent, and important, refusing cognitive augmentation may come to be considered as controversial and even irresponsible as the refusal to vaccinate is today. Especially in light of…

  4. A right to modify or to refuse to modify your children. It has to be emphasized that we already grapple with this question every time a doctor prescribes ADHD drugs, when both saying yes and saying no can lead to accusations of abuse. And if the idea of enhancement, rather than therapy, for children seems beyond the pale, I’d invite you to remember Louise Brown, the first so-called “test tube baby.” The fury and fear accompanying her birth in 1978 are astounding in retrospect; even the co-discoverer of the structure of DNA, James Watson, thought her arrival meant "all Hell will break loose, politically and morally, all over the world." But today, many of you reading this know someone who has used in-vitro fertilization, have used it yourself, or may even be a product of it.

  5. Finally, there’s the potential right to know who has been modified. This suggested right tends to elicit immediate visions of torches and pitchforks, but we can easily flip that script. Would you want to know if your taxi driver was on brain boosters? Your pilot? Your child’s teacher? Your surgeon? At the root of all of this is the unanswered question of whether being identified as having an augmented mind would be seen as something to be feared… or something to be celebrated.

And here again we encounter the terrifying and the exhilarating: we are almost certain to be facing these questions, these crises and dilemmas, over the next ten to twenty years. As long as intelligence is considered a competitive advantage in the workplace, in the labs, or in high office, there will be efforts to make these technologies happen. The value of the Magna Cortica project would be to bring these questions out into the open, to explore where we draw the line that says “no further,” to offer a core set of design principles, and ultimately to determine which pathways to follow before we reach the crossroads.


November 29, 2011

The Prevail Project

Joel Garreau has one of the most sensitive radars for big changes of anyone I know. I first met him back at GBN, and I quickly came to realize that I should pay very close attention to whatever he's thinking about or working on -- and what he's working on now is definitely worth the time to check out.

The "Prevail Project" (named for one of the scenarios in his book Radical Evolution) at the Sandra Day O'Connor College of Law at Arizona State University is an attempt to draw together people thinking about -- and building -- a livable human future, one that uses (but is not dominated by) transformative technologies.

Joel's statement in the press release sums up his perspective:

"Prevailproject.org will be a place for everybody from my mother to technologists inventing the future to grapple with some of the most pressing questions of our time: How are the genetics, robotics, information and nano revolutions changing human nature, and how can we shape our own futures, toward our own ends, rather than being the pawns of these explosively powerful technologies?” said Joel Garreau, the Lincoln Professor of Law, Culture and Values at the Sandra Day O’Connor College of Law at Arizona State University, and director of The Prevail Project: Wise Governance for Challenging Futures.

“The Prevail Project is a collaborative effort, worldwide, to see if we can help accelerate this social response to match or exceed the pace of technological change,” Garreau said. “The fate of human nature hangs in the balance.”

I'll set aside my resistance to the traditional "social response to technological change" model to celebrate the placement of this project in the Law School, and not as part of the school of engineering or some technical discipline. It's far too common to see these issues dominated by technologists (and technology-fetishists) with little understanding of law and culture; it's vital to get a more sophisticated understanding of society into the conversation.

As the Prevail Project kicks off its public unveiling, it has invited a set of writers to offer up their thoughts on what it means to "prevail" in a transformative future. Bruce Sterling's essay went up yesterday; mine went up today.

May 8, 2010

Our Posthuman Present

Annalee Newitz at io9.com asked me to contribute something to their "Posthumanity Week" series, and -- despite being in the middle of a conference a couple thousand miles from home -- I agreed. My piece went live today under the title "Your Posthumanism is Boring Me."

"Posthuman" is a term with more weight than meaning; it's used variously to describe people with altered genomes, people with implanted machinery, people with lifespans measured in millennia, and a whole host of descriptors that ultimately boil down to "not us, not now." Enthusiasts and critics alike embrace the term precisely because it advances the argument that the Augmented is the Other - and either an aspiration or a nightmare, as a result. It doesn't illuminate, it disturbs.

But as augmentations move from the pages of a science fiction story to the pages of a catalog, something interesting happens: they lose their power to disturb. They're no longer the advance forces of the techpocalypse, they're the latest manifestation of the fashionable, the ubiquitous, and the banal. They're normal. They're human.

I've done variations of this rant before, but I think it's a pretty important concept. It serves us little good to think of plausible future changes solely in the present-day context. To really understand their impact, we have to imagine their role in a world that actually sees them as boring.

(And, as I said to Annalee, holy crap that's a big picture of me they're using as an illustration for the piece.)

June 16, 2009

Get Smart(er)

Big Media #2, my Atlantic Monthly article, hit the web today: Get Smarter (or "Get Smart" in the print edition).

Our present century may not be quite as perilous for the human race as an ice age in the aftermath of a super-volcano eruption, but the next few decades will pose enormous hurdles that go beyond the climate crisis. The end of the fossil-fuel era, the fragility of the global food web, growing population density, and the spread of pandemics, as well as the emergence of radically transformative bio- and nanotechnologies—each of these threatens us with broad disruption or even devastation. And as good as our brains have become at planning ahead, we’re still biased toward looking for near-term, simple threats. Subtle, long-term risks, particularly those involving complex, global processes, remain devilishly hard for us to manage.

But here’s an optimistic scenario for you: if the next several decades are as bad as some of us fear they could be, we can respond, and survive, the way our species has done time and again: by getting smarter. But this time, we don’t have to rely solely on natural evolutionary processes to boost our intelligence. We can do it ourselves.

This article brings together a number of the themes that infuse my work, from augmentation to environmental threats to the need to have a hand in shaping our own futures. There are a few lines, here and there, that long-time readers will recognize, but there's a lot of new stuff, too, ideas and arguments I've wanted to explore, but have been waiting for this to hit before doing so.

It's been a long wait. A little less than a year ago, Atlantic Monthly editor Reihan Salam asked me to write a piece for the magazine. Initially aimed at the November 2008 issue, it was to be a fairly direct reply to Nick Carr's "Is Google Making Us Stupid?" article of the July/August 2008 issue. Little things like a historic election intervened, however, and my article got bumped; it resurfaced this Spring, when Reihan brought on a terrific editor, James Gibney, to shepherd it through to print.

I'm very happy with the result, and I greatly look forward to hearing your responses and critiques.

May 7, 2009

Me++


My latest Fast Company column is now up: "Should Creative Workers Use Cognitive-Enhancing Drugs?" (originally entitled "Me++").

We may face a choice between altering our brain chemistries and falling behind in the global economy.

And with that altered brain chemistry, are we sure that we're not losing something? Many of the cognitive enhancement drugs serve to increase focus and concentration. But "letting your mind wander" is very often an important part of the creative process. The "aha!" experience comes from the brain making connections between superficially unrelated subjects, and identifying a deeper link. How do enhancements that focus our attention affect this process? Is it possible that cognitive drugs enhance one aspect of knowledge work--productivity--while diminishing another--creativity?

Conversely, to what degree is the uproar over modafinil, Ritalin, and the like just another example of futurephobia? There's a phrase I sometimes use when talking about this kind of issue: "technology" is anything invented after you turn 13. That is, we tend to think of new innovations as "technology," and hence disruptive, while ignoring older innovations that have become embedded in our larger environment, no matter how much they shape our lives.

Having been down with the flu for the past couple of weeks, with all of the brain-fogginess that entails, I've definitely had cognitive enhancement on my mind.

March 21, 2009

Laws of Robotics

Here's a sneak preview of the talk I'll be giving tomorrow.

January 22, 2009

Boosting Your Brain for Fun and Profit

A diverse assortment of legal, bioscience, psychology, and ethics academics argue in the pages of Nature for:

  • ...a presumption that mentally competent adults should be able to engage in cognitive enhancement using drugs.
  • ...an evidence-based approach to the evaluation of the risks and benefits of cognitive enhancement.
  • ...enforceable policies concerning the use of cognitive-enhancing drugs to support fairness, protect individuals from coercion and minimize enhancement-related socioeconomic disparities.
  • ...a programme of research into the use and impacts of cognitive-enhancing drugs by healthy individuals.
  • ...physicians, educators, regulators and others to collaborate in developing policies that address the use of cognitive-enhancing drugs by healthy individuals.
  • ...information to be broadly disseminated concerning the risks, benefits and alternatives to pharmaceutical cognitive enhancement.
  • ...careful and limited legislative action to channel cognitive-enhancement technologies into useful paths.
You might not think this is a terribly controversial idea, but it is -- remember, drugs are bad, m'kay? As far as I can tell, that's the core of the argument against the use of enhancement biochemistry. If cognitive enhancement came about through education, through computer use, or even through some less-conventional methods like meditation and yoga, the arguments would be about how to increase access, not prevent it.

The notable element here is that this argument is appearing in the pages of Nature, pretty much the biggest name in science journals. That doesn't mean that such proposals are likely to be adopted any time soon, but it does mean that they're starting to receive mainstream attention -- or, to be precise, more mainstream attention. Recall that TechCrunch reported that cognitive enhancement drugs were becoming all the rage in Silicon Valley. I can't imagine that, in a rougher economic environment, these executives and programmers are going to rely less on such assistance.

Here's a bit of what I wrote about the phenomenon in the latest draft of the Atlantic article (which now looks like it'll have a summer publish date, meaning it will go through yet another round of big edits and rewrites).

This is one way a world of intelligence augmentation emerges. Little by little, people who don't know about drugs like modafinil (or don’t want/can't afford to use them) will find themselves facing greater competition from the people who do. [...]

But these are primitive enhancements. As the science improves, we could see other kinds of cognitive modification drugs, boosting recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but would end up being used to make users "better than normal." Eventually, some of these may end up as over-the-counter products, for sale at your local pharmacy, or on the juice and snack aisle at the supermarket. Spam email would be full of offers to make your brain bigger, and your idea production more powerful.

Such a future would bear little resemblance to "Brave New World" or similar narcomantic nightmares; we may fear the idea of a population kept doped and placated, but we're more likely to see a populace stuck on overdrive, searching out the last bit of competitive advantage, business insight, and radical innovation. No small amount of that innovation would be directed towards inventing the next, more powerful, cognitive enhancement technology.

Cognitive enhancement drugs may be primitive for now, but they're here -- and in increasing use. It would be painfully irresponsible to think that it's a fringe issue, and to continue to pretend that prohibition is a reasonable response.

The series of proposals in the Nature article strikes me as eminently reasonable, cautious, and forward-looking. I'm trying hard not to be cynical about their likelihood of implementation. Maybe they should start working on optimism-enhancement technologies, too.

May 1, 2008

Remaking the Athlete, Remaking the Culture

Discussions of the implications of augmenting our biological bodies with prosthetic technologies can be found quite readily in the esoteric discourses of self-described transhumanists, social theorists, and bioethicists. One might be forgiven for imagining that such talk is less common among sports fans, who are more concerned with the latest scores and statistics. But the cover story of the current ESPN Magazine, "Let 'Em Play," not only explores the bigger issues surrounding the integration of augmentation into our culture, but (as the article title suggests) adopts a clearly pro-prosthetic perspective. Given the sports panics around doping, this isn't just enlightened, it's brave.

This isn't just a story about Oscar Pistorius, although his aborted effort to reach the Olympics -- shut down not because he wasn't good enough, but because the International Association of Athletics Federations feared that he'd soon be too good -- is clearly its catalyst. The author, Eric Adelson, looks at a cross-section of prosthetic enhancements, some allowable, some not, and notes that this wouldn't be the first time that international athletics shied away from an advance. In many cases, reality forced athletics culture to change:

Every organized sport begins the same way, with the creation of rules. We then establish technological limits, as with horsepower in auto racing, stick curvature in hockey, bike weight in cycling. As sports progress, those rules are sometimes altered. The USGA, for instance, responded to advances in club technology by legalizing metal heads in the early '80s. In Chariots of Fire, the hero comes under heavy scrutiny for using his era's version of steroids: a coach, at a time when the sport frowned upon outside assistance. So if we can adjust rules of sports to the time, why not for prosthetics?

This story has emerged at a crucial time for augmentative technologies. We have, simultaneously, passionate laments on television and in the halls of Congress about steroid scandals in baseball, and a rapid proliferation of cognitive-enhancing drugs in schools and in the workplace. For a moment, it seemed like the Western reaction to enhancement technologies would mirror the US schizophrenia around recreational drugs: widespread use alongside widespread condemnation. With the Pistorius story, and the growing recognition of the diversity of prosthetic technologies, we may not be able to so easily categorize such enhancements as "good" and "bad," "acceptable" and "unacceptable."

That this is happening in the world of sport is even more important than its timing. As long as arguments about augmentation and prosthetics remained focused on emerging bioscience, abstract notions of "human dignity," and imagined scenarios of war between the enhanced and unenhanced, most people (to the extent they were even aware of the issues) would see them as pointless irrelevancies or, worse still, science fiction. But with the epicenter of the dilemma now a cultural arena that cuts across social, geographic, and political divisions, arguments about augmentation and prosthetics will be inescapable. ESPN isn't a niche sub-culture; it's a common language.

For those of us who have been talking about the emerging questions around augmentation technologies, "Let 'Em Play" (along with its two companion pieces, "The Disadvantage Advantage" and "Anything You Can Do...," a photo gallery of augmented athletes) offers a useful, powerful, and above all meaningful framing of the issue for people who might not even be aware that there is an issue.

(Disclaimer: A producer for ESPN Magazine interviewed me several months ago on a related topic, and the conversation drifted into these particular issues. I'm not cited in the article, but I wouldn't be surprised if lots of people at the magazine are wrestling with this subject.)

March 27, 2008

Please Don't Kick the Robots

If you follow the futures blogosphere at all -- or just read BoingBoing -- you've undoubtedly seen this video of the "packbot" called Big Dog:

It's an interesting prototype, and a telling example of how rapidly we're moving into the robotic age. The use of four legs for mobility gives it a particularly sci-fi appearance -- as if, at any moment, a tiny flying drone could show up and wrap a cable around its legs. Its walking pattern is distinctly mechanical, except under a particular condition: when it's in trouble, at which point it moves its legs around, trying to stay up, in an eerily animal-like way. I found Big Dog's efforts to recover from slipping on the ice fascinating. But I had a somewhat different reaction to its efforts to recover from being kicked: I felt a bit sick.

My reaction to seeing this robot kicked paralleled the one I would have had if I'd seen a video of a pack mule or a real big dog being kicked like that, and (from anecdotal conversations) I know I'm not the only one with that kind of immediate response. True, the shocked feeling wasn't nearly as strong as it would have been with a real animal, but it was definitely of the same character. It simply felt wrong.

I had a similar reaction when I learned that the "Pleo" robot dinosaur toy reacts to being picked up by the tail by crying out in apparent distress.

Pleo is also capable of getting upset—when you hold him upside down by his tail, Pleo lets out a panicky wail until you put him down on his feet.

This is where the emotional pull of Pleo—not in him, but in you—is apparent, because once placed safely on a flat surface, Pleo knows how to lay a guilt trip. Like a dog that has just been beaten, Pleo's tail trembles and goes down between his legs, all while he hangs his head and makes noises like a baby dinosaur sobbing. Oh, Herbert, I never meant to hold you upside down all those times. Please forgive me.

Like the author of the review above, I find that my immediate, gut response mirrors what I would feel for a living animal. Intellectually, I know that it's a simple machine without any actual sense of pain or fear; emotionally, it's horrifying.

This response is, at least to an extent, hard-wired -- most of us react to the sight of an animal in distress with empathy for the creature and, if applicable, disgust for the person abusing it. Psychologists have long recognized that humans without this empathy for non-human animals are more likely to be abusive to other people. The behaviors of these robots -- the scrambling legs, the desperate cries -- mirror real animal behavior closely enough, at least for some of us, to elicit this same kind of empathy.

Some of this "mirror empathy" comes from the robots being biomorphic, that is, having animal-like appearances. Even if a Roomba emitted panicky squeaks and flashed its lights when turned upside-down, for example, few of us would react as we would to seeing a turtle on its back. There's no biomorphism to the Roomba. And that's probably a good thing. After all, it's trying to carry out a particular task efficiently, and it probably wouldn't work as well if people constantly picked it up because it was so cute.

It strikes me that there's a likely split coming in the near-term evolution of robots that share human environments. Some robots, those meant to interact on a regular basis with humans, will likely take on stronger biomorphic appearances and behaviors, usually in order to deter abusive treatment. A small number of robots, intended to provide emotional support to the injured or depressed, may have human-like appearances. Other robots, meant to work more-or-less out of sight, will probably take on more camouflaged appearances, trying to avoid being noticed.

Note the "usually" above. I would expect that some human-interactive robots will be designed with biomorphic cues meant to elicit a response other than empathy. Fear, for example: a robot that triggers deeply-rooted responses to (say) spiders or snakes may be a better tool for the police or military than one that makes people think of puppies or ponies. Such a design wouldn't necessarily undermine its interactions with military or police units; we know that soldiers already have strong emotional attachments to completely non-biomorphic, remote-control robots.

I don't think it's likely that we'll stop having these kinds of emotional reactions to biomorphic (in appearance and/or behavior) robots. I think it's rather healthy that we do, actually. For one, it's an indicator that our sense of empathy remains strong and sensitive, and that seems quite a good thing. Another reason, however, is a bit more speculative. At some point, whether in the next decade or the next century, we're likely to develop robots that really won't like being kicked. I'd rather not have them start to want to kick back.

January 14, 2008

"Techno-Doping" and the New Olympics

Oscar Pistorius, AKA "Blade Runner" -- the South African sprinter who uses carbon fiber prosthetics in place of the lower legs he lost to amputation as a child -- has officially lost his bid to run in the 2008 Olympics. He's going to make one last appeal to the International Association of Athletics Federations, but his chances of success are slim. The official reason, according to the BBC:

"...his prosthetic limbs give him an advantage over able-bodied opponents..."

For now, Pistorius' artificial legs make him fast, but still human-fast (he came in second at a recent meet); although his prosthetics reduce his energy requirements by 25%, he has yet to hit the qualifying speed for the 400m race. It's entirely possible that, even had the IAAF accepted his bid, he wouldn't have made it to this Olympics.

But it's also entirely possible that, in 2012, he'd be breaking records right and left. And shortly thereafter, he wouldn't be alone in doing so.

Technological augmentation is evolving faster than natural human biology, and it's clear that it won't be long until these physical enhancements completely out-class natural human athletic capabilities. The growing likelihood that, within the next decade, the fastest humans alive will be "disabled" holds the potential for profound "future shock." As I wrote last year (in "The Accidental Cyborg"), young athletes facing the choice between rehabilitation and amputation for leg injuries are starting to pick amputation, knowing that the prosthetics could be an improvement, not an impairment.

One of the arguments against doping in sports is that it puts young athletes in the position of choosing between potentially injuring their bodies or accepting a serious disadvantage. Don't be surprised if someone starts making the same argument about amputation. Once augmented athletes start breaking records, will desperate-to-win young men and women consider intentionally injuring their legs in order to get access to prosthetic augmentations? With people already talking about "techno-doping," this question seems painfully close to an answer.

Moreover, what happens if the "Paralympics" -- the competition for disabled athletes -- becomes an arena for the best runners (and more?) in the world? Would there be a need for a "Supralympics" for technology-enhanced competitors? Would that become a home for "gene-doping," or even some forms of traditional, biochemical enhancement?

Could the Olympics of (say) 2020 be the same kind of sideshow as today's Paralympics, with all of the advertising and attention going to the super-athletes doing things that everyday humans couldn't imagine?

July 15, 2007

Blade Runner

Oscar Pistorius is the South African sprinter I mentioned in my piece The Accidental Cyborg -- a world-class runner who happens to have artificial lower legs and feet. Because of the shape of his carbon fiber prosthetics, his nickname is now "Blade Runner."

Here's the Blade Runner in action last Friday:

He was profiled in the Financial Times last week, in a piece that highlights many of the dilemmas arising from the intersection of augmentation technology and sports, and explores what might come next.

Laboratory experiments with genetic implants on mice have produced massive muscle growth, and it is only a matter of time before such (perfected) experiments will be enacted on humans.

Precedent suggests that sports competitors will be the first to try them. Power-to-weight ratios will then go haywire, and world records could be reduced by 10 per cent. And who will know? It is difficult enough at present to test for an excess of naturally occurring body chemicals, such as testosterone.

If we don't see gene-doping in 2008, we'll almost certainly see it by the 2012 Olympics. The next sports arms race may well be between athletes with enhanced genomes and athletes with super-prosthetics.

June 12, 2007

The Accidental Cyborg

Let me tell you, being a cyborg isn't all it's cracked up to be. But it might be, sooner than you expect.

The popular image of a "cyborg" may be a mash-up of the character from Teen Titans and the Six Million Dollar Man (the balance of each depending upon how old you are), but the reality is not nearly so exciting. The truth is, we've had people with cybernetic prosthetics for quite some time, and the number is growing quickly. They're not action heroes (by and large); they're people all too often casually dismissed as "disabled."

But the demographics of the disabled are changing, as is the power of assistive technologies. And these changes have serious implications both for the role and visibility of the disabled in Western society and for the ongoing debate between augmentation as "therapy" and augmentation as "enhancement."

I speak from personal experience on this one; I recently joined the ranks of the cyborgs. A few years ago, after noticing that my hearing seemed degraded, I saw an audiologist. His diagnosis wasn't encouraging: definite hearing loss, most likely congenital and almost certain to continue to degrade. At that point, it wasn't quite bad enough to require hearing aids. I went in for a new examination last month, and got the news: I really should be using hearing aids, at least if I wanted to stop annoying loved ones, friends and colleagues with my incessant "excuse me?" and "I'm sorry..." requests for repetition. After a few fittings and follow-ups, I got my new hearing assistance devices this week, and I'm wearing them right now.

These aren't just dumb amplifiers; they're little digital signal processors, small enough to fit into the ear canal, and smart enough to know when to boost the input and when to leave it alone. They're programmable, too (sadly, not by the end-user -- programming requires an acoustic enclosure, not just a computer connection). And here's where therapeutic augmentation starts to fuzz into enhancement: one of the program modes I'm considering would give me far better than normal hearing, allowing me to pick up distant conversations as if I were standing right there.
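To make the "boost the quiet, leave the loud alone" trick concrete, here's a minimal sketch of level-dependent gain -- the core idea behind the dynamic range compression these devices perform. To be clear, this is my own illustration in Python, not any manufacturer's actual algorithm; the function name, thresholds, and window size are all invented for the example.

    import numpy as np

    def adaptive_gain(samples, rate, quiet_db=-50.0, loud_db=-10.0,
                      max_boost_db=25.0, window_s=0.01):
        """Boost quiet passages more than loud ones (level-dependent gain).

        Illustrative only: thresholds and window size are arbitrary choices.
        """
        samples = np.asarray(samples, dtype=float)  # audio in [-1.0, 1.0]
        out = np.empty_like(samples)
        win = max(1, int(rate * window_s))
        for start in range(0, len(samples), win):
            chunk = samples[start:start + win]
            # Estimate the loudness of this short window, in decibels.
            rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
            level_db = 20 * np.log10(rms)
            # Full boost below quiet_db, tapering to no boost at loud_db.
            t = np.clip((loud_db - level_db) / (loud_db - quiet_db), 0.0, 1.0)
            gain = 10 ** (t * max_boost_db / 20)
            out[start:start + win] = chunk * gain
        return np.clip(out, -1.0, 1.0)  # keep the result within full scale

A real hearing aid does this separately across multiple frequency bands, in real time, with smoothing to avoid audible "pumping" -- but the principle is the same: the gain is a function of the incoming signal level, not a fixed volume knob.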

They're not without their drawbacks. They're somewhat uncomfortable -- not painful, but impossible to ignore. The quality of the sound I get through the devices will take some getting used to; the size of the speaker limits just how clear the sound can be, I'm told. And, as far as I can tell, the electronics in these things change very slowly, Moore's Law be damned. I think there's a generational issue here: up until recently, most people wearing hearing aids came from the pre-computer era, and expected to pay outrageous prices for technologies of just-good-enough quality (especially medical technologies). As more Baby Boomers -- and those of us younger than the Boomers -- start to require augmentation technologies, the manufacturers will increasingly see demand for greater quality and faster improvement.

A few hearing aid companies are beginning to see the light. Oticon, for example, offers a model of hearing aid with built-in Bluetooth to make mobile phone calls easier. They don't come cheap, though -- just about $3,000 per ear. The first hearing aid company to act like a computer industry player instead of a medical tech industry player will make millions from the aging-but-tech-savvy.

The transformation of augmentation technology from pure therapy to a mix of therapy and enhancement is more visible in other types of prosthetics, however. Both New Scientist and the New York Times had recent articles about the remarkable capabilities of new prosthetic technologies, with particular attention to cutting-edge models of artificial legs. In New Scientist (subscription required), the focus is on digital prosthetics offering new ways to compensate for disabilities:

[MIT researcher Hugh] Herr, who has made it his life's work to design improved prosthetic legs, is being funded by the US Department of Veterans Affairs to work on a prosthetic ankle that returns more energy in each stride. Inside each prosthetic are battery-powered motors that do a similar job to muscles. Last week, he wore two of these brand-new ankles for the first time. "It was absolutely amazing," he says. "It's like hitting the moving walkway at the airport." People wearing the new prosthetic have been shown to expend 20 per cent less energy when walking than with a standard prosthetic, and Herr says their gait also looks completely natural.

(Note the mention of the Department of Veterans Affairs. The Iraq War has become a significant catalyst in the rapid improvement of prosthetic technologies. The vast improvements in battlefield medicine that allow far more of the wounded to survive have correspondingly meant that far more casualties come back with significant disabilities, including limb loss. As of January, 500 soldiers had undergone major amputations, a rate double that of previous wars.)

The sports implications of these new prosthetics haven't gone without notice. As the New York Times describes, South African Oscar Pistorius runs fast enough on his unpowered carbon fiber artificial legs that he's in contention for the 2008 Olympics -- if the International Olympic Committee will let him in. The IOC fears that continued advances in prosthetic technology will lead to "disabled" runners beating the all-natural runners in the not too distant future. They're right to be concerned; the most compelling part of the New Scientist article had to be this brief section:

Herr mentions a 17-year-old girl who has decided to go ahead with an operation to amputate a damaged leg because, he says, she thinks a new prosthetic will give her more athletic ability than she has now. For his own part, Herr claims he would not swap his prosthetic legs for natural legs, even if he could. "Would you buy a computer system if you were told you couldn't upgrade it for 50 years?" he says.

Herr's comment is eerily similar to an observation I made about why implanted computer systems were unlikely. We've seen such remarkable change in computer technology in such a short time, it's hard to imagine wanting to remain stuck with a rapidly-obsolescing model. But in a world of augmentation, is the biological body just another dead-end technology?

Or, to make this more personal: I expect that, over the next decade, hearing aid technologies will have improved enough that most of the drawbacks will have been rectified, and I'll have access to hearing capabilities better than ever before; over that same time, we may see biomedical advances that can fix deficient hearing, restoring perfectly functional natural hearing. Augmentation for therapy slides inexorably into augmentation for enhancement. Should I give up my better-than-human hearing to go back to a "natural" state?

This changing perception of both disability and augmentation is summed up in an amazing picture of Sarah Reinertsen, taken by Stephanie Diani for a Times article about prosthetic fashion (and I strongly encourage you to click through to the full-size picture). Her artificial leg has no pretense of biology, yet is clearly part of her. It's not simply a prop to help her live a just-good-enough life; it's an augmentation that will only get better as the months and years pass. I doubt she looks with envy at the women with two "normal" legs on the dance floor; I suspect we're not too far away from a time when those women will look with envy at her.

October 6, 2006

Implant Rejection

How'd you like a computer in your head?

Brain implants are staples of both science fiction and speculative conversations about the future. I noted a few months ago that a surprisingly large portion of the Metaverse Roadmap crowd considered brain implants the logical extension of the virtual world-real world crossover. The worlds of novels like Neuromancer and games like Transhuman Space are filled with jacked-in, chipped-up citizens. I've even raised the possibility in some of my talks about the later stages of the Participatory Panopticon.

This week, James Hughes, my colleague at the Institute for Ethics and Emerging Technologies, was quoted by the Saint Petersburg Times in an article about the potential for implanted communication devices. Given his visibility as a proponent of transhumanism, you might expect that he'd be all for getting wired up. He's not -- he's rather cautious, in fact.

"We're moving inside" the body with cell phones, said James Hughes, a bioethicist and sociologist at Trinity College in Hartford, Conn., and author of Citizen Cyborg. "My opinion is it is realistic. But for at least a couple of decades, I don't think it's going to be terribly attractive to open up our heads."

I'd go further than that. Until we reach a stage where nano-magical systems can rewire our brains at will, I don't see non-therapeutic brain implants ever becoming popular. Cortical implant systems to deal with severe physiological disabilities are already available, and such devices will just get better and more widely available. But voluntary brain implants for enhancement purposes? Count me as a nay-sayer, for reasons that should be familiar to anyone who has purchased consumer electronics.

The first is stability: I have yet to encounter a piece of electronic gear or computer hardware that doesn't crash on a semi-regular basis. Would you want a chip in your head made by the same folks that made your cell phone? How about having your brain run Windows, or even Linux? Even if we assume that implanted devices are built to higher standards than something you'd pick up at Best Buy, you're still left with the uncomfortable knowledge that even high-end, military-grade systems can and will have flaws. These are complex devices; I don't want to have to ctrl-alt-del my brainjack, let alone deal with an all-too-plausible "fatal error."

If you're going to connect something directly to your brain, you really want it to work.

The second reason is upgradability: unless or until we live in a world of risk-free, cheap, and easy brain surgery, once you get something implanted, it's going to stay there for a while. Hughes alludes to this; brain surgery isn't something you can wander into the shopping mall and deal with over lunch. It's a major bodily trauma, and certainly not something you'd want to do over and over again. Unfortunately, in a world of ongoing Moore's Law acceleration of technological power, today's cutting-edge implant is tomorrow's obsolete piece of junk -- and good luck if the protocols change or you're on the wrong side of a "format war" (anyone want a Betamax implant?). There's no way to avoid falling further and further behind without going under the knife time and again.

But how many of us would want to be stuck with the computer or mobile phone we had five or ten years ago?

Fortunately, this is all a solved problem. We can use external information and communication devices -- Hughes refers to these as "exocortical technology," but you can just think of them as "the stuff you already have." If a phone or computer crashes, it's an annoyance but rarely fatal, and upgrading can be done as often as one can afford.

Implanted computers are a staple of science fiction in large part because they admirably provide one of the key tensions of the genre: a vision of tomorrow that is simultaneously compelling and disturbing. Plugging something into the brain raises all sorts of questions about safety and control -- what does happen if a brain implant fails? How long after the first brain computer is out will we see the first brain computer rootkit? At the same time, fictional implants show a world in which communication and information technologies are quite literally a part of us -- an observation of our lives turned into flesh and silicon.

Brain implants in fiction and futurism are, in the end, metaphors, not blueprints.