Jamais Cascio

Photo by Bart Nagel

Interviews and Talks

My Name is Jamais Cascio, and I'm a Futurologist interview for pinITALY
(video)          July 2014

Everything Will Be Alright* interview for documentary series.
(video)          February 2014

Crime and Punishment discussion at Fast Company's Innovation Uncensored
(video)          April 2013

Bots, Bacteria, and Carbon talk at the University of Minnesota
(video)          March 2013

Visions of a Sustainable Future interview
(text)          March 2013
Talking about apocalypse gets dull...all apocalypses are the same, but all successful scenarios are different in their own way.

The Future and You! interview
(video)          December 2012

Bad Futurism talk in San Francisco
(video)          December 2012

Inc. magazine interview
(text)          December 2012
Any real breakthrough in AI is going to come from gaming.

Singularity 1 on 1 interview
(video)          November 2012

Momentum Interview
(text)          September 2012
One hope for the future: That we get it right.

Doomsday talk in San Francisco
(video)          June 2012

Polluting the Data Stream talk in San Francisco
(video)          April 2012

Peak Humanity talk at BIL2012 in Long Beach
(video)          February 2012

Acceler8or Interview
(text)          January 2012
Our tools don't make us who we are. We make tools because of who we are.

Hacking the Earth talk in London
(video)          November 2011

Cosmoetica Interview
(text)          May 2011
The fears over eugenics come from fears over the abuse of power. And we have seen, time and again, century after century, that such fears are well-placed.

Future of Facebook project interviews
(video)          April 2011

Geoengineering and the Future interview for Hearsay Culture
(audio)          March 2011

Los Angeles and the Green Future interview for VPRO Backlight
(video)          November 2010

Surviving the Future excerpts on CBC
(video)          October 2010

Future of Media interview for BNN
(video)          September 2010

Hacking the Earth Without Voiding the Warranty talk at NEXT 2010
(video)          September 2010

Map of the Future 2010 at Futuro e Sostanabilita 2010 (Part 2, Part 3)
(video)          July 2010

We++ talk at Guardian Activate 2010
(video)          July 2010

Wired for Anticipation talk at Lift 10
(video)          May 2010

Soylent Twitter talk at Social Business Edge 2010
(video)          April 2010

Hacking the Earth without Voiding the Warranty talk at State of Green Business Forum 2010
(video)          February 2010

Manipulating the Climate interview on "Living on Earth" (public radio)
(audio)          January 2010

Bloggingheads.TV interview
(video)          January 2010

Homesteading the Uncanny Valley talk at the Biopolitics of Popular Culture conference
(audio)          December 2009

Sixth Sense interview for NPR On the Media
(audio)          November 2009

If I Can't Dance, I Don't Want to be Part of Your Singularity talk for New York Future Salon
(video)          October 2009

Future of Money interview for /Message
(video)          October 2009

Cognitive Drugs interview for "Q" on CBC radio
(audio)          September 2009

How the World Could (Almost) End interview for Slate
(video)          July 2009

Geoengineering interview for Kathleen Dunn Show, Wisconsin Public Radio
(audio)          July 2009

Augmented Reality interview at Tactical Transparency podcast
(audio)          July 2009

ReMaking Tomorrow talk at Amplify09
(video)          June 2009

Mobile Intelligence talk for Mobile Monday
(video)          June 2009

Amplify09 Pre-Event Interview for Amplify09 Podcast
(audio)          May 2009

How to Prepare for the Unexpected Interview for New Hampshire Public Radio
(audio)          April 2009

Cascio's Laws of Robotics presentation for Bay Area AI Meet-Up
(video)          March 2009

How We Relate to Robots Interview for CBC "Spark"
(audio)          March 2009

Looking Forward Interview for National Public Radio
(audio)          March 2009

Future: To Go talk for Art Center Summit
(video)          February 2009

Brains, Bots, Bodies, and Bugs Closing Keynote at Singularity Summit Emerging Technologies Workshop
(video)          November 2008

Building Civilizational Resilience Talk at Global Catastrophic Risks conference
(video)          November 2008

Future of Education Talk at Moodle Moot
(video)          June 2008

G-Think Interview
(text)          May 2008
"In the best scenario, the next ten years for green is the story of its disappearance."

A Greener Tomorrow talk at Bay Area Futures Salon
(video)          April 2008

Geoengineering Offensive and Defensive interview, Changesurfer Radio
(audio)          March 2008

Wired interview
(text)          March 2008
"The road to hell is paved with short-term distractions."

The Future Is Now interview, "Ryan is Hungry"
(video)          March 2008

G'Day World interview
(audio)          March 2008

UK Education Drivers commentary
(video)          February 2008

Futurism and its Discontents presentation at UC Berkeley School of Information
(audio)          February 2008

Opportunity Green talk at Opportunity Green conference
(video)          January 2008

Metaverse: Your Life, Live and in 3D talk
(video)          December 2007

Singularity Summit Talk
(audio)          September 2007

Political Relationships and Technological Futures interview
(video)          September 2007

NPR interview
(audio)          September 2007
"Science Fiction is a really nice way of uncovering the tacit desires for tomorrow...."

Spark Radio, CBC interview
(audio)          August 2007
Spark Radio, part 2 CBC interview
(audio)          August 2007

True Mutations Live! roundtable Part 1
(audio)          July 2007
True Mutations Live! roundtable Part 2
(audio)          July 2007

G'Day World interview
(audio)          June 2007

NeoFiles interview
(audio)          June 2007

Take-Away Festival talk
(video)          May 2007

NeoFiles interview
(audio)          May 2007

Changesurfer Radio interview
(audio)          April 2007

NeoFiles interview
(audio)          July 2006

FutureGrinder: Participatory Panopticon interview
(audio)          March 2006

TED 2006 talk
(video)          February 2006

Commonwealth Club roundtable on blogging
(audio)          February 2006

Personal Memory Assistants Accelerating Change 2005 talk
(audio)          October 2005

Participatory Panopticon MeshForum 2005 talk
(audio)          May 2005

Reminder: Open the Future is on a temporary hiatus while I work on a book. I will post now and again, but may go for a few weeks at a time without updating. If you're new to the site, check out the "Start Here" links to the right. Thanks.

Usefully Wrong

It's a line I've used quite a bit in my talks: "The point of futurism [foresight, scenarios] isn't to make accurate predictions. We know that in details large and small, our forecasts will usually be wrong. The goal is to be usefully wrong." I'm not just pre-apologizing for my own errors (although I do hope that it leaves people less annoyed by them). I'm trying to get at a larger point -- forecasts and futurism can still be powerful tools even without being 100% on-target.

Forecasts, especially of the multiple-future scenario style, force you (the reader or recipient of said futurism) to re-examine the assumptions you make about where things go from here. If your response to a given forecast is "that's bullshit!" you need to be able to ask why you think so. Even if the futurist behind the scenarios leaves out something important, she or he may just as easily have included something that you had ignored. To push this thinking, it's often productive to ask:

  • What would have to happen to make this forecast plausible?
  • What would have to happen to make this forecast impossible (not simply unlikely)?
  • What in this forecast feels both surprising and uncomfortably true?

Thinking deeply about forecasts and futurism can change your perception. Events and developments that you might once have ignored or reflexively categorized take on new meanings and (critically) new implications. You start to think in terms of consequences, not just results. Here you ask:

  • Did I expect that event or development? Why or why not?
  • What should I now be prepared to see happen next?
  • What expected consequences or results did we manage to avoid?

Unfortunately, if you really embrace this kind of thinking, you begin to see on a daily basis just how close we as a planet keep coming to disaster. "Dodging bullets" is the top characteristic of human civilization, apparently. Welcome to my world.

Not Very Uplifting

What responsibility do we have for the things we make?

At its root, this is a fairly straightforward science story. Neuroscience researchers at the University of Rochester and the University of Copenhagen successfully transplanted human glial progenitor cells (hGPCs) into a newborn mouse (here's the technical article in The Journal of Neuroscience, and the lay-friendly version in New Scientist). While glial cells are generally considered support cells in the brain, positioning, feeding, insulating, and protecting neurons, they also help neurons make synaptic connections. The hGPCs out-competed the mouse glial cells, basically taking over that function in the mouse brain, and -- as had been found in similar research (with adult glial cells) -- the mice demonstrated greater intelligence than their unaltered fellows.

So, mice with grafted human brain support cells are smarter than regular mice. The next phase is testing with rats, which start out even smarter. The researchers insist that there's nothing especially human about these altered mice:

"This does not provide the animals with additional capabilities that could in any way be ascribed or perceived as specifically human," he says. "Rather, the human cells are simply improving the efficiency of the mouse's own neural networks. It's still a mouse."

However, the team decided not to try putting human cells into monkeys. "We briefly considered it but decided not to because of all the potential ethical issues," Goldman says.

(...A statement that somewhat undermines his whole "it's still a mouse" argument -- after all, wouldn't it still be a monkey?)

As always, I'm mostly interested in the "what happens next?" question. It's likely that rats with hGPC will show increased intelligence; same with dogs. And just because this set of researchers won't add the hGPC special sauce to monkeys doesn't mean that somebody else won't do it. And maybe even throw in a few neuron precursors for flavor.

But even sticking with hGPCs, the fact remains: we're making these non-human animals demonstrably smarter. We are, in a very limited fashion, uplifting them (to use David Brin's terminology). They will be able to understand the world a bit (or even a lot) better than others of their kind. And at some point, we may well even end up with test subjects significantly smarter than typical and able to demonstrate behaviors unsettlingly close to our own.

What rights should any of these types of uplifted animals have? Do we need to spell out a greater set of rights for the human chimera mice in the news report? Or as we create increasingly more-intelligent-than-typical animals, will there be a point at which they could no longer be limited to the rights given to all scientific research animals? At what point would it become a crime to kill them, no matter how humanely or in accordance with ethical standards? It would be easy to draw the line if the uplifted animals exhibit human-like behavior -- complex communication, for example, or the creation of art -- but what about intelligence-boosted animals that exhibit forms of higher intelligence that don't readily map to human-specific behavior but are clearly beyond what a typical animal of that species could do? When do we give them a say in their own lives?

This connects in fairly obvious ways to the ongoing efforts to provide more expansive rights to the Great Apes or Cetaceans, but it's equally an issue for the Magna Cortica project. What it's not is a science fiction question for our distant descendants. This is happening now, and these issues need to be addressed now.

The Inevitable Future

Film student Taylor Baldschun invited me to participate in a project of his, a short documentary on the end of humanity. His final (for the moment) version can be seen here:

The Inevitable Future from Taylor Baldschun on Vimeo.

On my first viewing, I started counting off the various mannerisms and habits that I find annoying in my own speaking style. But I was caught off-guard by my own final statement, which Taylor uses to close the movie.

If humanity were to go extinct, obviously, our life goes away. Over time, our artifacts go away. So what really would be lost in that existential sense is potential. Because we know that we could do so much more than what we’ve done by now. That we could be better stewards of the planet. That we could develop tools to let us learn new things and go new places. That we could make a better world. And that goes away. That potential, that possibility… it would be an enormous loss of a future.

And that, to me is, the hardest thing to envision — not because it’s difficult to imagine but because it’s painful to imagine.

We have, as a civilization, as human beings, such incredible potential. Potential that has not yet been made manifest. And I hope that we have enough time to show the value of that potential.

It's not perfect, could use a bit of editing to clean it up, but it's not too bad for something made up on the spot. The video as a whole is thoughtful, quiet, and well worth watching. It's not a bad way to spend ten minutes of your day.

Magna Cortica talk at TEDx Marin

(brushes away cobwebs, wipes dust off of screen, sits quietly for a moment and wonders what happened...)


The video of my TEDx talk on the ethics of cognitive augmentation is now up, and you can view it at the TEDx Marin website.

(It's also on YouTube directly, but for the time being I'm doing as asked and pointing people to the TEDx Marin website.)

A few notes:

Most importantly: This talk is based on the work I did for the Institute for the Future's 2014 Ten-Year Forecast. Of all of the things I would like to change about this talk, calling this out explicitly is at the top of the list.

I don't actually speak as fast as I seem to at the outset of the talk; I believe that the editor elided some early "um"/"ah"/word repetitions, resulting in what sounds like I was going WAY too fast.

Most of my usual gestures are on display, but I do think I managed to tone them down a bit.

Unfortunately, I'm still pacing back and forth like a caged carnivore.

There's one thing I do repeatedly throughout the talk, and I don't know why. I'm not going to tell you what it is, because I may just be hypersensitive to it.

So there.

Berlin Videos

The Climate Engineering Conference 2014 in Berlin has uploaded the videos of all plenary sessions, available here. (http://www.ce-conference.org/conference-videos)

The Berlin Museum talk I posted below can be listened to here:

Climate Engineering and the Meaning of Nature (Jamais Cascio)

(I had just finished writing the talk -- I scripted it to stay within a very strict time limit -- so I spend more time than I should looking down. Better to listen to than to watch, I think.)

My brief digression on the nature of futurism in the context of thinking about the environment (a last bit of the last plenary meeting) can be found here:

What Future for Climate Engineering? (Jamais Cascio)

Finally, I was asked to moderate a panel on the challenges of writing about climate engineering:

The Writer's Role: Reflections on Communicating Climate Engineering to the Public

Talking About Extinction In Front of Dinosaurs

I'm back from the first Climate Engineering Conference, held in Berlin. Quite a good trip, but in many ways the highlight was the talk I gave at the Berlin Natural History Museum. The gathering took place in the dinosaur room, which holds (among other treasures) the "Berlin Specimen" Archaeopteryx, one of the most famous and most important fossils ever discovered.

The acoustics of the place, however, were terrible, so I don't know how well any recordings will turn out. Fortunately, I had to script my talk, so I can offer the full text of what I said:

I’ve been doing foresight work for the past 20 years or so, and put simply, my job is to look at the big picture. To get away from the perspective of quarterly results and short horizon thinking. To break away from conventional points of view by stepping way back. Unsurprisingly, these days much of my work focuses on climate disruption and topics like geoengineering. But here’s the secret: in planetary terms, our actions don’t actually matter that much in the long run. The Earth, as a planet, as a global ecological system, will – over time – be just fine.

After all, it’s dealt with worse than us. Environmental scientists may call the current era the “sixth extinction,” but human civilization is still pretty much a comparative amateur when it comes to wiping out the Earth’s species. Given that there’s a past extinction event called The Great Dying, responsible for killing off possibly 90% of the species on the Earth at the time, arguably we’re nowhere near as dangerous to nature as nature is itself.

But here’s the thing: even after the Great Dying, life came back and, over time, flourished. Every extinction event has eventually become the catalyst for a new surge in life. Given time, evolution works. Environmental niches get filled. Species emerge and change to take full advantage of new planetary conditions. The animals and plants we worry will disappear as the result of human carelessness and ignorance are, in evolutionary terms, only temporary residents of the world – ephemeral, just like we are. The image we have in our heads of what the global environment looks like today is just that – a static snapshot of a dynamic system.

This realization – that the Earth will abide, no matter our mistakes – may seem liberating but is actually quite sobering. Because what this knowledge tells us isn’t that we’re free to do what we will, but that the brutal strength of our fears about what human activity is doing to our world comes from its effect on us. The Earth may be fine, but the fragile webs connecting human civilization to the planet’s ecosystems won’t be.

We don’t need to worry about driving the bees to the edge of extinction because the Earth will somehow be harmed; given time, evolution will fill that niche. We need to worry about the bees because without them our ability to feed ourselves will be eviscerated. Any anxiety we have about the creation of ocean dead zones or the collapse of fisheries is really about what these conditions will do to humanity, to the ability of seven-plus billion people to survive. And the dangers from global temperatures rising by five or more degrees over the course of just a century – an increase so fast in geologic terms it seems as if humanity is somehow the warming equivalent of an asteroid hitting the planet – these dangers will simply make it impossible for human civilization to continue on its current path.

So, does that mean civilization will collapse? Probably not. Humans are reasonably smart. As a species, we’ve survived massive natural environmental disruption before, and with less knowledge and fewer tools than we have today. But that’s not the whole story.

When writer William Gibson said that “the future is here, it’s just not evenly distributed,” he wasn’t just talking about technology. Imbalances in resources, in power, in luck all mean that a majority of the world’s population already lives on the precarious edge of catastrophe. From my “big picture” futurist point of view, it’s easy to say that we’ll adapt. But for far too many of us, that process of forced adaptation will be tragic, and painful, and deadly.

Saying that the Earth will be fine isn’t an attempt to absolve ourselves of responsibility for the harm that we’ve done to the planet. Rather, it’s a blunt acknowledgement that the concerns we have about the world are ultimately – and, I think, appropriately – selfish. The health of the environment, here in this moment of the Anthropocene, is directly connected to the health of human civilization. We’re not separate from nature, we’re very much a part of it; in every sense that matters the well-being of the Earth is thoroughly, intimately, interwoven with our future. In other words, when we harm the planet today, we are really harming ourselves over the long tomorrow.

TEDx in Marin

So, the second announcement can now be revealed: I'm one of the speakers at the 2014 TEDx Marin event on September 18. I'll be talking about the Magna Cortica, and will be speaking alongside my IFTF colleague Miriam Lueck Avery (talking about the microbiome), CEO of the Center for Investigative Reporting Joaquin Alvarado (talking about reinventing journalism), UC Berkeley Professor Ananya Roy (talking about patriarchy and power), and Kenyatta Leal, former San Quentin inmate (talking about how education and entrepreneurship can transform prison).

TEDx events can be a bit of a gamble; there have been enough low-quality, misinformation-driven speakers that I've generally steered clear of all of them. TEDx Marin, however, looks to have a solid history of picking good, smart people to offer interesting and provocative observations -- without veering into controversy for controversy's sake.

Tickets are limited, run about $70, and will only be available through August 5. Come out and say hi!

Climate Engineering in Berlin

Okay, first of a few announcements (posting as they become public):

In August, I'll be speaking in Berlin, Germany at the Climate Engineering Conference 2014. A major multi-day event, CEC2014 covers the gamut of climate engineering/geoengineering issues, from science to policy to media. I'm on two panels, and then a special extra event.

I'll actually be in Berlin for the entire week, so if any German/EU readers want to ping me about giving a talk nearby, please do let me know.

There are a couple more items I'll be announcing soon, so stay tuned -- same Bat-Time, same Bat-Channel.

Magna Cortica

One of the projects I worked on for the Institute for the Future's 2014 Ten-Year Forecast was Magna Cortica, a proposal to create an overarching set of ethical guidelines and design principles to shape the ways in which we develop and deploy the technologies of brain enhancement over the coming years. The forecast seemed to strike a nerve for many people -- a combination of the topic and the surprisingly evocative name, I suspect. Alexis Madrigal at The Atlantic Monthly wrote a very good piece on the Ten-Year Forecast, focusing on Magna Cortica, and Popular Science subsequently picked up on the story. I thought I'd expand a bit on the idea here, pulling in some of the material I used for the TYF talk.

As you might have figured, the name Magna Cortica is a direct play on the Magna Carta, the so-called charter of liberties from nearly 800 years ago. The purpose of the Magna Carta was to clarify the rights that should be more broadly held, and the limits that should be placed on the rights of the king. All in all a good thing, and often cited as the founding document of a broader shift to democracy.

The Magna Cortica wouldn’t be a precise mirror of this, but it would follow a similar path: the Magna Cortica project would be an effort to make explicit the rights and restrictions that would apply to the rapidly-growing set of cognitive enhancement technologies. The parallel may not be precise, but it is important: while the crafters of the Magna Carta feared what might happen should the royalty remain unrestrained, those of us who would work on the Magna Cortica project do so with a growing concern about what could happen in a world of unrestrained pursuit of cognitive enhancement. The closer we look at this path of development, the more we see reasons to want to be cautious.

Of course, we have to first acknowledge that the idea of cognitive enhancement isn’t a new one. Most of us regularly engage in the chemical augmentation of our neurological systems, typically through caffeinated beverages. And while the value of coffee and tea includes wonderful social and flavor-based components, it’s the way that consumption kicks our thinking into high gear that usually gets the top billing. This, too, isn’t new: many scholars correlate the emergence of so-called “coffeehouse society” with the onset of the Enlightenment.

But if caffeine is our legacy cognitive technology, it has more recently been overshadowed by the development of a variety of brain-boosting drugs. What’s important to recognize is that these drugs were not created in order to make the otherwise-healthy person smarter; they were created to provide specific medical benefits.

Provigil and its variants, for example, were invented as a means of treating narcolepsy. Like coffee and tea, it keeps you awake; unlike caffeine, however, it’s not technically a stimulant. Clear-headed wakefulness is itself a powerful boost. But for many users, Provigil also measurably improves a variety of cognitive processes, from pattern recognition to spatial thinking.

Much more commonly used (and, depending upon your perspective, abused) are the drugs devised to help people with attention-deficit disorder, from the now-ancient Adderall and Ritalin to more recent drugs like Vyvanse. These types of drugs are often a form of stimulant -- usually part of the amphetamine family, actually -- but have the useful result of giving users enhanced focus and greatly reduced distractibility.

These drugs are supposed to be prescribed solely for people who have particular medical conditions. The reality, however, is that the focus-enhancing, pattern-recognizing benefits don’t just go to people with disorders -- and these kinds of drugs have become commonplace on university campuses and in the research departments of high-tech companies around the world.

Over the next decade, we’re likely to see the continued emergence of a world of cognitive enhancement technologies, primarily but not exclusively pharmaceutical, increasingly intended for augmentation and not therapy. And as we travel this path, we’ll see even more radical steps, technologies that operate at the genetic level, digital artifacts mixing mind and machine, even the development of brain enhancements that could push us well beyond what’s thought to be the limits of “human normal.”


For many of us, this is both terrifying and exhilarating. Dystopian and utopian scenarios clash and combine. It’s a world of relentless competition to be the smartest person in the room, and unprecedented abilities to solve complex global problems. A world where the use of cognitive boosting drugs is considered as much of an economic and social demand as a present-day smartphone, and one where the diversity of brain enhancements allows us to see and engage with social and political subtleties that would once have been completely invisible. It's the world I explored a bit in my 2009 article in The Atlantic Monthly, "Get Smarter."

And such diversity is really already in play, from so-called “exocortical” augmentations like Google Glass to experimental brain implants to ongoing research to enhance or alter forms of social and emotional expression, including anger, empathy, even religious feelings.

There’s enormous potential for chaos.

There are numerous questions that we’ll need to resolve, dilemmas that we'll be unable to avoid confronting. Since this project may also be seen as a cautious “design spec,” what would we want in an enhanced mind? What should an enhanced mind be able to do? Are there aspects of the mind or brain that we should only alter in case of significant mental illness or brain injury? Are there aspects of a mind or brain we should never alter, no matter what? (E.g., should we ever alter a person’s sense of individual self?)

What are the rights and responsibilities we would have to the non-human minds that would be enhanced, and potentially created, along the way to human cognitive enhancement? Animal testing would be unavoidable. What would we owe to rats, dogs, and apes (etc.) with potentially vastly increased intellect? Similarly, whole-brain neural network simulations, like the Blue Brain project, offer a very real possibility of the eventual creation of a system that behaves like -- possibly even believes itself to be -- a human mind. What responsibilities would we have towards such a system? Would it be ethical to reboot it, to turn it off, to erase the software?

The legal and political aspects cannot be ignored. We would need extensive discussion of how this research will be integrated into legal frameworks, especially with the creation of minds that don’t fall neatly into human categories. And as it’s highly likely that military and intelligence agencies will have a great deal of interest in this set of projects, the role that such groups should have will need to be addressed -- particularly once a “hostile actor” begins to undertake similar research.

Across all of this, we'd have to consider the practices and developments that are not currently considered near-term feasible, such as molecular nanotechnologies, as well as techniques not yet invented or conceived. How can we make rules that apply equally well to the known and the unknown?

All of these would be part of a Magna Cortica project. But for today, I’d like to start with five candidates for inclusion as basic Magna Cortica rights, as a way of… let’s say nailing some ideas to a door.

  1. The right to self-knowledge. Likely the least controversial, and arguably the most fundamental, this right would be the logical extension of the quantified self movement that's been growing for the last few years. As the ability to measure, analyze, even read the ongoing processes in our brains continues to expand, the argument here is that the right to know what’s going on inside our own heads should not be abridged.

    Of course, there’s the inescapably related question: who else would have the right to that knowledge?

  2. As the Maker movement says, if you can’t alter something, you don’t really own it. In that spirit, it’s possible that a Magna Cortica could enshrine the right to self-modification. This wouldn’t just apply to cognition augmentation, of course; the same argument would apply to less practical, more entertainment-oriented alterations. And as we’ve seen around the world over the last year, the movement to make such things more legal is well underway.

  3. The flip side of the last right, and potentially of even greater sociopolitical importance, is a right to refuse modification. To just say no, as it were. But while this may seem a logical assertion to us now, as these technologies become more powerful, prevalent, and important, refusing cognitive augmentation may come to be considered as controversial and even irresponsible as the refusal to vaccinate is today. Especially in light of…

  4. A right to modify or to refuse to modify your children. It has to be emphasized that we already grapple with this question every time a doctor prescribes ADHD drugs, when both saying yes and saying no can lead to accusations of abuse. And if the idea of enhancements for children rather than therapy seems beyond the pale, I’d invite you to remember Louise Brown, the first so-called “test tube baby.” The fury and fear accompanying her birth in 1978 is astounding in retrospect; even the co-discoverer of the structure of DNA, James Watson, thought her arrival meant "all Hell will break loose, politically and morally, all over the world." But today, many of you reading this either know someone who has used in-vitro fertilization, have used it yourself, or may even be a product of it.

  5. Finally, there’s the potential right to know who has been modified. This suggested right seems to elicit an immediate reaction of visions of torches and pitchforks, but we can easily flip that script around. Would you want to know if your taxi driver was on brain boosters? Your pilot? Your child’s teacher? Your surgeon? At the root of all of this is the unanswered question of whether the identification as having an augmented mind would be seen as something to be feared… or something to be celebrated.

And here again we encounter the terrifying and the exhilarating: we are almost certain to be facing these questions, these crises and dilemmas, over the next ten to twenty years. As long as intelligence is considered a competitive advantage in the workplace, in the labs, or in high office, there will be efforts to make these technologies happen. The value of the Magna Cortica project would be to bring these questions out into the open, to explore where we draw the line that says “no further,” to offer a core set of design principles, and ultimately to determine which pathways to follow before we reach the crossroads.



