Jamais Cascio

Photo by Bart Nagel

Interviews and Talks

Everything Will Be Alright* interview for documentary series.
(video)          February 2014

Crime and Punishment discussion at Fast Company's Innovation Uncensored
(video)          April 2013

Bots, Bacteria, and Carbon talk at the University of Minnesota
(video)          March 2013

Visions of a Sustainable Future interview
(text)          March 2013
Talking about apocalypse gets dull...all apocalypses are the same, but all successful scenarios are different in their own way.

The Future and You! interview
(video)          December 2012

Bad Futurism talk in San Francisco
(video)          December 2012

Inc. magazine interview
(text)          December 2012
Any real breakthrough in AI is going to come from gaming.

Singularity 1 on 1 interview
(video)          November 2012

Momentum Interview
(text)          September 2012
One hope for the future: That we get it right.

Doomsday talk in San Francisco
(video)          June 2012

Polluting the Data Stream talk in San Francisco
(video)          April 2012

Peak Humanity talk at BIL2012 in Long Beach
(video)          February 2012

Acceler8or Interview
(text)          January 2012
Our tools don't make us who we are. We make tools because of who we are.

Hacking the Earth talk in London
(video)          November 2011

Cosmoetica Interview
(text)          May 2011
The fears over eugenics come from fears over the abuse of power. And we have seen, time and again, century after century, that such fears are well-placed.

Future of Facebook project interviews
(video)          April 2011

Geoengineering and the Future interview for Hearsay Culture
(audio)          March 2011

Los Angeles and the Green Future interview for VPRO Backlight
(video)          November 2010

Surviving the Future excerpts on CBC
(video)          October 2010

Future of Media interview for BNN
(video)          September 2010

Hacking the Earth Without Voiding the Warranty talk at NEXT 2010
(video)          September 2010

Map of the Future 2010 at Futuro e Sostenibilità 2010 (Part 2, Part 3)
(video)          July 2010

We++ talk at Guardian Activate 2010
(video)          July 2010

Wired for Anticipation talk at Lift 10
(video)          May 2010

Soylent Twitter talk at Social Business Edge 2010
(video)          April 2010

Hacking the Earth without Voiding the Warranty talk at State of Green Business Forum 2010
(video)          February 2010

Manipulating the Climate interview on "Living on Earth" (public radio)
(audio)          January 2010

Bloggingheads.TV interview
(video)          January 2010

Homesteading the Uncanny Valley talk at the Biopolitics of Popular Culture conference
(audio)          December 2009

Sixth Sense interview for NPR On the Media
(audio)          November 2009

If I Can't Dance, I Don't Want to be Part of Your Singularity talk for New York Future Salon
(video)          October 2009

Future of Money interview for /Message
(video)          October 2009

Cognitive Drugs interview for "Q" on CBC radio
(audio)          September 2009

How the World Could (Almost) End interview for Slate
(video)          July 2009

Geoengineering interview for Kathleen Dunn Show, Wisconsin Public Radio
(audio)          July 2009

Augmented Reality interview at Tactical Transparency podcast
(audio)          July 2009

ReMaking Tomorrow talk at Amplify09
(video)          June 2009

Mobile Intelligence talk for Mobile Monday
(video)          June 2009

Amplify09 Pre-Event Interview for Amplify09 Podcast
(audio)          May 2009

How to Prepare for the Unexpected Interview for New Hampshire Public Radio
(audio)          April 2009

Cascio's Laws of Robotics presentation for Bay Area AI Meet-Up
(video)          March 2009

How We Relate to Robots Interview for CBC "Spark"
(audio)          March 2009

Looking Forward Interview for National Public Radio
(audio)          March 2009

Future: To Go talk for Art Center Summit
(video)          February 2009

Brains, Bots, Bodies, and Bugs Closing Keynote at Singularity Summit Emerging Technologies Workshop
(video)          November 2008

Building Civilizational Resilience Talk at Global Catastrophic Risks conference
(video)          November 2008

Future of Education Talk at Moodle Moot
(video)          June 2008

G-Think Interview
(text)          May 2008
"In the best scenario, the next ten years for green is the story of its disappearance."

A Greener Tomorrow talk at Bay Area Futures Salon
(video)          April 2008

Geoengineering Offensive and Defensive interview, Changesurfer Radio
(audio)          March 2008

Wired interview
(text)           March 2008
"The road to hell is paved with short-term distractions. "

The Future Is Now interview, "Ryan is Hungry"
(video)          March 2008

G'Day World interview
(audio)          March 2008

UK Education Drivers commentary
(video)          February 2008

Futurism and its Discontents presentation at UC Berkeley School of Information
(audio)          February 2008

Opportunity Green talk at Opportunity Green conference
(video)          January 2008

Metaverse: Your Life, Live and in 3D talk
(video)          December 2007

Singularity Summit Talk
(audio)          September 2007

Political Relationships and Technological Futures interview
(video)          September 2007

NPR interview
(audio)          September 2007
"Science Fiction is a really nice way of uncovering the tacit desires for tomorrow...."

Spark Radio, CBC interview
(audio)          August 2007
Spark Radio, part 2 CBC interview
(audio)          August 2007

True Mutations Live! roundtable Part 1
(audio)          July 2007
True Mutations Live! roundtable Part 2
(audio)          July 2007

G'Day World interview
(audio)          June 2007

NeoFiles interview
(audio)          June 2007

Take-Away Festival talk
(video)          May 2007

NeoFiles interview
(audio)          May 2007

Changesurfer Radio interview
(audio)          April 2007

NeoFiles interview
(audio)          July 2006

FutureGrinder: Participatory Panopticon interview
(audio)          March 2006

TED 2006 talk
(video)          February 2006

Commonwealth Club roundtable on blogging
(audio)          February 2006

Personal Memory Assistants Accelerating Change 2005 talk
(audio)          October 2005

Participatory Panopticon MeshForum 2005 talk
(audio)          May 2005

Reminder: Open the Future is on a temporary hiatus while I work on a book. I will post now and again, but may go for a few weeks at a time without updating. If you're new to the site, check out the "Start Here" links to the right. Thanks.

Climate Engineering in Berlin

Okay, first of a few announcements (posting as they become public):

In August, I'll be speaking in Berlin, Germany at the Climate Engineering Conference 2014. A major multi-day event, CEC2014 covers the gamut of climate engineering/geoengineering issues, from science to policy to media. I'm on two panels, and then a special extra event.

I'll actually be in Berlin for the entire week, so if any German/EU readers want to ping me about giving a talk nearby, please do let me know.

There are a couple more items I'll be announcing soon, so stay tuned -- same Bat-Time, same Bat-Channel.

Magna Cortica

One of the projects I worked on for the Institute for the Future's 2014 Ten-Year Forecast was Magna Cortica, a proposal to create an overarching set of ethical guidelines and design principles to shape the ways in which we develop and deploy the technologies of brain enhancement over the coming years. The forecast seemed to strike a nerve for many people -- a combination of the topic and the surprisingly evocative name, I suspect. Alexis Madrigal at The Atlantic Monthly wrote a very good piece on the Ten-Year Forecast, focusing on Magna Cortica, and Popular Science subsequently picked up on the story. I thought I'd expand a bit on the idea here, pulling in some of the material I used for the TYF talk.

As you might have figured, the name Magna Cortica is a direct play on the Magna Carta, the so-called charter of liberties from nearly 800 years ago. The purpose of the Magna Carta was to clarify the rights that should be more broadly held, and the limits that should be placed on the rights of the king. All in all a good thing, and often cited as the founding document of a broader shift to democracy.

The Magna Cortica wouldn’t be a precise mirror of this, but it would follow a similar path: the Magna Cortica project would be an effort to make explicit the rights and restrictions that would apply to the rapidly-growing set of cognitive enhancement technologies. The parallel may not be precise, but it is important: while the crafters of the Magna Carta feared what might happen should the royalty remain unrestrained, those of us who would work on the Magna Cortica project do so with a growing concern about what could happen in a world of unrestrained pursuit of cognitive enhancement. The closer we look at this path of development, the more we see reasons to want to be cautious.

Of course, we have to first acknowledge that the idea of cognitive enhancement isn’t a new one. Most of us regularly engage in the chemical augmentation of our neurological systems, typically through caffeinated beverages. And while the value of coffee and tea includes wonderful social and flavor-based components, it’s the way that consumption kicks our thinking into high gear that usually gets the top billing. This, too, isn’t new: many scholars correlate the emergence of so-called “coffeehouse society” with the onset of the Enlightenment.

But if caffeine is our legacy cognitive technology, it has more recently been overshadowed by the development of a variety of brain-boosting drugs. What’s important to recognize is that these drugs were not created in order to make the otherwise-healthy person smarter; they were created to provide specific medical benefits.

Provigil and its variants, for example, were invented as a means of treating narcolepsy. Like coffee and tea, they keep you awake; unlike caffeine, however, they’re not technically stimulants. Clear-headed wakefulness is itself a powerful boost. But for many users, Provigil also measurably improves a variety of cognitive processes, from pattern recognition to spatial thinking.

Much more commonly used (and, depending upon your perspective, abused) are the drugs devised to help people with attention-deficit disorder, from the now-ancient Adderall and Ritalin to more recent drugs like Vyvanse. These types of drugs are often a form of stimulant -- usually part of the amphetamine family, actually -- but have the useful result of giving users enhanced focus and greatly reduced distractibility.

These drugs are supposed to be prescribed solely for people who have particular medical conditions. The reality, however, is that the focus-enhancing, pattern-recognizing benefits don’t just go to people with disorders -- and these kinds of drugs have become commonplace on university campuses and in the research departments of high-tech companies around the world.

Over the next decade, we’re likely to see the continued emergence of a world of cognitive enhancement technologies, primarily but not exclusively pharmaceutical, increasingly intended for augmentation and not therapy. And as we travel this path, we’ll see even more radical steps, technologies that operate at the genetic level, digital artifacts mixing mind and machine, even the development of brain enhancements that could push us well beyond what’s thought to be the limits of “human normal.”


For many of us, this is both terrifying and exhilarating. Dystopian and utopian scenarios clash and combine. It’s a world of relentless competition to be the smartest person in the room, and unprecedented abilities to solve complex global problems. A world where the use of cognitive boosting drugs is considered as much of an economic and social demand as a present-day smartphone, and one where the diversity of brain enhancements allows us to see and engage with social and political subtleties that would once have been completely invisible. It's the world I explored a bit in my 2009 article in The Atlantic Monthly, "Get Smarter."

And such diversity is really already in play, from so-called “exocortical” augmentations like Google Glass to experimental brain implants to ongoing research to enhance or alter forms of social and emotional expression, including anger, empathy, even religious feelings.

There’s enormous potential for chaos.

There are numerous questions that we’ll need to resolve, dilemmas that we'll be unable to avoid confronting. Since this project may also be seen as a cautious “design spec,” what would we want in an enhanced mind? What should an enhanced mind be able to do? Are there aspects of the mind or brain that we should only alter in case of significant mental illness or brain injury? Are there aspects of a mind or brain we should never alter, no matter what? (E.g., should we ever alter a person’s sense of individual self?)

What are the rights and responsibilities we would have to the non-human minds that would be enhanced and potentially created along the way to human cognitive enhancement? Animal testing would be unavoidable. What would we owe to rats, dogs, and apes (etc.) with potentially vastly increased intellect? Similarly, whole-brain neural network simulations, like the Blue Brain project, offer a very real possibility of the eventual creation of a system that behaves like -- possibly even believes itself to be -- a human mind. What responsibilities would we have towards such a system? Would it be ethical to reboot it, to turn it off, to erase the software?

The legal and political aspects cannot be ignored. We would need extensive discussion of how this research will be integrated into legal frameworks, especially with the creation of minds that don’t fall neatly into human categories. And as it’s highly likely that military and intelligence agencies will have a great deal of interest in this set of projects, the role that such groups should have will need to be addressed -- particularly once a “hostile actor” begins to undertake similar research.

Across all of this, we'd have to consider the practices and developments that are not currently considered near-term feasible, such as molecular nanotechnologies, as well as techniques not yet invented or conceived. How can we make rules that apply equally well to the known and the unknown?

All of these would be part of a Magna Cortica project. But for today, I’d like to start with five candidates for inclusion as basic Magna Cortica rights, as a way of… let’s say nailing some ideas to a door.

  1. The right to self-knowledge. Likely the least controversial, and arguably the most fundamental, this right would be the logical extension of the quantified self movement that's been growing for the last few years. As the ability to measure, analyze, even read the ongoing processes in our brains continues to expand, the argument here is that the right to know what’s going on inside our own heads should not be abridged.

    Of course, there’s the inescapably related question: who else would have the right to that knowledge?

  2. As the Maker movement says, if you can’t alter something, you don’t really own it. In that spirit, it’s possible that a Magna Cortica could enshrine the right to self-modification. This wouldn’t just apply to cognition augmentation, of course; the same argument would apply to less practical, more entertainment-oriented alterations. And as we’ve seen around the world over the last year, the movement to make such things more legal is well underway.

  3. The flip side of the last right, and potentially of even greater sociopolitical importance, is a right to refuse modification. To just say no, as it were. But while this may seem a logical assertion to us now, as these technologies become more powerful, prevalent, and important, refusing cognitive augmentation may come to be considered as controversial and even irresponsible as the refusal to vaccinate is today. Especially in light of…

  4. A right to modify or to refuse to modify your children. It has to be emphasized that we already grapple with this question every time a doctor prescribes ADHD drugs, when both saying yes and saying no can lead to accusations of abuse. And if the idea of enhancements for children rather than therapy seems beyond the pale, I’d invite you to remember Louise Brown, the first so-called “test tube baby.” The fury and fear accompanying her birth in 1978 are astounding in retrospect; even the co-discoverer of the structure of DNA, James Watson, thought her arrival meant "all Hell will break loose, politically and morally, all over the world." But today, many of you reading this know someone who has used in-vitro fertilization, have used it yourselves, or may even be a product of it.

  5. Finally, there’s the potential right to know who has been modified. This suggested right seems to elicit immediate visions of torches and pitchforks, but we can easily flip that script around. Would you want to know if your taxi driver was on brain boosters? Your pilot? Your child’s teacher? Your surgeon? At the root of all of this is the unanswered question of whether being identified as having an augmented mind would be seen as something to be feared… or something to be celebrated.

And here again we encounter the terrifying and the exhilarating: we are almost certain to be facing these questions, these crises and dilemmas, over the next ten to twenty years. As long as intelligence is considered a competitive advantage in the workplace, in the labs, or in high office, there will be efforts to make these technologies happen. The value of the Magna Cortica project would be to bring these questions out into the open, to explore where we draw the line that says “no further,” to offer a core set of design principles, and ultimately to determine which pathways to follow before we reach the crossroads.


Mirror, Mirror -- Science Fiction and Futurism

Futurism -- scenario-based foresight, in particular -- has many parallels to science fiction literature, enough that the two can sometimes be conflated. It's no coincidence that there's quite a bit of overlap between the science fiction writer and futurist communities, and (as a science fiction reader since I was old enough to read) I count myself extremely fortunate to be able to call many science fiction writers friends. But science fiction and futurism are not the same thing, and it's worth a moment's exploration to show why.

The similarities between the two are obvious. Broadly speaking, both science fiction and futurism involve the development of internally-consistent, plausible future worlds extrapolating from the present. Science fiction and many (but not all) scenario-based forms of futurism both rely on narrative to explore their respective future worlds. Futurist works and many (but not all) science fiction stories have as an underlying motive a desire to illuminate the present (and the dilemmas we now face) by showing ways in which the existing world may evolve.

But here's the twist, and the reason that science fiction and futurism are not identical, but instead are mirror-opposites:

In science fiction, the author(s) build their internally-consistent, plausible future worlds to support a character narrative (taking "character" in the broadest sense -- in science fiction, it's entirely possible for the main character to be a space ship, a computer network, a city, even a planet). In short, a story. Conversely, futurists develop a story or character narrative (found primarily in scenario-based futurism) to support the depiction of internally-consistent, plausible future worlds.

Science fiction writers need to build out their worlds with enough detail and system knowledge to provide consistent scaffolding for character behavior, allowing the reader (and the author) to understand the flow of the story logic. It's often the case that a good portion of the world-building happens behind the scenes -- written for the author's own use, but never showing up directly on the page. But there's little need for science fiction writers to build their worlds beyond that scaffolding.

Futurists need to make as much of their world-building explicitly visible as possible (and here the primary constraint is usually the intersection of limits to report length and limits to reader/client attention); any "behind the scenes" world-building risks leaving out critical insights, as often the most important ideas to emerge from foresight work concern those basic technology drivers and societal dynamics. When a futurist narrative includes a story (with or without a main character), that story serves primarily to illuminate key elements of the internally-consistent, plausible future worlds. (The plural "worlds" is intentional; as anyone who follows my work knows, one important aspect of futures work is often the creation of parallel alternative scenarios.)

In science fiction, the imagined world supports the story; in futurism, the story supports the imagined world.

It's a simple but crucial difference, and one that too many casual followers of foresight work miss. If a futurist scenario reads like bad science fiction, it's because it is bad science fiction, in the sense that it's not offering the narrative arc that most good pieces of literature rely upon. And if the future presented in a science fiction story is weak futurism, that's not a surprise either -- as long as the future history helps to make the story compelling, it's done its job.

Futurists and science fiction writers often "talk shop" when they get together -- but fundamentally, their jobs are very, very different.

Watching the World through a Broken Lens

It's often frustrating, as a foresight professional, to listen to (or read) what passes for political discourse, especially during a big international crisis (such as the Russia-Ukraine-Crimea situation). Much of the ongoing discussion offers detailed predictions of what one state or another will do and clear assertions of inevitable outcomes, all presented with overwhelming certainty. Of course, these various prognostications will almost always be wrong; worse, they'll typically be wrong in a useless way, having obscured or confused our understanding of the world more than they've illuminated it.

It's not just a peculiarity of Central European crises. We can see a similar process play out in nearly every global-scale system -- economic, military, or political -- with consequences beyond the immediate. Detailed claims about imminent inflation or the arrival of an Iranian nuclear weapon by the end of the year get treated as gospel up to the moment when the assertion is shown to be wrong, after which the previous statement drops down the memory hole and is replaced by one about a new threat of imminent inflation or the arrival of an Iranian nuclear weapon by the end of the new year. Those who inflict this Potemkin futurism on us -- predictions without substance portrayed as careful analysis of future outcomes -- never suffer the consequences of being wrong. Anyone offering more subtle or complex analysis will be treated, at best, as having just another opinion, or even actively ignored if what they say runs counter to the conventional wisdom.

This prediction-error-prediction cycle isn't just a feature of television or Internet punditry. As I've mentioned before, I did my graduate work in political science, and ultimately erroneous predictions dripping with certainty are commonly found in this realm as well. Unlike most other social sciences, political science has to balance both analysis of past+present conditions and grounded forecasts of the implications of those conditions. When there's a revolution in Country X, you'll rarely see an Anthropologist or Social Psychologist quoted in mainstream discussions of What This Means; conversely, you're almost guaranteed to get a juicy quote or two from an academic in the Department of Government and Conventional Wisdom at Ivy-Covered Halls.edu.

This is not a dilemma without a solution, however. Professional Foresight (aka Futurism) also went through a period where specialists would offer up a single prediction of a certain future. In more recent decades -- arguably since Herman Kahn's On Thermonuclear War in 1960, but more generally since the advent of Shell-derived Scenario Planning in the 1990s -- futurism has become more comfortable with uncertainty, and more willing to offer multiple rival forecasts of possible outcomes instead of singular, certain predictions. Multi-scenario foresight has gone through various iterations since then, but they all come down to a core idea: you can't predict the future, but you can see the shape of different possible futures.
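
The mechanics behind that core idea are simple enough to sketch in a few lines. Here's a toy illustration -- not a real foresight tool -- of one common form of Shell-derived scenario planning, the two-axes matrix: pick two critical uncertainties and treat every combination as a distinct future worth exploring. The uncertainties and labels below are invented for the example.

```python
# A minimal sketch of the multi-scenario idea: instead of one prediction,
# cross two critical uncertainties and examine every combination.
# The uncertainties and labels here are invented for illustration.
from itertools import product

uncertainty_a = ("economy stagnates", "economy booms")
uncertainty_b = ("state power centralizes", "state power fragments")

for i, (a, b) in enumerate(product(uncertainty_a, uncertainty_b), start=1):
    print(f"Scenario {i}: {a} / {b}")
# Four rival futures to reason about, rather than one "certain" forecast.
```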

So what would this model look like if employed by political pundits and political science academics? To be honest, it would probably be confusing, and make for bad television. We as a civilization have a bias towards spectacle and a preference for detail over generality; a talking head saying "this could happen, or that, or this other thing, they're all plausible outcomes" will be squished by someone with a loud voice and absolute certainty.

Certain but wrong usually beats complex and observant. Enjoy your future.

Offshore Wind Turbines Can Tame Hurricanes. Yay, Right? Maybe.

Stanford University Civil Engineering professor Mark Jacobson and his team have published an article in Nature Climate Change showing that a large cluster of offshore wind turbines -- about 300+ GW worth -- could significantly reduce the wind speeds and storm surges from hurricanes. BBC article & video. PDF of NCC article. From the abstract:

Benefits occur whether turbine arrays are placed immediately upstream of a city or along an expanse of coastline. The reduction in wind speed due to large arrays increases the probability of survival of even present turbine designs. The net cost of turbine arrays (capital plus operation cost less cost reduction from electricity generation and from health, climate, and hurricane damage avoidance) is estimated to be less than today’s fossil fuel electricity generation net cost in these regions and less than the net cost of sea walls used solely to avoid storm surge damage.
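
For a rough sense of the scale implied by "300+ GW worth" of turbines, here's a back-of-envelope sketch; the 5 MW per-turbine rating is my assumption for illustration, not a figure from the paper.

```python
# Back-of-envelope scale check for "300+ GW worth" of offshore turbines.
# The per-turbine rating is an assumption for illustration only;
# it is not taken from the Nature Climate Change paper.

target_capacity_gw = 300            # capacity cited in the post
turbine_rating_mw = 5               # assumed nameplate rating per offshore turbine

turbines_needed = (target_capacity_gw * 1000) / turbine_rating_mw
print(f"~{turbines_needed:,.0f} turbines at {turbine_rating_mw} MW each")
# -> roughly 60,000 turbines, far larger than any existing offshore wind farm.
```

Even with generous assumptions about turbine size, that's a build-out far beyond any offshore installation yet attempted, which is part of why the result is, for now, a modeling exercise.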

With the possibility that anthropogenic global warming is increasing the frequency and/or intensity of hurricanes (a still-ambiguous issue), this seems like a good thing. After all, these wind turbines are built to generate power, and the hurricane-dampening effect would be a pleasant side-effect. Reduced wind speeds and storm surges mean reduced losses of life, property, and resources. Good news, everybody.

But remember that the climate is a complex system with myriad interactions with the ocean, plant/animal ecosystems, aquifers, soil, and on and on. If hurricane impacts are reduced to below the pre-AGW norm, it's highly likely that we'll see some level of unintended cost to environmental systems that had evolved to be dependent upon periodic inrushes of water, high winds (think seed and insect dispersal), or other consequences of hurricane landfall.

If Jacobson et al are correct (and for now, this is entirely model-based -- so probably generally accurate, but with the potential for small-but-important errors), think of this as both an opportunity and a warning. Offshore wind turbines, built to generate electricity, may also have the capacity to measurably reduce the intensity of hurricanes approaching land. As attractive as this sounds, we'll have to be all the more alert to the possibility of upstream ecosystem disruptions.

You Are the Service, Not the Customer

German public radio program DRadio Wissen spoke with me this week on the subject of Google and the Future, with a particular emphasis on privacy. The conversation, which ran about 20 minutes, was edited down to a 12 minute report, mixing German and English.

The DRadio Wissen page in English (via Google Translate, of course).

The original DRadio Wissen page in German (hit the "Abspielen" button to play the audio).

The title here (also the title given the piece at DRadio Wissen) nicely sums up my argument: Google is a long-term focused company, with plenty of smart people and big ideas, but everything (for now) remains driven by advertising. Gmail, Maps, and all of its other services are offered solely as a way to bring eyeballs to Google's real customers, advertisers.

Everything Will Be Alright*

A couple of years ago, Christian Moran interviewed me for a series of short films he planned to make, focusing on reasons for optimism. That film series is now available at his website, and it's a decent variety of people grappling with big ideas from different perspectives. Technologists, scientists, journalists, artists, doctors... and me. The half-hour interview may be one of the best ones I've done, in terms of how well the ideas I'm trying to articulate come across.

A few caveats, though. Christian was really taken with a somewhat offhand comment I made in the course of the conversation and highlights it in his introduction; fortunately, it's not made the focus of the video. Also, remember that it was recorded in mid-2012, so if there's an obvious reference that I'm not including (e.g., Snowden stuff), that's why. Finally, I really need not to slouch, especially when I wear t-shirts and jackets.

* And yes, I know "alright" isn't grammatically correct, but it's his movie series and he can name it what he wants.

500 Words on Cryptocurrencies

Such money. Big spender. Wow.

It will likely come as little or no surprise that cryptocurrencies like Bitcoin, Litecoin, and Dogecoin (my favorite) are frequent topics of conversation among futurist types. After all, they're supposed to be paradigm-breaking disruptions of the status quo, or something. But I still haven't gotten over my sense that something isn't quite fully-baked about the current generation of digital currencies, and I'm going to spend my ~500 words here trying to spell out why.

Cryptocurrencies are computationally-derived mathematical artifacts intended to function as money -- they're to be used to store value and to be exchanged for goods and services. The difference between cryptocurrencies and the US Dollar (or other sovereign-state currency) is that the Dollar is backed by the "full faith and credit" of the United States, meaning that as long as the US is a functioning political entity, the dollar can be used to (at minimum) pay American taxes. Conversely, cryptocurrencies are backed by mutual agreement; as long as the market for it exists, a cryptocurrency has some value. The logic behind cryptocurrencies isn't new, and can be seen in the various complementary currencies that have been used for decades in communities around the world, often (as with some cryptocurrencies) with an explicit social or political goal.

Many supporters of cryptocurrencies prefer to draw a parallel to gold, which is not under the control of any single political entity and does not have a set value, instead being priced based on how much people will pay for it (in another currency). This floating value is one recognized challenge for cryptocurrencies' continued utility. As economist Paul Krugman and others have pointed out, gold has a minimum value, due to its use in industry and jewelry; cryptocurrencies have no minimum value, and could in principle crash to a level where they have effectively zero worth. Hoarding, regulatory decisions, and fraud can all cause wild swings in price, and that volatility impedes their use as a stable medium of exchange. If the trading value of a Bitcoin versus a Dollar varies throughout the day, a business owner who primarily buys and sells and pays taxes in Dollars takes a risk any time he or she sets a price in Bitcoins. Some businesses may be willing to swallow that risk in order to gain the support of Bitcoin advocates, but for many others, it's just not worth the hassle.
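
To make that exchange-rate risk concrete, here's a minimal sketch in Python with made-up numbers (none of them drawn from any actual exchange): the merchant prices an item to net $100, the BTC/USD rate slips 10% before the coins are converted, and the merchant eats the difference.

```python
# Hypothetical illustration of exchange-rate risk for a merchant who
# prices goods in a volatile cryptocurrency but pays costs in dollars.
# All numbers are invented for the sake of the example.

usd_price_target = 100.00     # what the merchant actually needs, in USD

morning_rate = 600.00         # assumed BTC/USD rate when the price is set
evening_rate = 540.00         # assumed BTC/USD rate at conversion time (-10%)

btc_price = usd_price_target / morning_rate      # price listed in BTC
usd_received = btc_price * evening_rate          # what the merchant actually gets

shortfall = usd_price_target - usd_received

print(f"Item listed at {btc_price:.5f} BTC")
print(f"USD received after conversion: ${usd_received:.2f}")
print(f"Shortfall from a 10% intraday swing: ${shortfall:.2f}")
```

Run the same numbers with the rate moving the other way and the merchant gets a windfall instead -- which is exactly the problem: revenue becomes a side bet on the exchange rate rather than a predictable quantity.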

Solving the floating value problem will be difficult, not for arcane economic reasons, but because there are as yet no physical communities where a cryptocurrency serves as a primary currency, usable for a broad variety of run-of-the-mill transactions. No place for the currencies to create a persistent, mutually-understood perceived value outside of their value in exchange for a sovereign currency. No place where the users know at a gut level what it means to say that something costs (for example) 100 Bitcoin, the way an American knows what it means when something costs $100. Until then, cryptocurrencies will always be secondary at best, somewhat more fungible than gold coins from World of Warcraft. And that points to what may be the source of my continued skepticism about the current generation of cryptocurrencies: advocates have embraced the argument that all money is imaginary, that the vast majority of transactions now are digital, and that we now live in a globalized market, but have neglected the corresponding social and political grounding that makes this digital decentralization viable.

Secession in the Valley, and the End of Politics

Andrew Leonard has a short, sharp piece in Salon entitled "Silicon Valley dreams of secession," about a recent talk by tech entrepreneur Balaji Srinivasan calling for the Valley to secede from the US on a wave of 3D printers, drones, and bitcoins. Here's Leonard's capsule of the talk, along with Srinivasan's money quote:

Virtual secession, argues Srinivasan, is just natural evolution. Once upon a time, people seeking better lives left their broken states to immigrate to the U.S. Now, it is time for their descendants to emigrate further, except this time they don’t need to go anywhere physically, except into the cloud.

“Exit,” according to Srinivasan, “means giving people tools to reduce the influence of bad policies over their lives without getting involved in politics… It basically means build an opt in society, run by technology, outside the U.S.”

Long-time readers will have guessed what part of Srinivasan's quote bothered me the most: "without getting involved in politics."

In 2009, I wrote a piece entitled "The End-of-Politics Delusion," about a broadly parallel set of arguments emerging from the bowels of Silicon Valley. Democracy is bad, and what we really need is a technology-enabled society to get rid of politics, or so the true believers would have us think. I reacted with this:

Politics is part of a healthy society -- it's what happens when you have a group of people with differential goals and a persistent relationship. It's not about partisanship, it's about power. And while even small groups have politics (think: supporting or opposing decisions, differing levels of power to achieve goals, deciding how to use limited resources), the more people involved, the more complex the politics. Factions, parties, ideologies and the like are simply ways of organizing politics in a complex social space -- they're symptoms of politics, not causes.

Calls to get rid of politics can therefore mean one of two things: getting rid of persistent relationships with other people; or getting rid of differential goals. Since I don't see too many of the folks who talk about escaping politics also talking about becoming lone isolationists, the only reasonable presumption is that they're really talking about eliminating disagreements.

It's the latest version of the notion that "a perfect world is one where everyone agrees with me."

Anyone calling for an end to politics, whether via secession or technocracy or singularity, either has no understanding of how human societies work (the generous interpretation) or has an authoritarian streak itching to show itself (the less-generous version). Srinivasan's version is even worse due to its dependence upon a thoroughly unreliable, opaque, and politically-biased substrate, "the cloud."

Here's what I mean: technologies fail, sometimes briefly, sometimes disastrously, whether because of physical damage, bad code, or intentional attack; telecommunication systems, in particular the commercial telecom carriers in the US, are notoriously unwilling to divulge operational details or to abide by network neutrality; and all of these technologies embed norms and choices that are inherently biased [just as one example, the vast majority of home internet connections in the US are asymmetric, with much faster download (consumption) speeds than upload (creation) speeds -- that's a choice, not an inherent fact of the technology]. Using this as the basis of a political system seems... unwise.
