Crime and Punishment discussion at Fast Company's Innovation Uncensored
(video) April 2013
Bots, Bacteria, and Carbon talk at the University of Minnesota
(video) March 2013
Visions of a Sustainable Future interview
(text) March 2013
Talking about apocalypse gets dull...all apocalypses are the same, but all successful scenarios are different in their own way.
The Future and You! interview
(video) December 2012
Bad Futurism talk in San Francisco
(video) December 2012
Inc. magazine interview
(text) December 2012
Any real breakthrough in AI is going to come from gaming.
Singularity 1 on 1 interview
(video) November 2012
(text) September 2012
One hope for the future: That we get it right.
Doomsday talk in San Francisco
(video) June 2012
Polluting the Data Stream talk in San Francisco
(video) April 2012
Peak Humanity talk at BIL2012 in Long Beach
(video) February 2012
(text) January 2012
Our tools don't make us who we are. We make tools because of who we are.
Hacking the Earth talk in London
(video) November 2011
(text) May 2011
The fears over eugenics come from fears over the abuse of power. And we have seen, time and again, century after century, that such fears are well-placed.
Future of Facebook project interviews
(video) April 2011
Geoengineering and the Future interview for Hearsay Culture
(audio) March 2011
Los Angeles and the Green Future interview for VPRO Backlight
(video) November 2010
Surviving the Future excerpts on CBC
(video) October 2010
Future of Media interview for BNN
(video) September 2010
Hacking the Earth Without Voiding the Warranty talk at NEXT 2010
(video) September 2010
We++ talk at Guardian Activate 2010
(video) July 2010
Wired for Anticipation talk at Lift 10
(video) May 2010
Soylent Twitter talk at Social Business Edge 2010
(video) April 2010
Hacking the Earth without Voiding the Warranty talk at State of Green Business Forum 2010
(video) February 2010
Manipulating the Climate interview on "Living on Earth" (public radio)
(audio) January 2010
(video) January 2010
Homesteading the Uncanny Valley talk at the Biopolitics of Popular Culture conference
(audio) December 2009
Sixth Sense interview for NPR On the Media
(audio) November 2009
If I Can't Dance, I Don't Want to be Part of Your Singularity talk for New York Future Salon
(video) October 2009
Future of Money interview for /Message
(video) October 2009
Cognitive Drugs interview for "Q" on CBC radio
(audio) September 2009
How the World Could (Almost) End interview for Slate
(video) July 2009
Geoengineering interview for Kathleen Dunn Show, Wisconsin Public Radio
(audio) July 2009
Augmented Reality interview at Tactical Transparency podcast
(audio) July 2009
ReMaking Tomorrow talk at Amplify09
(video) June 2009
Mobile Intelligence talk for Mobile Monday
(video) June 2009
Amplify09 Pre-Event Interview for Amplify09 Podcast
(audio) May 2009
How to Prepare for the Unexpected Interview for New Hampshire Public Radio
(audio) April 2009
Cascio's Laws of Robotics presentation for Bay Area AI Meet-Up
(video) March 2009
How We Relate to Robots Interview for CBC "Spark"
(audio) March 2009
Looking Forward Interview for National Public Radio
(audio) March 2009
Future: To Go talk for Art Center Summit
(video) February 2009
Brains, Bots, Bodies, and Bugs Closing Keynote at Singularity Summit Emerging Technologies Workshop
(video) November 2008
Building Civilizational Resilience Talk at Global Catastrophic Risks conference
(video) November 2008
Future of Education Talk at Moodle Moot
(video) June 2008
(text) May 2008
"In the best scenario, the next ten years for green is the story of its disappearance."
A Greener Tomorrow talk at Bay Area Futures Salon
(video) April 2008
Geoengineering Offensive and Defensive interview, Changesurfer Radio
(audio) March 2008
(text) March 2008
"The road to hell is paved with short-term distractions."
The Future Is Now interview, "Ryan is Hungry"
(video) March 2008
G'Day World interview
(audio) March 2008
UK Education Drivers commentary
(video) February 2008
Futurism and its Discontents presentation at UC Berkeley School of Information
(audio) February 2008
Opportunity Green talk at Opportunity Green conference
(video) January 2008
Metaverse: Your Life, Live and in 3D talk
(video) December 2007
Singularity Summit Talk
(audio) September 2007
Political Relationships and Technological Futures interview
(video) September 2007
(audio) September 2007
"Science Fiction is a really nice way of uncovering the tacit desires for tomorrow...."
G'Day World interview
(audio) June 2007
(audio) June 2007
Take-Away Festival talk
(video) May 2007
(audio) May 2007
Changesurfer Radio interview
(audio) April 2007
(audio) July 2006
FutureGrinder: Participatory Panopticon interview
(audio) March 2006
TED 2006 talk
(video) February 2006
Commonwealth Club roundtable on blogging
(audio) February 2006
Personal Memory Assistants Accelerating Change 2005 talk
(audio) October 2005
Participatory Panopticon MeshForum 2005 talk
(audio) May 2005
Futurism -- scenario-based foresight, in particular -- has many parallels to science fiction literature, enough that the two can sometimes be conflated. It's no coincidence that there's quite a bit of overlap between the science fiction writer and futurist communities, and (as a science fiction reader since I was old enough to read) I count myself as extremely fortunate to be able to call many science fiction writers friends. But science fiction and futurism are not the same thing, and it's worth a moment's exploration to show why.
The similarities between the two are obvious. Broadly speaking, both science fiction and futurism involve the development of internally-consistent, plausible future worlds extrapolating from the present. Science fiction and many (but not all) scenario-based forms of futurism both rely on narrative to explore their respective future worlds. Futurist works and many (but not all) science fiction stories have as an underlying motive a desire to illuminate the present (and the dilemmas we now face) by showing ways in which the existing world may evolve.
But here's the twist, and the reason that science fiction and futurism are not identical, but instead are mirror-opposites:
In science fiction, the author(s) build their internally-consistent, plausible future worlds to support a character narrative (taking "character" in the broadest sense -- in science fiction, it's entirely possible for the main character to be a space ship, a computer network, a city, even a planet). In short, a story. Conversely, futurists develop a story or character narrative (found primarily in scenario-based futurism) to support the depiction of internally-consistent, plausible future worlds.
Science fiction writers need to build out their worlds with enough detail and system knowledge to provide consistent scaffolding for character behavior, allowing the reader (and the author) to understand the flow of the story logic. It's often the case that a good portion of the world-building happens behind the scenes -- written for the author's own use, but never showing up directly on the page. But there's little need for science fiction writers to build their worlds beyond that scaffolding.
Futurists need to make as much of their world-building explicitly visible as possible (and here the primary constraint is usually the intersection of limits to report length and limits to reader/client attention); any "behind the scenes" world-building risks leaving out critical insights, as often the most important ideas to emerge from foresight work concern those basic technology drivers and societal dynamics. When a futurist narrative includes a story (with or without a main character), that story serves primarily to illuminate key elements of the internally-consistent, plausible future worlds. (The plural "worlds" is intentional; as anyone who follows my work knows, one important aspect of futures work is often the creation of parallel alternative scenarios.)
In science fiction, the imagined world supports the story; in futurism, the story supports the imagined world.
It's a simple but crucial difference, and one that too many casual followers of foresight work miss. If a futurist scenario reads like bad science fiction, it's because it is bad science fiction, in the sense that it's not offering the narrative arc that most good pieces of literature rely upon. And if the future presented in a science fiction story is weak futurism, that's not a surprise either -- as long as the future history helps to make the story compelling, it's done its job.
Futurists and science fiction writers often "talk shop" when they get together -- but fundamentally, their jobs are very, very different.
It's often frustrating, as a foresight professional, to listen to or read what passes for political discourse, especially during a big international crisis (such as the Russia-Ukraine-Crimea situation). Much of the ongoing discussion offers detailed predictions of what one state or another will do and clear assertions of inevitable outcomes, all delivered with overwhelming certainty. Of course, these various prognostications will almost always be wrong; worse, they'll typically be wrong in a useless way, having obscured or confused our understanding of the world more than they've illuminated it.
It's not just a peculiarity of Eastern European crises. We can see a similar process play out in nearly every global-scale system -- economic, military, or political -- with consequences beyond the immediate. Detailed claims about imminent inflation or the arrival of an Iranian nuclear weapon by the end of the year get treated as gospel up to the moment when the assertion is shown to be wrong, after which the previous statement drops down the memory hole and is replaced by one about a new threat of imminent inflation or the arrival of an Iranian nuclear weapon by the end of the new year. Those who inflict this Potemkin futurism on us -- predictions without substance portrayed as careful analysis of future outcomes -- never suffer the consequences of being wrong. Anyone offering more subtle or complex analysis will be treated at best as having just another opinion, or actively ignored if what they say runs counter to the conventional wisdom.
This prediction-error-prediction cycle isn't just a feature of television or Internet punditry. As I've mentioned before, I did my graduate work in political science, and ultimately erroneous predictions dripping with certainty are commonly found in this realm as well. Unlike most other social sciences, political science has to balance both analysis of past and present conditions and grounded forecasts of the implications of those conditions. When there's a revolution in Country X, you'll rarely see an Anthropologist or Social Psychologist quoted in mainstream discussions of What This Means; conversely, you're almost guaranteed to get a juicy quote or two from an academic in the Department of Government and Conventional Wisdom at Ivy-Covered Halls.edu.
This is not a dilemma without a solution, however. Professional Foresight (aka Futurism) also went through a period where specialists would offer up a single prediction of a certain future. In more recent decades -- arguably since Herman Kahn's On Thermonuclear War in 1960, but more generally since the spread of Shell-derived Scenario Planning in the 1990s -- futurism has become more comfortable with uncertainty, and more willing to offer multiple rival forecasts of possible outcomes instead of singular, certain predictions. Multi-scenario foresight has gone through various iterations since then, but they all come down to a core idea: you can't predict the future, but you can see the shape of different possible futures.
So what would this model look like if employed by political pundits and political science academics? To be honest, it would probably be confusing, and make for bad television. We as a civilization have a bias towards spectacle and a preference for detail over generality; a talking head saying "this could happen, or that, or this other thing, they're all plausible outcomes" will be squished by someone with a loud voice and absolute certainty.
Certain but wrong usually beats complex and observant. Enjoy your future.
Stanford University Civil Engineering professor Mark Jacobson and his team have published an article in Nature Climate Change showing that a large cluster of offshore wind turbines -- 300+ GW worth -- could significantly reduce the wind speeds and storm surges from hurricanes. BBC article & video. PDF of NCC article. From the abstract:
Benefits occur whether turbine arrays are placed immediately upstream of a city or along an expanse of coastline. The reduction in wind speed due to large arrays increases the probability of survival of even present turbine designs. The net cost of turbine arrays (capital plus operation cost less cost reduction from electricity generation and from health, climate, and hurricane damage avoidance) is estimated to be less than today’s fossil fuel electricity generation net cost in these regions and less than the net cost of sea walls used solely to avoid storm surge damage.
With the possibility that anthropogenic global warming is increasing the frequency and/or intensity of hurricanes (a still-ambiguous issue), this seems like a good thing. After all, these wind turbines are built to generate power, and the hurricane-dampening effect would be a pleasant side-effect. Reduced wind speeds and storm surges mean reduced losses of life, property, and resources. Good news, everybody.
But remember that the climate is a complex system with myriad interactions with the ocean, plant/animal ecosystems, aquifers, soil, and on and on. If hurricane impacts are reduced to below the pre-AGW norm, it's highly likely that we'll see some level of unintended cost to environmental systems that had evolved to be dependent upon periodic inrushes of water, high winds (think seed and insect dispersal), or other consequences of hurricane landfall.
If Jacobson et al are correct (and for now, this is entirely model-based -- so probably generally accurate, but with the potential for small-but-important errors), think of this as both an opportunity and a warning. Offshore wind turbines, built to generate electricity, may also have the capacity to measurably reduce the intensity of hurricanes approaching land. As attractive as this sounds, we'll have to be all the more alert to the possibility of upstream ecosystem disruptions.
German public radio program DRadio Wissen spoke with me this week on the subject of Google and the Future, with a particular emphasis on privacy. The conversation, which ran about 20 minutes, was edited down to a 12 minute report, mixing German and English.
The title here (also the title given to the piece at DRadio Wissen) nicely sums up my argument: Google is a long-term focused company, with plenty of smart people and big ideas, but everything (for now) remains driven by advertising. Gmail, Maps, and all of its other services are offered solely as a way to bring eyeballs to Google's real customers, advertisers.
A couple of years ago, Christian Moran interviewed me for a series of short films he planned to make, focusing on reasons for optimism. That film series is now available at his website, and it's a decent variety of people grappling with big ideas from different perspectives. Technologists, scientists, journalists, artists, doctors... and me. The half-hour interview may be one of the best ones I've done, in terms of how well the ideas I'm trying to articulate come across.
A few caveats, though. Christian was really taken with a somewhat offhand comment I made in the course of the conversation and highlights it in his introduction; fortunately, it's not made the focus of the video. Also, remember that it was recorded in mid-2012, so if there's an obvious reference that I'm not including (e.g., Snowden stuff), that's why. Finally, I really need to not slouch, especially when I wear t-shirts and jackets.
* And yes, I know "alright" isn't grammatically correct, but it's his movie series and he can name it what he wants.
It will likely come as little or no surprise that cryptocurrencies like Bitcoin, Litecoin, and Dogecoin (my favorite) are frequent topics of conversation among futurist types. After all, they're supposed to be paradigm-breaking disruptions of the status quo, or something. But I still haven't gotten over my sense that something isn't quite fully-baked about the current generation of digital currencies, and I'm going to spend my ~500 words here trying to spell out why.
Cryptocurrencies are computationally-derived mathematical artifacts intended to function as money -- they're to be used to store value and to be exchanged for goods and services. The difference between cryptocurrencies and the US Dollar (or other sovereign-state currency) is that the Dollar is backed by the "full faith and credit" of the United States, meaning that as long as the US is a functioning political entity, the dollar can be used to (at minimum) pay American taxes. Conversely, cryptocurrencies are backed by mutual agreement; as long as a market for a given cryptocurrency exists, it has some value. The logic behind cryptocurrencies isn't new, and can be seen in the various complementary currencies that have been used for decades in communities around the world, often (as with some cryptocurrencies) with an explicit social or political goal.
Many supporters of cryptocurrencies prefer to draw a parallel to gold, which is not under the control of any single political entity and does not have a set value, instead being priced based on how much people will pay for it (in another currency). This floating value of cryptocurrencies is one recognized challenge for their continued utility. As economist Paul Krugman and others have pointed out, gold has a minimum value, due to its use in industry and jewelry; cryptocurrencies have no minimum value, and could in principle crash to a level where they have effectively zero worth. Hoarding, regulatory decisions, and fraud can all cause wild swings in currency price. This floating value, which for many cryptocurrencies can be extremely volatile, impedes their use as stable media of exchange. If the trading value of a Bitcoin versus a Dollar varies throughout the day, a business owner who primarily buys and sells and pays taxes in Dollars takes a risk any time he or she sets a price in Bitcoins. Some businesses may be willing to swallow that risk in order to gain the support of Bitcoin advocates, but for many others, it's just not worth the hassle.
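The pricing risk described above is simple arithmetic, and a quick sketch makes it concrete. All of the numbers below are hypothetical, chosen only for illustration -- they are not real market rates:

```python
# Hypothetical illustration of exchange-rate risk for a merchant who
# sets a price in bitcoin but pays costs and taxes in dollars.
# All figures are made up for the example.

price_usd = 100.00        # the dollar amount the merchant needs to collect
rate_at_pricing = 500.00  # assumed BTC/USD rate when the price was set

# The merchant posts the price in bitcoin at today's rate:
price_btc = price_usd / rate_at_pricing  # 0.2 BTC

# Suppose the rate drops 10% before the payment is converted back:
rate_at_settlement = rate_at_pricing * 0.90

realized_usd = price_btc * rate_at_settlement
shortfall = price_usd - realized_usd

print(f"Posted price: {price_btc} BTC (targeting ${price_usd:.2f})")
print(f"Realized at settlement: ${realized_usd:.2f}")
print(f"Shortfall: ${shortfall:.2f}")  # a 10% rate swing costs $10 on a $100 sale
```

The merchant's loss scales directly with the size of the intraday swing, which is why a volatile floating value pushes businesses either to reprice constantly or to convert to dollars immediately on receipt.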
Solving the floating value problem will be difficult, not for arcane economic reasons, but because there are as yet no physical communities where a cryptocurrency serves as a primary currency, usable for a broad variety of run-of-the-mill transactions. No place for the currencies to create a persistent, mutually-understood perceived value outside of their value in exchange for a sovereign currency. No place where the users know at a gut level what it means to say that something costs (for example) 100 Bitcoin, the way an American knows what it means when something costs $100. Until then, cryptocurrencies will always be secondary at best, somewhat more fungible than gold coins from World of Warcraft. And that points to what may be the source of my continued skepticism about the current generation of cryptocurrencies: advocates have embraced the argument that all money is imaginary, that the vast majority of transactions now are digital, and that we now live in a globalized market, but have neglected the corresponding social and political grounding that makes this digital decentralization viable.
Andrew Leonard has a short, sharp piece in Salon entitled "Silicon Valley dreams of secession," about a recent talk by tech entrepreneur Balaji Srinivasan calling for the Valley to secede from the US on a wave of 3D printers, drones, and bitcoins. Here's Leonard's capsule of the talk, along with Srinivasan's money quote:
Virtual secession, argues Srinivasan, is just natural evolution. Once upon a time, people seeking better lives left their broken states to immigrate to the U.S. Now, it is time for their descendants to emigrate further, except this time they don’t need to go anywhere physically, except into the cloud.
“Exit,” according to Srinivasan, “means giving people tools to reduce the influence of bad policies over their lives without getting involved in politics… It basically means build an opt in society, run by technology, outside the U.S.”
Long time readers will have guessed what part of Srinivasan's quote bothered me the most: "without getting involved in politics."
In 2009, I wrote a piece entitled "The End-of-Politics Delusion," about a broadly parallel set of arguments emerging from the bowels of Silicon Valley. Democracy is bad, and what we really need is a technology-enabled society to get rid of politics, or so the true believers would have us think. I reacted with this:
Politics is part of a healthy society -- it's what happens when you have a group of people with differential goals and a persistent relationship. It's not about partisanship, it's about power. And while even small groups have politics (think: supporting or opposing decisions, differing levels of power to achieve goals, deciding how to use limited resources), the more people involved, the more complex the politics. Factions, parties, ideologies and the like are simply ways of organizing politics in a complex social space -- they're symptoms of politics, not causes.
Calls to get rid of politics can therefore mean one of two things: getting rid of persistent relationships with other people; or getting rid of differential goals. Since I don't see too many of the folks who talk about escaping politics also talking about becoming lone isolationists, the only reasonable presumption is that they're really talking about eliminating disagreements.
It's the latest version of the notion that "a perfect world is one where everyone agrees with me."
Anyone calling for an end to politics, whether via secession or technocracy or singularity, either has no understanding of how human societies work (the generous interpretation) or has an authoritarian streak itching to show itself (the less-generous version). Srinivasan's version is even worse due to its dependence upon a thoroughly unreliable, opaque, and politically-biased substrate, "the cloud."
Here's what I mean: technologies fail, sometimes briefly, sometimes disastrously, whether because of physical damage, bad code, or intentional attack; telecommunication systems, in particular the commercial telecom carriers in the US, are notoriously unwilling to divulge operational details or to abide by network neutrality; and all of these technologies embed norms and choices that are inherently biased [just as one example, the vast majority of home internet connections in the US are asymmetric, with much faster download (consumption) speeds than upload (creation) speeds -- that's a choice, not an inherent fact of the technology]. Using this as the basis of a political system seems... unwise.
It's the big question for the century: how do we manage human society across the planet as the planet itself becomes increasingly hostile? Geophysical systems are complex, slow (in human terms), and deeply interconnected -- exactly the combination of conditions that our existing systems of governance can't handle. Simply continuing to operate as if nothing is changing at a point when everything is changing is a recipe for disaster.
Jake Dunagan has proposed a panel for South by Southwest Interactive 2014 to explore this very question, and asked me to be on it. Here's the tricky part: the selection of the panel depends (in part) on community support. In other words, if you think this is a useful or important idea, you need to vote for the panel.
Governance in the Anthropocene
Welcome to the Anthropocene, a geological epoch defined by humans and human activities. Humans are a global geological and evolutionary force, yet our economies, social formations, consumption patterns, and governments operate with intentional blindness to this enormous power and responsibility. The institutions that support human civilization, many of which have caused the global challenges we face today, do not appear capable of adapting successfully to 21st century realities. What is needed is a global movement to re-think and re-design governance for the Anthropocene epoch.
One of the first rules one is taught as a futurist-in-training is to avoid "normative scenarios" -- forecasts that describe what you want to see, even when the signals and evidence at hand make the scenario highly unlikely. This is much more of a challenge than non-futurists may think, as a good scenarist can usually come up with a plausible set of early indicators and distant early warnings to support just about any forecast. If one's work focuses on issues that have a strong ethical component (around human rights, for example, or the global environment) the problem is further multiplied.
One of the reasons I've been running silent over the past month or so has been the explosion of news around government (and corporate) surveillance of the Internet. Not that I'm especially worried about my own stuff -- I have a fairly public life, and have few secrets worth knowing. But the implications for the futures of privacy, security, commerce, communications, big data, and so forth are so enormous that I'm still trying to wrap my mind around where this is all going. And the desire to imagine normative scenarios about the potential outcomes is almost overwhelming.
Reality has a bad habit of undermining desired futures. Here's a non-privacy example: If you have a moral stance that says that individual access to guns should be strictly controlled or prohibited in the U.S., you may wish to imagine future outcomes where such restrictions are possible and widely accepted. But the evolution of 3D printers has made that kind of future highly implausible, as designs for 3D-printer-friendly firearms have now emerged and spread. As long as 3D printers are available, it will be extremely difficult to eliminate or control access to firearms, and as 3D printers become more capable, we'll see increasingly diverse and powerful printable weapons. Any discussions of "gun control" that don't acknowledge this are doomed to imminent irrelevance.
So when we think about the future of privacy, surveillance, and related concepts, one of the first questions we need to ask is "what real-world conditions constrain our possible futures?" What are the technical aspects of privacy and surveillance, and what kinds of changes would have to happen to shift the balance between the two? What are the political barriers? For example, if a leader took positive steps to reduce government surveillance, and subsequently a major terrorist attack happened, how likely is it that the public (and certainly the political opponents of said leader) would link the two? If the technological standards underlying the present-day Internet make full privacy essentially impossible -- not just vis-a-vis government snoops but also corporate "big data" behavior analysis -- who would actually have the capability to construct a more secure alternative?
I'm still thinking.
The Earth's Environment
"Some of the most thoughtful work on the topic of climate change..."
-- The Futurist (July/Aug 2009)
What do we do if our best efforts to limit the emission of greenhouse gases into the atmosphere fall short? According to a growing number of environmental scientists, we may be forced to try an experiment in global climate management: geoengineering.
Geoengineering would be risky, likely to provoke international tension, and certain to have unexpected consequences. It may also be inevitable.
Environmental futurist Jamais Cascio explores the implications of geoengineering in this collection of thought-provoking essays. Is our civilization ready to take on the task of re-engineering the planet?