Everything Will Be Alright* interview for documentary series.
(video) February 2014
Crime and Punishment discussion at Fast Company's Innovation Uncensored
(video) April 2013
Bots, Bacteria, and Carbon talk at the University of Minnesota
(video) March 2013
Visions of a Sustainable Future interview
(text) March 2013
Talking about apocalypse gets dull...all apocalypses are the same, but all successful scenarios are different in their own way.
The Future and You! interview
(video) December 2012
Bad Futurism talk in San Francisco
(video) December 2012
Inc. magazine interview
(text) December 2012
Any real breakthrough in AI is going to come from gaming.
Singularity 1 on 1 interview
(video) November 2012
(text) September 2012
One hope for the future: That we get it right.
Doomsday talk in San Francisco
(video) June 2012
Polluting the Data Stream talk in San Francisco
(video) April 2012
Peak Humanity talk at BIL2012 in Long Beach
(video) February 2012
(text) January 2012
Our tools don't make us who we are. We make tools because of who we are.
Hacking the Earth talk in London
(video) November 2011
(text) May 2011
The fears over eugenics come from fears over the abuse of power. And we have seen, time and again, century after century, that such fears are well-placed.
Future of Facebook project interviews
(video) April 2011
Geoengineering and the Future interview for Hearsay Culture
(audio) March 2011
Los Angeles and the Green Future interview for VPRO Backlight
(video) November 2010
Surviving the Future excerpts on CBC
(video) October 2010
Future of Media interview for BNN
(video) September 2010
Hacking the Earth Without Voiding the Warranty talk at NEXT 2010
(video) September 2010
We++ talk at Guardian Activate 2010
(video) July 2010
Wired for Anticipation talk at Lift 10
(video) May 2010
Soylent Twitter talk at Social Business Edge 2010
(video) April 2010
Hacking the Earth without Voiding the Warranty talk at State of Green Business Forum 2010
(video) February 2010
Manipulating the Climate interview on "Living on Earth" (public radio)
(audio) January 2010
(video) January 2010
Homesteading the Uncanny Valley talk at the Biopolitics of Popular Culture conference
(audio) December 2009
Sixth Sense interview for NPR On the Media
(audio) November 2009
If I Can't Dance, I Don't Want to be Part of Your Singularity talk for New York Future Salon
(video) October 2009
Future of Money interview for /Message
(video) October 2009
Cognitive Drugs interview for "Q" on CBC radio
(audio) September 2009
How the World Could (Almost) End interview for Slate
(video) July 2009
Geoengineering interview for Kathleen Dunn Show, Wisconsin Public Radio
(audio) July 2009
Augmented Reality interview at Tactical Transparency podcast
(audio) July 2009
ReMaking Tomorrow talk at Amplify09
(video) June 2009
Mobile Intelligence talk for Mobile Monday
(video) June 2009
Amplify09 Pre-Event Interview for Amplify09 Podcast
(audio) May 2009
How to Prepare for the Unexpected Interview for New Hampshire Public Radio
(audio) April 2009
Cascio's Laws of Robotics presentation for Bay Area AI Meet-Up
(video) March 2009
How We Relate to Robots Interview for CBC "Spark"
(audio) March 2009
Looking Forward Interview for National Public Radio
(audio) March 2009
Future: To Go talk for Art Center Summit
(video) February 2009
Brains, Bots, Bodies, and Bugs Closing Keynote at Singularity Summit Emerging Technologies Workshop
(video) November 2008
Building Civilizational Resilience Talk at Global Catastrophic Risks conference
(video) November 2008
Future of Education Talk at Moodle Moot
(video) June 2008
(text) May 2008
"In the best scenario, the next ten years for green is the story of its disappearance."
A Greener Tomorrow talk at Bay Area Futures Salon
(video) April 2008
Geoengineering Offensive and Defensive interview, Changesurfer Radio
(audio) March 2008
(text) March 2008
"The road to hell is paved with short-term distractions."
The Future Is Now interview, "Ryan is Hungry"
(video) March 2008
G'Day World interview
(audio) March 2008
UK Education Drivers commentary
(video) February 2008
Futurism and its Discontents presentation at UC Berkeley School of Information
(audio) February 2008
Opportunity Green talk at Opportunity Green conference
(video) January 2008
Metaverse: Your Life, Live and in 3D talk
(video) December 2007
Singularity Summit Talk
(audio) September 2007
Political Relationships and Technological Futures interview
(video) September 2007
(audio) September 2007
"Science Fiction is a really nice way of uncovering the tacit desires for tomorrow...."
G'Day World interview
(audio) June 2007
(audio) June 2007
Take-Away Festival talk
(video) May 2007
(audio) May 2007
Changesurfer Radio interview
(audio) April 2007
(audio) July 2006
FutureGrinder: Participatory Panopticon interview
(audio) March 2006
TED 2006 talk
(video) February 2006
Commonwealth Club roundtable on blogging
(audio) February 2006
Personal Memory Assistants Accelerating Change 2005 talk
(audio) October 2005
Participatory Panopticon MeshForum 2005 talk
(audio) May 2005
I'm back from the first Climate Engineering Conference, held in Berlin. Quite a good trip, but in many ways the highlight was the talk I gave at the Berlin Natural History Museum. The gathering took place in the dinosaur room, which holds (among other treasures) the "Berlin Specimen" Archaeopteryx fossil, among the most famous and most important fossils ever discovered.
The acoustics of the place, however, were terrible, so I don't know how well any recordings will turn out. Fortunately, I had to script my talk, so I can offer the full text of what I said:
I’ve been doing foresight work for the past 20 years or so, and put simply, my job is to look at the big picture. To get away from the perspective of quarterly results and short horizon thinking. To break away from conventional points of view by stepping way back. Unsurprisingly, these days much of my work focuses on climate disruption and topics like geoengineering. But here’s the secret: in planetary terms, our actions don’t actually matter that much in the long run. The Earth, as a planet, as a global ecological system, will – over time – be just fine.
After all, it’s dealt with worse than us. Environmental scientists may call the current era the “sixth extinction,” but human civilization is still pretty much a comparative amateur when it comes to wiping out the Earth’s species. Given that there’s a past extinction event called The Great Dying, responsible for killing off possibly 90% of the species on the Earth at the time, arguably we’re nowhere near as dangerous to nature as nature is itself.
But here’s the thing: even after the Great Dying, life came back and, over time, flourished. Every extinction event has eventually become the catalyst for a new surge in life. Given time, evolution works. Environmental niches get filled. Species emerge and change to take full advantage of new planetary conditions. The animals and plants we worry will disappear as the result of human carelessness and ignorance are, in evolutionary terms, only temporary residents of the world – ephemeral, just like we are. The image we have in our heads of what the global environment looks like today is just that – a static snapshot of a dynamic system.
This realization – that the Earth will abide, no matter our mistakes – may seem liberating but is actually quite sobering. Because what this knowledge tells us isn’t that we’re free to do what we will, but that the brutal strength of our fears about what human activity is doing to our world comes from its effect on us. The Earth may be fine, but the fragile webs connecting human civilization to the planet’s ecosystems won’t be.
We don’t need to worry about driving the bees to the edge of extinction because the Earth will somehow be harmed; given time, evolution will fill that niche. We need to worry about the bees because without them our ability to feed ourselves will be eviscerated. Any anxiety we have about the creation of ocean dead zones or the collapse of fisheries is really about what these conditions will do to humanity, to the ability of seven-plus billion people to survive. And the dangers from global temperatures rising by five or more degrees over the course of just a century – an increase so fast in geologic terms it seems as if humanity is somehow the warming equivalent of an asteroid hitting the planet – these dangers will simply make it impossible for human civilization to continue on its current path.
So, does that mean civilization will collapse? Probably not. Humans are reasonably smart. As a species, we’ve survived massive natural environmental disruption before, and with less knowledge and fewer tools than we have today. But that’s not the whole story.
When writer William Gibson said that “the future is here, it’s just not evenly distributed,” he wasn’t just talking about technology. Imbalances in resources, in power, in luck all mean that a majority of the world’s population already lives on the precarious edge of catastrophe. From my “big picture” futurist point of view, it’s easy to say that we’ll adapt. But for far too many of us, that process of forced adaptation will be tragic, and painful, and deadly.
Saying that the Earth will be fine isn’t an attempt to absolve ourselves of responsibility for the harm that we’ve done to the planet. Rather, it’s a blunt acknowledgement that the concerns we have about the world are ultimately – and, I think, appropriately – selfish. The health of the environment, here in this moment of the Anthropocene, is directly connected to the health of human civilization. We’re not separate from nature, we’re very much a part of it; in every sense that matters the well-being of the Earth is thoroughly, intimately, interwoven with our future. In other words, when we harm the planet today, we are really harming ourselves over the long tomorrow.
So, the second announcement can now be revealed: I'm one of the speakers at the 2014 TEDx Marin event on September 18. I'll be talking about the Magna Cortica, and will be speaking alongside my IFTF colleague Miriam Lueck Avery (talking about the microbiome), CEO of the Center for Investigative Reporting Joaquin Alvarado (talking about reinventing journalism), UC Berkeley Professor Ananya Roy (talking about patriarchy and power), and Kenyatta Leal, former San Quentin inmate (talking about how education and entrepreneurship can transform prison).
TEDx events can be a bit of a gamble; there have been enough low-quality, misinformation-driven speakers that I've generally steered clear of all of them. TEDx Marin, however, looks to have a solid history of picking good, smart people to offer interesting and provocative observations -- without veering into controversy for controversy's sake.
Tickets are limited, run about $70, and will only be available through August 5. Come out and say hi!
Okay, first of a few announcements (posting as they become public):
In August, I'll be speaking in Berlin, Germany at the Climate Engineering Conference 2014. A major multi-day event, CEC2014 covers the gamut of climate engineering/geoengineering issues, from science to policy to media. I'm on two panels, and then a special extra event.
Natural climate change is a well-understood driver of natural selection & evolution; it stands to reason, then, that human-driven climate change can be a driver of human-directed evolution. I’ll look at some of the implications of directed evolution as a tool for climate adaptation, and the parallels between climate engineering and biosystem engineering.
I'll actually be in Berlin for the entire week, so if any German/EU readers want to ping me about giving a talk nearby, please do let me know.
There are a couple more items I'll be announcing soon, so stay tuned -- same Bat-Time, same Bat-Channel.
One of the projects I worked on for the Institute for the Future's 2014 Ten-Year Forecast was Magna Cortica, a proposal to create an overarching set of ethical guidelines and design principles to shape the ways in which we develop and deploy the technologies of brain enhancement over the coming years. The forecast seemed to strike a nerve for many people -- a combination of the topic and the surprisingly evocative name, I suspect. Alexis Madrigal at The Atlantic Monthly wrote a very good piece on the Ten-Year Forecast, focusing on Magna Cortica, and Popular Science subsequently picked up on the story. I thought I'd expand a bit on the idea here, pulling in some of the material I used for the TYF talk.
As you might have figured, the name Magna Cortica is a direct play on the Magna Carta, the so-called charter of liberties from nearly 800 years ago. The purpose of the Magna Carta was to clarify the rights that should be more broadly held, and the limits that should be placed on the rights of the king. All in all a good thing, and often cited as the founding document of a broader shift to democracy.
The Magna Cortica wouldn’t be a precise mirror of this, but it would follow a similar path: the Magna Cortica project would be an effort to make explicit the rights and restrictions that would apply to the rapidly-growing set of cognitive enhancement technologies. The parallel may not be precise, but it is important: while the crafters of the Magna Carta feared what might happen should the royalty remain unrestrained, those of us who would work on the Magna Cortica project do so with a growing concern about what could happen in a world of unrestrained pursuit of cognitive enhancement. The closer we look at this path of development, the more we see reasons to want to be cautious.
Of course, we have to first acknowledge that the idea of cognitive enhancement isn’t a new one. Most of us regularly engage in the chemical augmentation of our neurological systems, typically through caffeinated beverages. And while the value of coffee and tea includes wonderful social and flavor-based components, it’s the way that consumption kicks our thinking into high gear that usually gets the top billing. This, too, isn’t new: there are many scholars who correlate the emergence of so-called “coffeehouse society” with the onset of the Enlightenment.
But if caffeine is our legacy cognitive technology, it has more recently been overshadowed by the development of a variety of brain boosting drugs. What’s important to recognize is that these drugs were not created in order to make the otherwise-healthy person smarter, they were created to provide specific medical benefits.
Provigil and its variants, for example, were invented as a means of treating narcolepsy. Like coffee and tea, they keep you awake; unlike caffeine, however, they're not technically stimulants. Clear-headed wakefulness is itself a powerful boost. But for many users, Provigil also measurably improves a variety of cognitive processes, from pattern recognition to spatial thinking.
Much more commonly used (and, depending upon your perspective, abused) are the drugs devised to help people with attention-deficit disorder, from the now-ancient Adderall and Ritalin to more recent drugs like Vyvanse. These types of drugs are often a form of stimulant -- usually part of the amphetamine family, actually -- but have the useful result of giving users enhanced focus and greatly reduced distractibility.
These drugs are supposed to be prescribed solely for people who have particular medical conditions. The reality, however, is that the focus-enhancing, pattern-recognizing benefits don’t just go to people with disorders -- and these kinds of drugs have become commonplace on university campuses and in the research departments of high-tech companies around the world.
Over the next decade, we’re likely to see the continued emergence of a world of cognitive enhancement technologies, primarily but not exclusively pharmaceutical, increasingly intended for augmentation and not therapy. And as we travel this path, we’ll see even more radical steps, technologies that operate at the genetic level, digital artifacts mixing mind and machine, even the development of brain enhancements that could push us well beyond what’s thought to be the limits of “human normal.”
For many of us, this is both terrifying and exhilarating. Dystopian and utopian scenarios clash and combine. It’s a world of relentless competition to be the smartest person in the room, and unprecedented abilities to solve complex global problems. A world where the use of cognitive boosting drugs is considered as much of an economic and social demand as a present-day smartphone, and one where the diversity of brain enhancements allows us to see and engage with social and political subtleties that would once have been completely invisible. It's the world I explored a bit in my 2009 article in The Atlantic Monthly, "Get Smarter."
And such diversity is really already in play, from so-called “exocortical” augmentations like Google Glass to experimental brain implants to ongoing research to enhance or alter forms of social and emotional expression, including anger, empathy, even religious feelings.
There’s enormous potential for chaos.
There are numerous questions that we’ll need to resolve, dilemmas that we'll be unable to avoid confronting. Since this project may also be seen as a cautious “design spec,” what would we want in an enhanced mind? What should an enhanced mind be able to do? Are there aspects of the mind or brain that we should only alter in case of significant mental illness or brain injury? Are there aspects of a mind or brain we should never alter, no matter what? (E.g., should we ever alter a person’s sense of individual self?)
What are the rights and responsibilities we would have to the non-human minds that would be enhanced, and potentially created, along the way to human cognitive enhancement? Animal testing would be unavoidable. What would we owe to rats, dogs, apes, and other animals with potentially vastly increased intellect? Similarly, whole-brain neural network simulations, like the Blue Brain project, offer a very real possibility of the eventual creation of a system that behaves like -- possibly even believes itself to be -- a human mind. What responsibilities would we have towards such a system? Would it be ethical to reboot it, to turn it off, to erase the software?
The legal and political aspects cannot be ignored. We would need extensive discussion of how this research will be integrated into legal frameworks, especially with the creation of minds that don’t fall neatly into human categories. And as it’s highly likely that military and intelligence agencies will have a great deal of interest in this set of projects, the role that such groups should have will need to be addressed -- particularly once a “hostile actor” begins to undertake similar research.
Across all of this, we'd have to consider the practices and developments that are not currently considered near-term feasible, such as molecular nanotechnologies, as well as techniques not yet invented or conceived. How can we make rules that apply equally well to the known and the unknown?
All of these would be part of a Magna Cortica project. But for today, I’d like to start with five candidates for inclusion as basic Magna Cortica rights, as a way of… let’s say nailing some ideas to a door.
Of course, there’s the inescapably related question: who else would have the right to that knowledge?
And here again we encounter the terrifying and the exhilarating: we are almost certain to be facing these questions, these crises and dilemmas, over the next ten to twenty years. As long as intelligence is considered a competitive advantage in the workplace, in the labs, or in high office, there will be efforts to make these technologies happen. The value of the Magna Cortica project would be to bring these questions out into the open, to explore where we draw the line that says “no further,” to offer a core set of design principles, and ultimately to determine which pathways to follow before we reach the crossroads.
Futurism -- scenario-based foresight, in particular -- has many parallels to science fiction literature, enough that the two can sometimes be conflated. It's no coincidence that there's quite a bit of overlap between the science fiction writer and futurist communities, and (as a science fiction reader since I was old enough to read) I count myself as extremely fortunate to be able to call many science fiction writers friends. But science fiction and futurism are not the same thing, and it's worth a moment's exploration to show why.
The similarities between the two are obvious. Broadly speaking, both science fiction and futurism involve the development of internally-consistent, plausible future worlds extrapolating from the present. Science fiction and many (but not all) scenario-based forms of futurism both rely on narrative to explore their respective future worlds. Futurist works and many (but not all) science fiction stories have as an underlying motive a desire to illuminate the present (and the dilemmas we now face) by showing ways in which the existing world may evolve.
But here's the twist, and the reason that science fiction and futurism are not identical, but instead are mirror-opposites:
In science fiction, the author(s) build their internally-consistent, plausible future worlds to support a character narrative (taking "character" in the broadest sense -- in science fiction, it's entirely possible for the main character to be a space ship, a computer network, a city, even a planet). In short, a story. Conversely, futurists develop a story or character narrative (found primarily in scenario-based futurism) to support the depiction of internally-consistent, plausible future worlds.
Science fiction writers need to build out their worlds with enough detail and system knowledge to provide consistent scaffolding for character behavior, allowing the reader (and the author) to understand the flow of the story logic. It's often the case that a good portion of the world-building happens behind the scenes -- written for the author's own use, but never showing up directly on the page. But there's little need for science fiction writers to build their worlds beyond that scaffolding.
Futurists need to make as much of their world-building explicitly visible as possible (and here the primary constraint is usually the intersection of limits to report length and limits to reader/client attention); any "behind the scenes" world-building risks leaving out critical insights, as the most important ideas to emerge from foresight work often concern those basic technology drivers and societal dynamics. When a futurist narrative includes a story (with or without a main character), that story serves primarily to illuminate key elements of the internally-consistent, plausible future worlds. (The plural "worlds" is intentional; as anyone who follows my work knows, one important aspect of futures work is often the creation of parallel alternative scenarios.)
In science fiction, the imagined world supports the story; in futurism, the story supports the imagined world.
It's a simple but crucial difference, and one that too many casual followers of foresight work miss. If a futurist scenario reads like bad science fiction, it's because it is bad science fiction, in the sense that it's not offering the narrative arc that most good pieces of literature rely upon. And if the future presented in a science fiction story is weak futurism, that's not a surprise either -- as long as the future history helps to make the story compelling, it's done its job.
Futurists and science fiction writers often "talk shop" when they get together -- but fundamentally, their jobs are very, very different.
It's often frustrating, as a foresight professional, to listen to or read what passes for political discourse, especially during a big international crisis (such as the Russia-Ukraine-Crimea situation). Much of the ongoing discussion offers detailed predictions of what one state or another will do and clear assertions of inevitable outcomes, all with an overwhelming certainty of anticipatory analysis. Of course, these various prognostications will almost always be wrong; worse, they'll typically be wrong in a useless way, having obscured or confused our understanding of the world more than they've illuminated it.
It's not just a peculiarity of Central European crises. We can see a similar process play out in nearly every global-scale system with consequences beyond the immediate, whether economic, military, or political. Detailed claims about imminent inflation or the arrival of an Iranian nuclear weapon by the end of the year get treated as gospel up to the moment when the assertion is shown to be wrong, after which the previous statement drops down the memory hole and is replaced by one about a new threat of imminent inflation or the arrival of an Iranian nuclear weapon by the end of the new year. Those who inflict this Potemkin futurism on us -- predictions without substance portrayed as careful analysis of future outcomes -- never suffer the consequences of being wrong. Anyone offering more subtle or complex analysis will be treated at best as having just another opinion, or even actively ignored if what they say runs counter to the conventional wisdom.
This prediction-error-prediction cycle isn't just a feature of television or Internet punditry. As I've mentioned before, I did my graduate work in political science, and ultimately erroneous predictions dripping with certainty are commonly found in this realm as well. Unlike most other social sciences, political science has to balance both analysis of past+present conditions and grounded forecasts of the implications of those conditions. When there's a revolution in Country X, you'll rarely see an Anthropologist or Social Psychologist quoted in mainstream discussions of What This Means; conversely, you're almost guaranteed to get a juicy quote or two from an academic in the Department of Government and Conventional Wisdom at Ivy-Covered Halls.edu.
This is not a dilemma without a solution, however. Professional Foresight (aka Futurism) also went through a period where specialists would offer up a single prediction of a certain future. In more recent decades -- arguably since Herman Kahn's On Thermonuclear War in 1960, but more generally since Shell-derived Scenario Planning spread widely in the 1990s -- futurism has been more comfortable with uncertainty, and more willing to offer multiple rival forecasts of possible outcomes instead of singular, certain predictions. Multi-scenario foresight has gone through various iterations since then, but they all come down to a core idea: you can't predict the future, but you can see the shape of different possible futures.
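As an illustrative aside, the 2x2 scenario matrix at the heart of much Shell-style scenario planning can be sketched in a few lines of Python. The two uncertainty axes below are invented placeholders, not drawn from any specific foresight project:

```python
from itertools import product

# Classic 2x2 scenario construction: choose two critical uncertainties,
# and every combination of their endpoint values becomes a distinct
# scenario to flesh out. The axes here are invented examples.
uncertainties = {
    "energy transition": ["rapid", "stalled"],
    "global governance": ["cooperative", "fragmented"],
}

# Cross the endpoints of each axis to enumerate the scenario space.
scenarios = [
    dict(zip(uncertainties, combo))
    for combo in product(*uncertainties.values())
]

for s in scenarios:
    print(s)
```

The point of the exercise is exactly the one made above: the output is four rival futures to reason about, not a single prediction.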
So what would this model look like if employed by political pundits and political science academics? To be honest, it would probably be confusing, and make for bad television. We as a civilization have a bias towards spectacle and a preference for detail over generality; a talking head saying "this could happen, or that, or this other thing, they're all plausible outcomes" will be squished by someone with a loud voice and absolute certainty.
Certain but wrong usually beats complex and observant. Enjoy your future.
Stanford University Civil Engineering professor Mark Jacobson and his team have published an article in Nature Climate Change showing that a large cluster of offshore wind turbines -- some 300 GW worth -- could significantly reduce the wind speeds and storm surges from hurricanes. BBC article & video. PDF of NCC article. From the abstract:
Benefits occur whether turbine arrays are placed immediately upstream of a city or along an expanse of coastline. The reduction in wind speed due to large arrays increases the probability of survival of even present turbine designs. The net cost of turbine arrays (capital plus operation cost less cost reduction from electricity generation and from health, climate, and hurricane damage avoidance) is estimated to be less than today’s fossil fuel electricity generation net cost in these regions and less than the net cost of sea walls used solely to avoid storm surge damage.
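The abstract's net-cost accounting (capital plus operation, minus offsets from electricity generation and avoided damages) can be sketched as simple arithmetic. All figures below are invented placeholders, not values from the Nature Climate Change paper:

```python
# Hedged sketch of the net-cost bookkeeping described in the abstract.
# Every number here is a placeholder, not a figure from Jacobson et al.

def net_cost(capital, operation, electricity_revenue,
             health_benefit, climate_benefit, hurricane_damage_avoided):
    """Net cost = (capital + operation) minus offsets from
    generation revenue and avoided health/climate/hurricane damage."""
    offsets = (electricity_revenue + health_benefit
               + climate_benefit + hurricane_damage_avoided)
    return (capital + operation) - offsets

# Illustrative numbers only (say, billions of dollars over an array's lifetime):
example = net_cost(capital=200.0, operation=50.0,
                   electricity_revenue=180.0,
                   health_benefit=30.0, climate_benefit=25.0,
                   hurricane_damage_avoided=40.0)
print(example)
# A negative result would mean the array more than pays for itself
# on this accounting, which is the comparison the abstract is making
# against fossil generation and against single-purpose sea walls.
```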
With the possibility that anthropogenic global warming is increasing the frequency and/or intensity of hurricanes (a still-ambiguous issue), this seems like a good thing. After all, these wind turbines are built to generate power, and the hurricane-dampening effect would be a pleasant side-effect. Reduced wind speeds and storm surges mean reduced losses of life, property, and resources. Good news, everybody.
But remember that the climate is a complex system with myriad interactions with the ocean, plant/animal ecosystems, aquifers, soil, and on and on. If hurricane impacts are reduced to below the pre-AGW norm, it's highly likely that we'll see some level of unintended cost to environmental systems that had evolved to be dependent upon periodic inrushes of water, high winds (think seed and insect dispersal), or other consequences of hurricane landfall.
If Jacobson et al are correct (and for now, this is entirely model-based -- so probably generally accurate, but with the potential for small-but-important errors), think of this as both an opportunity and a warning. Offshore wind turbines, built to generate electricity, may also have the capacity to measurably reduce the intensity of hurricanes approaching land. As attractive as this sounds, we'll have to be all the more alert to the possibility of upstream ecosystem disruptions.
German public radio program DRadio Wissen spoke with me this week on the subject of Google and the Future, with a particular emphasis on privacy. The conversation, which ran about 20 minutes, was edited down to a 12 minute report, mixing German and English.
The title here (also the title given the piece at DRadio Wissen) nicely sums up my argument: Google is a long-term focused company, with plenty of smart people and big ideas, but everything (for now) remains driven by advertising. Gmail, Maps, and all of its other services are offered solely as a way to bring eyeballs to Google's real customers, advertisers.
A couple of years ago, Christian Moran interviewed me for a series of short films he planned to make, focusing on reasons for optimism. That film series is now available at his website, and it's a decent variety of people grappling with big ideas from different perspectives. Technologists, scientists, journalists, artists, doctors... and me. The half-hour interview may be one of the best ones I've done, in terms of how well the ideas I'm trying to articulate come across.
A few caveats, though. Christian was really taken with a somewhat offhand comment I made in the course of the conversation and highlights it in his introduction; fortunately, it's not made the focus of the video. Also, remember that it was recorded in mid-2012, so if there's an obvious reference that I'm not including (e.g., Snowden stuff), that's why. Finally, I really need to not slouch, especially when I wear t-shirts and jackets.
* And yes, I know "alright" isn't grammatically correct, but it's his movie series and he can name it what he wants.
The Earth's Environment
"Some of the most thoughtful work on the topic of climate change..."
-- The Futurist (July/Aug 2009)
What do we do if our best efforts to limit the emission of greenhouse gases into the atmosphere fall short? According to a growing number of environmental scientists, we may be forced to try an experiment in global climate management: geoengineering.
Geoengineering would be risky, likely to provoke international tension, and certain to have unexpected consequences. It may also be inevitable.
Environmental futurist Jamais Cascio explores the implications of geoengineering in this collection of thought-provoking essays. Is our civilization ready to take on the task of re-engineering the planet?