Monthly Archives: August 2008

August 30, 2008

Methane Trigger for Geo & Bio Engineering

[Image: Siberia2.png]

Methane (CH4) is 20-25 times more powerful a greenhouse gas than carbon dioxide (CO2). We're quite familiar with one source of atmospheric methane -- enteric fermentation in cattle (see my innumerable posts about cheeseburger carbon footprints). But there's another source of methane that has the potential to be far greater in volume, and correspondingly far more threatening to the climate. It's the methane from bogs and marshlands that is trapped under the Siberian permafrost.

Well, that was trapped. As the permafrost melts, the methane is now starting to leak out.

Methane, a potent greenhouse gas, is leaking from the permafrost under the Siberian seabed, a researcher on an international expedition in the region told Swedish daily Dagens Nyheter on Saturday.

"The permafrost now has small holes. We have found elevated levels of methane above the water surface and even more in the water just below. It is obvious that the source is the seabed," Oerjan Gustafsson, the Swedish leader of the International Siberian Shelf Study, told the newspaper.

The tests were carried out in the Laptev and east Siberian seas and used much more precise measuring equipment than previous studies, he said.

And that's pretty much all that's been said, so far. It does seem to confirm Russian reports from a couple of years ago. But it's unfortunate that the reporters covering this didn't mention just how much methane is trapped under the permafrost in Siberia, because the amount is staggering.

The most conservative estimates I've seen start at around 70 billion metric tons of methane -- the equivalent in greenhouse terms to 1.6 trillion metric tons of CO2. As a point of comparison, the total annual greenhouse footprint in the US is about 7 billion tons; globally, the annual footprint is about 30 billion tons.
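To make those numbers concrete, here's the arithmetic, taking 23 as a midpoint of the 20-25x warming factor cited above (a rough illustration, not a modeled figure):

    GWP_METHANE = 23          # assumed midpoint of the 20-25x range above
    siberian_methane_gt = 70  # conservative estimate, billion metric tons of CH4

    co2e_gt = siberian_methane_gt * GWP_METHANE   # ~1,610 billion tons CO2-equivalent
    us_annual_gt = 7                              # approximate annual US footprint
    world_annual_gt = 30                          # approximate annual global footprint

    print(co2e_gt / us_annual_gt)     # ~230 years' worth of current US emissions
    print(co2e_gt / world_annual_gt)  # ~54 years' worth of current global emissions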

If this methane leak continues to increase, we may be facing a disastrous result that no amount of renewable energy, vegetarianism, and bicycling will help. This is one scenario in which the deployment of geoengineering is over-determined, probably needing to remain in place for quite a while as we try to remove the methane (or, at worst, wait for it to cycle out naturally over the course of a decade or so). It's also a scenario that might require large-scale use of bioengineering. As I wrote a few years ago, when the Russian reports started to come out:

Chemical processes in the atmosphere break down CH4 (in combination with oxygen) into CO2+H2O -- carbon dioxide and water. In addition, certain bacteria -- known as methanotrophs -- actually consume methane, with the same chemical results. [...] It appears to me that what will be the most effective means of mitigating and remediating the gargantuan methane excursion from the Siberian permafrost melt would be using genetically-modified forms of methanotrophic bacteria, with greater oxidation capacity and the Archaea-derived resistance to extreme cold (these may well go hand-in-hand, as one way that deep sea methanotrophs survive the icy depths is through internal energy production from methane consumption). Given the size of the region, we'll need lots of them, but that's another advantage of biology over straight chemistry: the methanotrophs would be reproducing themselves.
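For reference, here is the oxidation chemistry that passage describes, plus a rough sense of the greenhouse trade-off (again assuming the ~23x factor):

    \mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}

    \frac{44/16\ \ \mathrm{t\,CO_2\ per\ t\,CH_4}}{\sim 23\ \ \mathrm{t\,CO_2e\ per\ t\,CH_4}} \approx 0.12

In other words, oxidizing a ton of methane does release about 2.75 tons of CO2, but that CO2 carries only around an eighth of the methane's warming potential, which is the trade-off noted below.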

Is it a perfect solution? No -- it's unproven, with unknown implications, and (at the very least) would result in some levels of CO2 emissions (although with a far smaller greenhouse footprint than the original methane). But the leak of permafrost methane is one of those lesser-known stories that could end up determining whether we make it through this century or not. It's one of the reasons why I think that geoengineering is a near-certainty.

Such fun.

August 26, 2008

Tuesday Topsight, August 26, 2008

Lots of stuff, some of which I hope to get back to in more detail.

• Crowd (Re)Sourcing: Spot.us is a new bottom-up journalism site with a novel funding model: community members pool their money to pay journalists to go after a particular topic. The resulting story then shows up on the Spot.us site, and is pushed to various local media outlets as appropriate.

This isn't a model for breaking-news journalism, but rather for the deeper investigative stuff that blogs tend not to cover so well, and traditional media seems to have largely given up on. Stories underway include the problematic role of ethanol in California (fully-funded), fact-checking San Francisco political claims (almost there), and the connection between SF Bay Area cement kilns and global warming. I'll give you one guess as to where Spot.us is headquartered.

Here's the bigger drawback, though: it's not up yet. There's a blog, and a wiki, and even a Flickr stream of design ideas, but the real site, with real content, won't be up until the Fall.

Worth bookmarking now, though.

• Go North, Young Cow: Cattle, deer, and other grazing animals apparently tend to align themselves along a north-south axis when feeding. And the scientists who reported on this phenomenon, in the Proceedings of the National Academy of Sciences, used Google Earth to do their research.

Dr Sabine Begall, from the University of Duisburg-Essen, Germany, has mainly studied the magnetic sense of mole rats - African animals that live in underground tunnels.

"We were wondering if larger animals also have this magnetic sense," she told BBC News. [...] The researchers surveyed Google Earth images of 8,510 grazing and resting cattle in 308 pasture plains across the globe.

"Sometimes it took hours and hours to find some pictures with good resolution," said Dr Begall.

The scientists were unable to distinguish between the head and rear of the cattle, but could tell that the animals tended to face either north or south.

Their study ruled out the possibility that the Sun position or wind direction were major influences on the orientation of the cattle.

I'm not sure which is more notable: that cattle have a magnetic sense, or that real scientists writing for a real science journal did their research by looking at Google Earth.
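As an aside, the standard way to test this kind of axial alignment (where head and rear can't be told apart) is circular statistics on doubled angles. A minimal sketch, with made-up bearings rather than the PNAS team's actual data or code:

    import math

    def axial_alignment_strength(bearings_deg):
        """Mean resultant length of doubled angles: near 0 for randomly
        oriented animals, near 1 when they all share one axis. Doubling
        the angles treats 10 and 190 degrees as the same axis, since the
        imagery can't distinguish head from rear."""
        doubled = [math.radians(2 * b) for b in bearings_deg]
        c = sum(math.cos(a) for a in doubled) / len(doubled)
        s = sum(math.sin(a) for a in doubled) / len(doubled)
        return math.hypot(c, s)

    # hypothetical sample, mostly near the north-south (0/180 degree) axis
    sample = [2, 175, 8, 183, 12, 178, 95, 1, 170, 6]
    print(round(axial_alignment_strength(sample), 2))   # ~0.78, a strong shared axis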

• Blame It On the Oil: Andrew Leonard's How the World Works just posted a compelling new argument as to why the status of women in some Islamic countries remains so abysmal: oil. He quotes UCLA's Michael Ross at length; Ross observes that high oil prices make it cheaper to import goods rather than produce for export, reducing the number of low-end production jobs where women historically first make economic (then political) connections.

Ross (PDF), quoted by Leonard:

Oil production affects gender relations by reducing the presence of women in the labor force. The failure of women to join the nonagricultural labor force has profound social consequences: it leads to higher fertility rates, less education for girls, and less female influence within the family. It also has far-reaching political consequences: when fewer women work outside the home, they are less likely to exchange information and overcome collective action problems; less likely to mobilize politically, and to lobby for expanded rights; and less likely to gain representation in government. This leaves oil-producing states with atypically strong patriarchal cultures and political institutions.

Smart stuff. One of the places it makes me think about, though, is China. Young women moving from the countryside into the cities have, over the past couple of decades, found work in the big factories assembling consumer goods (it should also be noted that some of those women then move from the factories to nearby brothels in search of better wages). Has the export-driven structure of the Chinese economy fostered a stronger civil society, led by women?

• Sharks and Fishes: Over at The Oil Drum, Jeff Vail observes that the pattern of gas prices and oil prices bears a very strong resemblance to classic models of predator-prey population cycles.

It's not intended as anything more than an analogy, but like all good analogies, it serves as a catalyst for new perspectives.

The importance of this analogy is that it may help us to avoid certain policy mistakes (or at least be aware of them). When the oscillations of price and demand/production are superimposed on top of geological depletion and geopolitical feedback loops, the resulting volatility effectively masks the underlying fundamentals [...]. This presents several problems, each of which may be more avoidable if the medium-term fluctuations in price, production, and demand are seen as oscillations on top of a very worrying underlying trend of peak oil.
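For anyone who hasn't met the "sharks and fishes" models before, here's a minimal sketch of the classic Lotka-Volterra predator-prey system the analogy draws on; the parameter values are arbitrary, chosen only to produce the familiar out-of-phase oscillations:

    def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                       dt=0.01, steps=5000):
        """Crude Euler integration of the classic equations:
            d(prey)/dt = alpha*prey - beta*prey*pred
            d(pred)/dt = delta*prey*pred - gamma*pred
        The two populations chase each other in out-of-phase cycles, the
        pattern Vail sees echoed in oil and gas prices and production."""
        prey_hist, pred_hist = [prey], [pred]
        for _ in range(steps):
            d_prey = (alpha * prey - beta * prey * pred) * dt
            d_pred = (delta * prey * pred - gamma * pred) * dt
            prey, pred = prey + d_prey, pred + d_pred
            prey_hist.append(prey)
            pred_hist.append(pred)
        return prey_hist, pred_hist

    fish, sharks = lotka_volterra(prey=10.0, pred=5.0)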

I like seeing this kind of analysis, simply because it's exactly the kind of work that tends to kick-start new ideas.

(Section title reference.)

• The Uncanny Hype Cycle:

What does this chart:

[Image: Uncanny_Valley.png]

...have to do with this chart:

[Image: gartner-hype-cycle1.png]


Nothing, ostensibly, but I couldn't help but notice a real similarity in form between the Hype Cycle and the Uncanny Valley. They're not identical, of course; for one thing, in the Gartner "Hype Cycle," technology is somehow more visible when it's still in development (but talked about) than when it's actually in mainstream use. Still, it's something that makes me go hmm.

• Safer Orbital Mechanics: Finally, how do you deal with a problem like a possible asteroid strike? Wrap it for safety. That's the proposal of Australian engineering student Mary D'Souza for preventing a kinetic unpleasantness with asteroid Apophis, due for a close approach in 2029 and potentially giving the Earth a good whack in 2036. She argues that wrapping the 270-meter asteroid in a reflective sheet (like Mylar) would let reflected sunlight nudge its orbit ever so slightly. Do something like this early enough, and an ever-so-slight shift is enough to make the asteroid miss us by quite a bit.
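To see why "early enough" is the operative phrase, here's a deliberately crude back-of-the-envelope; the 1 mm/s nudge is a number picked purely for illustration, and this straight-line estimate ignores the orbital-mechanics effects that make small along-track changes even more effective over many orbits:

    SECONDS_PER_YEAR = 3.156e7

    def linear_miss_distance_km(delta_v_mm_per_s, lead_time_years):
        """Straight-line displacement from a tiny velocity change applied
        well before the encounter; a conservative floor on the real effect."""
        meters = delta_v_mm_per_s * 1e-3 * lead_time_years * SECONDS_PER_YEAR
        return meters / 1000.0

    print(linear_miss_distance_km(1.0, 20))   # ~631 km from a 1 mm/s nudge, 20 years out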

Works for me.

Viropiracy?

[Image: avian-flu.jpg]

Here's a term to add to the jargon pile: Viral Sovereignty.

This extremely dangerous idea comes to us courtesy of Indonesia's minister of health, Siti Fadilah Supari, who asserts that deadly viruses are the sovereign property of individual nations -- even though they cross borders and could pose a pandemic threat to all the peoples of the world.

The Indonesian argument -- now set to be ratified by the Non-Aligned Movement general gathering in November -- is that the information derived from viruses found in a particular country should be the property of that country to control as it sees fit.

The analogy here is to the properties of local plants and animals. In the past, it wasn't uncommon for big-country companies to come into a developing nation, look around for interesting naturally-occurring products, and patent globally anything they found -- a practice that became known as "biopiracy." Brazil, India, and other leapfrog powerhouses started to push back both politically and legally, often successfully using claims of "prior art" to defeat patents. Traditional Knowledge Libraries and similar data-gathering projects hope to make biopiracy a thing of the past by carefully documenting local uses.

Yay, good work, and all that (seriously). But the assertion of sovereign control over virus strains seems to push the boundaries of legitimacy.

The focus of Indonesia's complaint is Avian Flu, H5N1. Despite Indonesia being a hot zone for H5N1 infections, the Jakarta government no longer cooperates with the World Health Organization, refusing to provide samples of the virus taken from infected people, or even providing timely notification of outbreaks.

Indonesia claims that the US Naval Medical Research Unit in Indonesia, which has focused its attention on H5N1, is actually a front for biowarfare against the Islamic world, a front for corporations looking to monopolize treatments for the viruses, a front for corporations looking to use the viruses to make people sick in order to sell more treatments, and even the source of H5N1 in Indonesia itself.

All of this would be silly and tragic, were it not for the endorsement of the concept of viral sovereignty by the Indian Health Minister, and the agreement of the Non-Aligned Movement to formally consider endorsing Indonesia's claims in its next meeting.

As Richard Holbrooke and Laurie Garrett make clear in their editorial earlier this month -- and as I've written about, myself -- it's extraordinarily important for information about potential pandemic diseases to be made as open as possible, if we want to avoid a global health disaster. Withholding viral data, and refusing to provide samples of the viruses, out of a misplaced fear of viropiracy (or more paranoid fantasies), is simply criminal.

August 22, 2008

Thinking About Thinking

Here's the opening of a work in progress....


Seventy-four thousand years ago, humanity nearly went extinct. A super-volcano at what's now Sumatra's Lake Toba erupted with a strength more than a thousand times greater than that of Mount St. Helens in 1980. Over 800 cubic kilometers of ash filled the skies of the northern hemisphere, lowering global temperatures and pushing a climate already on the verge of an ice age over the edge. Genetic evidence shows that at this time – many anthropologists say as a result – the population of Homo sapiens dropped to as low as a few thousand families.

It seems to have been a recurring pattern: Severe changes to the global environment put enormous stresses on our ancestors. From about 2.3 million years ago, up until about 10,000 years ago, the Earth went through a convulsion of glacial events, some (like the post-Toba period) coming on in as little as a few decades.

How did we survive? By getting smarter. Neurophysiologist William Calvin argues persuasively that modern human cognition – including sophisticated language and the capacity to plan ahead – evolved due to the demands of this succession of rapid environmental changes. Neither as strong, nor as swift, nor as stealthy as our competitors, the hominid advantage was versatility. We know that the complexity of our tools increased dramatically over the course of this period. But in such harsh conditions, tools weren't enough – survival required cooperation, and that meant improved communication and planning. According to Calvin, over this relentless series of whiplash climate changes, simple language developed syntax and formal structure, and a rough capacity to target a moving animal with a thrown rock evolved into brain structures sensitized to looking ahead at possible risks around the corner.

Our present century may not be quite as perilous as an ice age in the aftermath of a super-volcano, but it is abundantly clear that the next few decades will pose enormous challenges to human civilization. It's not simply climate disruption, although that's certainly a massive threat. The end of the fossil fuel era, global food web fragility, population density and pandemic disease, as well as the emergence of radically transformative bio- and nanotechnologies – all of these offer ample opportunity for broad social and economic disruption, even devastation. And as good as the human brain has become at planning ahead, we're still biased by evolution to look for near-term, simple threats. Subtle, long-term risks, particularly those involving complex, global processes, remain devilishly hard to manage.

But here's an optimistic scenario for you: if the next several decades are as bad as some of us fear they could be, we can respond, and survive, the way our species has done time and again: By getting smarter. Only this time, we don't have to rely solely on natural evolutionary processes to boost intelligence. We can do it ourselves. Indeed, the process is already underway.

August 18, 2008

Monday Topsight, August 18, 2008

Special Future of War edition: robots, lasers, brain weapons, and a little thing called "strategic thinking."

• 174th Robot Wing: The 174th Fighter Wing of the US Air Force has flown its last fighter mission, and is being replaced by an all-RPV (Remotely Piloted Vehicle) squadron. The MQ-9 "Reaper" is a real combat aircraft, carrying literally a ton of bombs; it can also stay in operation for over 14 hours straight, uses far less fuel, and costs two-thirds less than the F-16s it replaces.

Put simply, it's cheaper, more effective, and safer (for pilots) to use Reapers (or similar aircraft) for a lot of the ground support work. Fighters are still needed to keep the skies clear of enemy aircraft, although Reapers are better suited for the dangerous work of destroying enemy air defenses. But for fighting irregulars, the Reaper is king.
[Image: fearthereaper.png]

It's unclear how much longer the superiority of fighters for air-to-air combat will last, especially if you can get three Reapers in the air for the cost of one Falcon.

These aren't true robots, of course -- they're remote vehicles, with human operators on the ground using radio controls. This means that sticky questions about autonomous systems pulling the trigger on human targets remain on the horizon. It also means that we'll probably see even more effort put into figuring out ways to jam or take over the radio links.

Finally, it's not hard to imagine that such vehicles would be more likely to be used in situations which would previously have been avoided in order to not put human pilots in danger.

• ZZZZZZZZZAP!: Question is, how long until these remotely-piloted vehicles get outfitted with high-energy lasers for long-distance pinpoint attacks? Right now, the Advanced Tactical Laser system requires a big old C-130 cargo aircraft. But -- if it works the way the Air Force claims (always a big if) -- it really does change the nature of tactical conflict.

The accuracy of this weapon is little short of supernatural. They claim that the pinpoint precision can make it lethal or non-lethal at will. For example, they say it can either destroy a vehicle completely, or just damage the tires to immobilize it. The illustration shows a theoretical 26-second engagement in which the beam deftly destroys "32 tires, 11 Antennae, 3 Missile Launchers, 11 EO devices, 4 Mortars, 5 Machine Guns" -- while avoiding harming a truckload of refugees and the soldiers guarding them.

Over at New Scientist, David Hambling explores some of the implications of a system like this. Since the ATL can "deliver the heat of a blowtorch with a range of 20 kilometers," it's not hard to imagine its use for covert operations. With a laser, there are no munition fragments to identify what hit the target, only an "...instantaneous burst-combustion of insurgent clothing, a rapid death through violent trauma, and more probably a morbid combination of both."

("It happens sometimes. People just explode. Natural causes.")

• Braaaaiinnnnnssssss: Mind bombs and lie disruptors and super-soldiers, oh my. The Guardian gives us a peek at the future of war, and this time, it's heavily medicated.

On the battlefield, bullets may be replaced with "pharmacological land mines" that release drugs to incapacitate soldiers on contact, while scanners and other electronic devices could be developed to identify suspects from their brain activity and even disrupt their ability to tell lies when questioned... Drugs could also be used to enhance the performance of military personnel.

Of course, the first would be restricted by existing chemical weapons treaties -- and while we've seen in recent years that treaties are only as good as the people willing to abide by them, it is an issue -- and the second is one of those "real soon now" developments that remains perpetually on the horizon. As for the last one, the drug-enhanced soldiers, get in line: The military will be following the commercial market, not leading it.

• Whoops. Our Mistake: Of course, this all assumes that war has a future. In some cases, it really is the worst option, at least according to those crazy left-wingers at the RAND Corporation:

The comprehensive study analyzes 648 terrorist groups that existed between 1968 and 2006, drawing from a terrorism database maintained by RAND and the Memorial Institute for the Prevention of Terrorism. The most common way that terrorist groups end -- 43 percent -- was via a transition to the political process. However, the possibility of a political solution is more likely if the group has narrow goals, rather than a broad, sweeping agenda like al Qaida possesses.

The second most common way that terrorist groups end -- 40 percent -- was through police and intelligence services either apprehending or killing the key leaders of these groups. Policing is especially effective in dealing with terrorists because police have a permanent presence in cities that enables them to efficiently gather information, Jones said.

Military force was effective in only 7 percent of the cases examined; in most instances, military force is too blunt an instrument to be successful against terrorist groups, although it can be useful for quelling insurgencies in which the terrorist groups are large, well-armed and well-organized, according to researchers. In a number of cases, the groups end because they become splintered, with members joining other groups or forming new factions. Terrorist groups achieved victory in only 10 percent of the cases studied.

The key point of comparison here: a terrorist group is more likely to achieve its desired goals than to be put down by military force.

You can download the research monograph for free as a PDF, or buy it in paperback.

August 13, 2008

...And Lest You Think I Was Just Kidding...

Here's a very early version of an augmented reality system for the iPhone from ARToolworks.

(Soundtrack Warning: The 1990s wants its rave music back.)

August 12, 2008

Making the Visible Invisible

The Metaverse Roadmap Overview, an exploration of imminent 3D technologies, posited a number of different scenarios of what a future "metaverse" could look like. The four scenarios -- augmented reality, life-logging, virtual worlds, and mirror worlds -- each offered a different manifestation of an immersive 3D world. Of the four, I suspect that augmented reality is most likely to be widespread soon; moreover, when it hits, it's going to have a surprisingly big impact. Not just in terms of "making the invisible visible" -- showing us flows and information that we otherwise wouldn't recognize -- but also in terms of the opposite: making the visible invisible.

Augmented reality (AR) can be thought of as a combination of widely-accessible sensors (including cameras), lightweight computing technologies, and near-ubiquitous high-speed wireless networks -- a combination that's well-underway -- along with a sophisticated form of visualization that layers information over the physical world. The common vision of AR technology includes some kind of wearable display, although that technology isn't as far along as the other components. For that reason, at the outset, the most common interface for AR will likely be a handheld device, probably something evolved from a mobile phone. Imagine holding up an iPhone-like device, scanning what's around you, seeing various pop-up items and data links on your screen.

[Image: Handheld Augmented Reality]

That's something like what an early AR system might look like.
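As a rough sketch of what the software side of that hand-held scenario involves, here's the general shape of one annotation pass; everything in it (the point-of-interest data, the field of view, the screen mapping) is a hypothetical stand-in, not anyone's actual AR API:

    from dataclasses import dataclass

    @dataclass
    class PointOfInterest:
        name: str
        bearing_deg: float    # direction from the viewer, 0 = north
        distance_m: float
        info_url: str

    def annotate_frame(pois, device_heading_deg, fov_deg=60,
                       screen_width_px=480, max_range_m=250):
        """One pass of a hand-held AR loop: keep the points of interest that
        fall inside the camera's field of view and within range, and map each
        to a horizontal screen position where its pop-up label would be drawn."""
        labels = []
        for poi in pois:
            offset = (poi.bearing_deg - device_heading_deg + 180) % 360 - 180
            if abs(offset) <= fov_deg / 2 and poi.distance_m <= max_range_m:
                x = int((offset / fov_deg + 0.5) * screen_width_px)
                labels.append((poi.name, poi.info_url, x))
        return labels

    # hypothetical scene: two tagged locations near the viewer
    scene = [PointOfInterest("Cafe Metro", 40.0, 80.0, "http://example.org/cafe"),
             PointOfInterest("Transit stop", 200.0, 120.0, "http://example.org/stop")]
    print(annotate_frame(scene, device_heading_deg=45.0))   # only the cafe is in view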

I have what I think is a healthy, albeit a bit perverse, response when I think about new technologies: I wonder how they can be used in ways that the designers never intended. Some of those uses may be beneficial (think of them as "off-label" uses), while others will be malign. William Gibson's classic line that "the street finds its own uses for things" captures the ambiguity of this question.

The "maker society" argument that has so swept up many in the free/open source world is a positive manifestation of the notion that you don't have to be limited to what the manufacturer says are the uses of a given product. A philosophy that "you only own something if you can open it up" pervades this world. There's certainly much that appeals about this philosophy, and it's clear that hackability can serve as a catalyst for innovation.

You're probably a bit more familiar with a basic example of the negative manifestation: spam and malware.


The Internet, email, the web, and the various digital delights we've brought into our lives were not designed with advertising or viruses in mind. It turned out, however, that the digital infrastructure was a lush environment for such developments. Moreover, the most effective steps we could take to put a lid on spam and malware would also undermine the freedom and innovative potential of the Internet. The more top-down control there is in the digital world, the less of a chance spam and malware have to proliferate, but the less of a chance there is to do disruptive, creative things with the technology. The Apple iPhone application store offers a clear example of this: the vetting and remote-disable process Apple uses may make harmful applications less likely to appear, but also eliminates the availability of applications that do things outside of what the iPhone designers intended. (Fortunately, the iPhone isn't the only interesting digital tool around.)

It seems likely to me that an augmented reality world that really takes off will out of necessity be one that offers freedom of use closer to that of the Internet than of the iPhone. Top-down control technologies will certainly make a play for the space, but simply won't be the kind of global catalyst for innovation that an open augmented reality web would be. An AR world dominated by closed, controlled systems will be safe, but have a limited impact.

This means, therefore, that we should expect to see spam and malware finding its way into the AR world soon after it emerges. Of the two, malware is more of a danger, but also more likely to be controllable by good system design (just as modern operating systems are more resistant to malware than the OSes of a decade ago). Spam, conversely, is unlikely to be stopped at its source; instead, we'll probably use the same reasonably-functional solution we use now: Filtering. Recipient-side filtering has become quite good, and users with well-trained spam filters see just a tiny fraction of their incoming junk email. Spam is by no means a solved problem, but it's become something akin to a chronic, controllable disease.
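Just to make the recipient-side mechanism concrete, here's a toy version of that kind of filter; the token weights are made up, and real filters (Bayesian or otherwise) are considerably more sophisticated, but the shape is the same:

    def spam_score(message, weights):
        """Sum per-token weights learned from mail the user has already
        flagged as junk or legitimate; above a threshold, file it away."""
        return sum(weights.get(token, 0.0) for token in message.lower().split())

    learned_weights = {"refinance": 3.0, "winner": 4.0, "meeting": -2.0, "agenda": -1.5}
    message = "Refinance today and be a winner"
    print("junk" if spam_score(message, learned_weights) > 2.0 else "keep")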

It turns out, though, that the development of filtering systems for augmented awareness technologies would offer startling opportunities to construct our own visions of reality.

As an AR user, I would want to avoid seeing pop-up labels or data-limnals advertising products or services I wasn't explicitly looking for. Take the hand-held AR image above -- if you look closely, you'll see that there's a pop-up advertisement visible. Is this spam? Or just a regular ad? If we define spam as an "unwanted commercial message," it's definitely that, even if there's no attempt to hide where it comes from.

[Image: pop-up-ad-AR.jpg]

Spam or ad, it's just the sort of thing I'd want suppressed. But what if, besides being annoyed by the digital ad, I wanted to get rid of the physical-world ads, too? Wouldn't it be nice to block any kind of unwanted commercial message? Not much of a point in doing so with a hand-held device, of course; but if we are moving to a world of wearable augmented reality displays (as glasses, perhaps, or, as Vernor Vinge describes in Rainbows End, as contact lenses), then something that would let me block images I didn't like might become more useful.

As long as the AR device is a passive artifact, only responding to messages sent to it by local info-tags, it's limited in what it can block. But most discussions of augmented reality embrace the notion that AR systems will have cameras to observe the world around you, and to "notice" things that you'd find interesting. Connected to the net and various data-sources, such a system would be able to tell you quite a bit about what -- and who -- you're looking at. Here's an image I've used for a while, showing an active AR system with a reputation manager application:

[Image: google-rep-mgr.jpg]

So let's combine these ideas.

The camera-enabled augmented reality device, able to do basic image recognition (probably a bit of best-guess text recognition, combined with map records of what's around, local tags, etc.), could easily include a feature that not only blocks the digital ads you don't want to see, but the physical-world ads, as well. Given how popular ad-blocking widgets are for web browsers, and the fast-forward-over/skip commercial features of TiVos, such a system is almost over-determined once the pieces become available. With the first version of the AR device, this gives you something like...

[Image: handheldAR-adblocked.png]

But remember that this technology now can recognize people (either by face, or by what they carry). What if, instead of just blocking advertisers, I wanted to block out the people who annoyed me? Let's say (to be non-partisan this time), I didn't like anyone who worked in the advertising industry, and I didn't even want to see their faces. That leads to...

[Image: google-rep-mgr-adblocked.jpg]

Of course, all of those blurry, whited-out spaces can get annoying and distracting, so I'd want to replace them with alternative images. Ads I do want, possibly, or images pulled from my own photo/art stream. For the faces, probably just random recently-seen (but not recognized as "known") faces from other people. It doesn't have to be perfect, it just has to be enough to not interrupt your attention (in fact, you'd want it to be slightly imperfect, so that you don't mistakenly try to read a sign or speak to someone you're actually blocking).
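Pulling those pieces together, this is the shape of the filtering loop I'm describing; all of the detectors, the blocklist entries, and the replacement images are hypothetical stand-ins for whatever an actual AR system would provide:

    from dataclasses import dataclass
    from typing import Callable, List, Set, Tuple

    Box = Tuple[int, int, int, int]   # x, y, width, height in screen pixels

    @dataclass
    class DetectedRegion:
        kind: str        # e.g. "ad" or "face"
        identity: str    # recognized brand or person, "" if unknown
        box: Box

    def filter_view(regions: List[DetectedRegion], blocked: Set[str],
                    replacement_for: Callable[[DetectedRegion], str]) -> List[Tuple[Box, str]]:
        """For each recognized region, decide whether to paint something else
        over it; returns (screen box, replacement image id) pairs for the
        renderer. Unrecognized regions pass through untouched, which also
        keeps the substitutions slightly imperfect, as noted above."""
        return [(r.box, replacement_for(r)) for r in regions
                if r.kind in blocked or r.identity in blocked]

    # hypothetical frame: one billboard and one recognized advertising exec
    frame = [DetectedRegion("ad", "SodaCo billboard", (40, 10, 200, 80)),
             DetectedRegion("face", "J. Doe, AdCorp", (300, 60, 60, 60))]
    block_list = {"ad", "J. Doe, AdCorp"}
    swap = lambda r: "photo_stream/random.jpg" if r.kind == "ad" else "faces/unfamiliar_03.jpg"
    print(filter_view(frame, block_list, swap))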

This probably seems a bit fanciful, an excuse to play with Photoshop a bit. But this is something I've been mulling for some time, and feels more likely every time I come back to it. The moment that we can easily display location-aware images on an augmented reality system, we'll have people trying to block images they don't like. Forget ads (or advertisers) -- we'll have people wanting to block even slightly suggestive images, people with beliefs they don't like, anything that would upset the version of reality they've built for themselves.

The flip side of "show me everything I want to know about the world" is "don't show me anything I don't want to know."

August 6, 2008

Mozilla Scenarios

[Image: aurora-top-image.png]

Last year, I mentioned obliquely that I had been asked to work on something very, very cool, but couldn't talk about it. Finally, I can: I joined with Adaptive Path to create a set of scenarios about the future of the Internet, used to build a model of what a future version of the web browser could look like. Adaptive Path and Mozilla have now announced that model, dubbed Aurora, with a series of videos demonstrating its use.

Today, Adaptive Path chief Jesse James Garrett put up the original scenarios, and described a bit of the thinking.

Jamais called on a whole lot of smart people and led them (and a bunch more from both Adaptive Path and Mozilla) through a two-day workshop to forecast one possible future for browsers and the Web. Through a series of group exercises, we identified three major trends that we thought would have the biggest impact on the web:
  • Augmented Reality: The gap is closing between the Web and the world. Services that know where you are and adapt accordingly will become commonplace. The web becomes fully integrated into every physical environment.
  • Data Abundance: There’s more data available to us all the time — both the data we produce intentionally and the data we throw off as a by-product of other activities. The web will play a key role in how people access, manage, and make sense of all that data.
  • Virtual Identity: People are increasingly expected to have a digital presence as well as a physical one. We inhabit spaces online, but we also create them through our personal expression and participation in the digital realm.

You can read the scenarios here.

They've been released under a Creative Commons license (Non-Commercial/Attribution/Share-Alike), so if the mood strikes you to play with these stories a bit, feel free.

I'll be on a panel with Jesse next week at the UX Week conference, talking about the Aurora project and the future of the web.

[Updated 10/25/11 to new location for scenarios.]

I'm In Ur Blog, Saving Ur World

[Image: hycstw.png]

First fun tidbit of the day: the SciFi Channel's new group blog, How You Can Save the World, just launched. Contributors include heavy-hitters like Richard Branson, Michio Kaku, former CIA chief John Deutch, Esther Dyson, and Dean Kamen, among many others. I'm there, too, one of the token "who the hell is this guy?" guys -- emphasis on the guys. Only two of the 19 current contributors are female; put another way, only four of the 19 aren't middle-aged or older white guys (and yes, I'm counting myself among the middle-aged group). I'll see what I can do about helping them change that mix.

My initial contribution isn't yet up, but should be soon.

August 3, 2008

Future Salon: A Greener Tomorrow -- The Video

This is the talk I gave at the Future Salon meeting in April of this year.

The video quality isn't great, and there are a couple of points where the talk jumps a few seconds (changing videotape, I suspect). Nonetheless, this was a pretty good event, and this is the most complete version of the green tomorrows story I've yet given. It's 95 minutes total -- the presentation itself runs a little over an hour, with about 30 minutes of Q&A afterwards.

If you get a chance to watch it, let me know what you think.

August 1, 2008

Solar Hydrogen (Update: Not So Much the Solar)

[Updated, changes made throughout.] A possible breakthrough at MIT in energy storage: storing generated electricity as hydrogen, using a new, incredibly cheap and easy process that works much like photosynthesis. This could be big, and it could give a new boost to the fuel cell field.

For a few years now, I've been in the "hydrogen is a dead-end" camp (whose most prominent member is probably Joe Romm, author of The Hype About Hydrogen). The compromises required to get a hydrogen infrastructure up and running -- not the least of which is abandoning the clean path by reforming hydrocarbons rather than cracking water -- coupled with the clear advances in hybrid and full-electric vehicle technologies, have really put hydrogen out of the running as a technology path worth pursuing, in my view. Ultracapacitors and nano-enabled batteries seem like the winners, and given how low-profile the fuel cell world has been in the last couple of years, it seemed like my view wasn't at all uncommon.

But along comes MIT's Daniel Nocera, with a new method -- similar to the way that plants derive energy from sunlight -- that he claims will turn regular pH-neutral water into oxygen and hydrogen using low-cost, easily-obtained materials. (Science abstract here.)
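For context, whatever the catalyst, the underlying reaction is plain water splitting, and no catalyst can push it below the thermodynamic minimum:

    2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}, \qquad \Delta G^{\circ} \approx +237\ \mathrm{kJ\ per\ mol\ of\ H_2}\ (\approx 1.23\ \mathrm{V})

The claimed advance is in the cost of the catalyst and its tolerance for ordinary, neutral water, not in the energy bill itself, which is part of why the electricity source shouldn't matter (as the next paragraph notes).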

Nocera argues that this will make solar the dominant energy-producing technology, not simply through direct electricity generation, but through the production of hydrogen for fuel cells, which can be used in vehicles, for overnight power, and so forth. I'm unclear as to why Nocera is emphasizing solar here -- if this is as much of a breakthrough as he claims, it would be applicable to any kind of electricity generation.

Fuel cells actually make a great deal more sense as a building power system than for cars, in my view. Issues around weight and density of the storage of hydrogen are far less problematic when all the fuel cell power systems have to do is sit on the ground. Similarly, public concerns about the safety of hydrogen (the Hindenburg will haunt us all for decades more) can be more readily alleviated when the fuel cell has a near-zero likelihood of being in a collision.

I'm still inclined to lean towards battery/ultracapacitor electrics over fuel cells for transportation power, but I'm happy to see revived competition from the hydrogen sector.

Jamais Cascio

Contact Jamais • Bio

Co-Founder, WorldChanging.com

Director of Impacts Analysis, Center for Responsible Nanotechnology

Fellow, Institute for Ethics and Emerging Technologies

Affiliate, Institute for the Future

This weblog is licensed under a Creative Commons License.