New Science Archives

October 28, 2003

Neuromarketing for fun and profit. Well, okay, just profit.

According to an article in last Sunday's New York Times (free sub required, but you knew that already), technologies such as MRIs, which allow the scanning of brain function, are increasingly being used to understand why people are moved by advertising, how consumerism shapes (and is shaped by) identity, and what marketers can do to take advantage of this.

The article refers to this as "neuromarketing," and suggests that it might be the next revolution in consumer culture. Some ads result in stimulation of pleasure centers, some in the stimulation of identity centers, and others in the stimulation of cognitive centers. The tricky part is that which sections of the brain get the most stimulation varies from person to person. Presumably, this stimulation will also shift over time, as consumers develop brand loyalties and react to shifting styles.

This hits close to home for me, not because I'm in marketing or have access to a home MRI kit, but because I just finished a science fiction game book called Toxic Memes which spends quite a bit of time discussing the implications of a world where brain functions have been fully mapped and many people wear devices that tap directly into cognitive functions. I figured something like this would happen soon, but not this soon.

We still understand only a small fraction of how the brain works, but neuroscience is learning more every day. The development of vision systems for the blind that tap directly into the brain and implants that allow a primate brain to control remote devices would have been considered fanciful science fiction a decade ago, and will seem primitive and clumsy a decade from now. What will the world look like when we can send instant messages to each other not through thumb-driven handheld devices, but simply by thinking? Or when we can tap into each other's cognitive abilities in order to make big decisions? What will "grid" neurosystem networks look like?

You think things are weird now? Just wait.

October 30, 2003

Countering the Misuse of Biotechnology

Rob Carlson knows biotech. He should - he's a research fellow at the Molecular Sciences Institute in Berkeley, and a research scientist at the Microscale Life Sciences Center at the University of Washington. When he talks about just how emerging biotechnologies could be misused, and what we can do about it, pay attention.

Now, the traditional view among many scientists and science-enthusiasts is that the dangers of people with bad intent getting their hands on powerful biotechnologies are so great that we must clamp down, censoring the public release of research which could be used by bioterrorists.

Carlson disagrees. In the most recent issue of the journal Biosecurity and Bioterrorism, he argues that, instead, our best defense is openness. Closing research, he says, would lead directly to black markets, driving much research underground, making it all the more difficult to monitor and respond to unsanctioned and irresponsible work.

I've advocated this position for quite a while. Locking down research and information doesn't keep us safe, it just makes it harder to recognize when a problem has occurred, inhibits effective response, and pushes those responsible for controlling the information to under-report violations in order to protect their own jobs. It's good to see this argument made by a respected scientist in a reputable journal.

November 5, 2003

Democratic Transhumanism

If you could use biotech, nanotech, and intelligence augmentation to repair and rebuild your body, would you? Many people would say no, but a growing number of people say "bring it on*." The notion of using emerging technologies to make oneself better, stronger, faster (and smarter, longer-lived, with better breath, etc.) is known these days as "transhumanism." Problem: most of the vocal proponents of this sort of stuff tend to be adamantly, aggressively cyberlibertarian. On their nice days.

So where are the post-humans who actually like people? A goodly number of them hang out at the Cyborg Democracy blog, which calls itself a home for

democratic transhumanists, nanosocialists, revolutionary singularitarians, non-anthropocentric personhood theorists, radical futurists, leftist extropians, bioutopians and biopunks, socialist-feminist cyborgs, transgenders, body modifiers, basic income advocates, agents of the Culture and the Cassini Division, Viridians and technoGaians - transmitting a sexy, high-tech vision of a radically democratic future

Very cool. I'm all for it.

The blog seems to be all over the map, covering culture and tech and politics, both U.S. and international, usually (but not always) with a strong Transhumanist bent. Definitely worth checking out.

*...and I have to say, after dealing with a month-long series of arthritis attacks, I can more than sympathize.

November 12, 2003

Playing Nice with the Future

We here at WorldChanging are great fans of foresight. The idea of actually thinking through the implications of emerging trends, technologies, and whatnot fills us with heady glee. When the possibilities are simultaneously distant and world-changing (ahem), foresight is all the more important. It gives us all a chance to consider options, prepare for challenges, and attempt to generate a bit of wisdom without having to go through more painful experiences first.

Which is all to say that I'm very happy to note the activities of the Center for Responsible Nanotechnology. A combination research team, advocacy group, and news source, CRN focuses on the ethical, legal, and social implications of molecular nanotechnology. They approach the issue with a bias that nanotechnology has the potential to do very good things for humanity, and thus research and development should continue -- but, because of the dangers inherent to the technology, such R&D should be done responsibly and carefully.

November 14, 2003


Biologists at the Institute for Biological Energy Alternatives, led by Dr. Craig Venter -- of mapping the human genome fame -- have constructed a "bioactive" bacteriophage from scratch. This isn't the very first time a virus has been constructed from off-the-shelf chemicals (a synthetic polio virus was built a bit over a year ago), but this virus took only two weeks to create (the polio virus took several years to build), and is indistinguishable from its natural counterpart (the synthetic polio virus had numerous genetic defects). This is the first real evidence that functional biological organisms can be constructed gene-by-gene.
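
The gene-by-gene construction described above boils down to joining short synthetic DNA fragments wherever the end of one matches the start of the next. Here's a toy sketch of that overlap-joining idea -- the fragment sequences are made up for illustration, and the real phiX assembly used far more fragments and enzymatic stitching:

```python
# Toy illustration of overlap-based assembly: short synthetic fragments
# are joined wherever the end of one matches the start of the next.
# (Fragment sequences here are hypothetical, purely for illustration.)

def merge(left, right, min_overlap=3):
    """Join two fragments if a suffix of `left` matches a prefix of `right`."""
    for size in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:size]):
            return left + right[size:]
    return None

fragments = ["ATGCCG", "CCGTTA", "TTAGGC"]
genome = fragments[0]
for frag in fragments[1:]:
    genome = merge(genome, frag)

print(genome)  # ATGCCGTTAGGC
```

The real process is vastly harder, of course, but the logic -- assemble a whole from precisely overlapping synthetic pieces -- is the same.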

The implications of this are enormous; in many ways, this is a far more important biotech development than cloning. While the bacteriophage constructed by the team, "phiX," is an existing virus, this was a necessary first step for the construction of wholly original forms of life. Venter's crew is focusing on building microbes for the production of energy resources (such as hydrogen), but bacteriophages may have broad applications, including serving as novel forms of antibiotics when traditional medicines fail.

But larger questions loom. If novel life forms can be built using off-the-shelf material and well-understood techniques, how do we defend against misuse? (I have already posted some ideas about this issue.) If a lab builds a new organism, do they own it? Is it a patented product, a copyrighted genome? Are we about to enter the era of GRM, or Genetic Rights Management?

November 20, 2003

Fruit Fly Protein Map

Although the Human Genome Project (and the various plant & animal genome projects that preceded it and continue on) was often hyped as the key to unlocking human biology, it's only the first step in a bigger process. Genes code for proteins. Of far greater utility than a genome map -- and of far greater complexity -- is a map of protein interaction, sometimes called a "proteome." Proteins form the building blocks of tissues, and their interactions are the basis for biological systems. In short, proteins carry out the day-to-day work of being alive.

A draft map has just been completed of the protein interactions for Drosophila melanogaster, the fruit fly. The abstract is available here; the full article PDF is here. According to the researchers (at CuraGen and a variety of universities), the "map serves as a starting point for a systems biology modeling of multicellular organisms including humans." They also state in their report that they intend for the map "to serve as a public resource for interested scientists."
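
An interaction map like this is, at heart, a graph: proteins are the nodes, observed interactions are the edges. A minimal sketch (the protein names below are placeholders, not actual Drosophila proteins):

```python
# A protein interaction map as a graph: proteins are nodes, observed
# interactions are edges. Names here are hypothetical placeholders.
from collections import defaultdict

interactions = [("ProtA", "ProtB"), ("ProtB", "ProtC"), ("ProtA", "ProtD")]

graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)

# Which proteins interact directly with ProtA -- and so might be
# affected if ProtA is perturbed?
partners = graph["ProtA"]
print(sorted(partners))  # ['ProtB', 'ProtD']
```

The published map is the same structure scaled up to thousands of proteins, which is exactly what makes it useful for tracing indirect effects of a genetic change.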

Believers in the precautionary principle and the application of responsibility and foresight to biotech research should be really pleased by this. Protein interaction maps are critical for understanding the more subtle results of genetic manipulation. Among the concerns reasonable people have about bioengineering is the possibility of unforeseen interactions between apparently distinct biological systems. As biologists build more of these proteomic maps, we'll be better able to avoid problems down the road -- and have a better idea of how to fix things to survive in a rapidly changing climate.

November 21, 2003

Nanoassembly via DNA

Scientists in Israel have built working transistors using carbon nanotubes self-assembled via binding to DNA, according to New Scientist.

Carbon nanotubes have remarkable properties, including the ability to function as conductors, resistors, and semi-conductors, depending upon how they're structured. Building circuits using nanotubes has been an expensive, time-consuming process, however. Using the biological method of self-assembly, costs could drop dramatically.

All very cool and nanotechy and still a decade or more away, so... so what?

For me, the key 'so what' is that this underscores the degree to which the key to the future will be biology. Nanotechnology, material science, and information technology are all gradually finding themselves under the umbrella of biotechnology. It is quite likely that, in the coming years, understanding how these now-disparate technological systems work will require understanding how biological systems work.

As environmental shifts (and the corresponding social problems) loom ever-larger around us, a shift towards approaches intended not to replace or control biology, but to work with it -- to collaborate with biology, if you will -- are more likely to be both sustainable and successful.

November 24, 2003

The Elegant Universe

If I have an underlying theory or agenda in my postings here, it's that understanding -- knowledge -- is the fundamental tool for making the world a better place. In most cases, there is a clear link between improved understanding of the world around us and a course of action. But not always. Sometimes, the value of knowledge is not in its application, but in its creation.

The Elegant Universe is the title of a book and six hours of NOVA, the PBS science showcase. PBS has now put the full six hours of The Elegant Universe up on its website in streaming video, in both Quicktime and RealVideo formats.

The Elegant Universe talks about string theory, the latest attempt to create a unified "theory of everything" -- a way to consistently explain it all, from the minutiae of quantum interactions to the universe-spanning effects of gravity. Along the way, string theory results in 11 dimensions, parallel universes, and tears in the fabric of space. The NOVA episodes, hosted by physicist Brian Greene (author of the book), tell the story of string theory in a way that both engages and illuminates.

Watch it, and be enthralled. It may not give you practical advice for fixing the environment or saving the planet, but it will give you a greater understanding of how the planet fits into the universe as a whole.

November 26, 2003

The Future is Plastic

Plastic was to the 1960s what cryonics was to the 1980s -- symbolic of the Future. While freezing one's head after death never really made it to the mainstream, plastics are all around us. With a couple of recent developments, plastic may well again be the wave of the future.

MIT has just announced the development of a new method for creating and forming plastics. Normally, plastic shapes are made at fairly high temperatures, melting polymers and pouring them into molds. Plastic objects made in this way have limited recyclability, as the heating and cooling process weakens the polymers -- so-called "thermal degradation." The MIT method can shape plastics at room temperature using high pressure, resulting in "baroplastics" which can be reshaped with no thermal degradation. Plastic objects created with this process require less energy to be produced, too. Less energy use, more recyclable... works for me.

But squeezing plastics into shape isn't the only recent breakthrough. An engineering professor at USC has invented a low-temperature method of doing plastic sintering, more popularly known as 3D printing or fabbing. 3D printers are a relatively recent invention, using powdered polymers (and, occasionally, metals) and a high-powered laser to build up objects layer-by-layer. Originally used for rapid prototyping, 3D printers are now used by aerospace companies for direct manufacturing of components. The USC method dramatically reduces the heat necessary for sintering, which in turn greatly lowers the cost.

This is pretty big news. 3D printing, if brought down to consumer-level prices, would reshape the way we make and use various home and office products. If all you need to make a toy or kitchen device is a fabber, a supply of raw polymer powder, and a design file, how long before we see "Napster Fabbing?" Things get even more revolutionary if the plastics used can be easily recycled to be used for the next bit of 3D printing.

And we're not just talking about dolls, garlic presses, and iPod pouches. Electroactive polymers -- "flexonics" -- allow for electronic circuits to be embedded in fabbed objects. This would make printing out a new individually-fit ergonomic keyboard, for example, just as easy as printing out a coffee cup.

A plastic future may not be so bad...

December 3, 2003

Vega's Planetary System

At 25 light years away, Vega is one of the closer stars in the night sky, and one of the brightest. British astronomers, working at the James Clerk Maxwell telescope in Hawaii, have discovered that it possesses a planetary system which is something of a twin to our own.

Vega's system seems to have a gas giant planet about the size of Neptune orbiting at about the same distance as Neptune does from the Sun. This gives Vega plenty of room for smaller, rocky planets to orbit closer in, in what would be Vega's "habitable zone" -- the "just-right" distance neither too hot (leading to a Venus) nor too cold (leading to a Mars). Just as important, the presence of a gas giant in the outer system means that debris (such as asteroids, comets, and whatnot) tends to get swept up by the larger planet's gravity before it can get deeper into the system, potentially hitting any Earth-like planets.
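
Where would that "just-right" distance actually be? To first order, the habitable-zone distance scales as the square root of the star's luminosity (same sunlight per square meter as Earth gets). A back-of-envelope sketch, assuming the commonly cited figure of roughly 40 solar luminosities for Vega:

```python
import math

# Back-of-envelope: the habitable-zone distance scales roughly as the
# square root of stellar luminosity (matching the flux Earth receives).
# Vega's luminosity (~40x the Sun's) is an approximate, assumed figure.
def habitable_zone_au(luminosity_suns):
    return math.sqrt(luminosity_suns)

print(round(habitable_zone_au(1.0), 1))   # 1.0 -- Earth's orbit, for the Sun
print(round(habitable_zone_au(40.0), 1))  # 6.3 -- rough center of Vega's zone
```

So an "Earth" around Vega would orbit out past where Jupiter sits in our system -- comfortably inside the Neptune-distance gas giant the astronomers found.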


December 10, 2003

Biochips Ahoy

Arizona State University researchers have developed a new model "biochip" -- an all-in-one laboratory on a chip able to detect and analyze microorganisms and chemicals in the field. While such chips have been built before, the ASU design cuts the price and size. The plastic biochip measures 12 by 6 cm and is only 2 mm thick.

Technology Review notes: "The chip performs all the work needed to test from a raw sample like whole blood, including target cell capture using immunomagnetic beads, cell preconcentration, purification and lysis, and DNA multiplication and detection. The researchers' prototype detected a disease-causing E. coli bacteria in a sample of rabbit whole blood in 3.5 hours."

Moving from R&D to the field may take several more years.

Cheap, powerful bio-detection and analysis chips are key to ongoing measurement of environmental conditions and to monitoring biological or chemical hazards. They can also aid in grappling with climate change. The cheaper the chips are, and the easier they are to produce, the more they can be used, thereby letting us understand how environments are -- or aren't -- functioning. As one of the hallmarks of climate change is the increase in unexpected shifts in local and regional ecosystems, the more we can monitor environmental status, the better chance we'll have of reacting effectively.

December 15, 2003

Economist Technology Quarterly

Even if you don't agree with the magazine's politics or its approach to economic theory, The Economist remains one of the better print sources of news and information around. Every quarter they run a special issue going over interesting developments in the world of technology. The current version includes pieces on high-resolution weather forecasting, biometrics, and advances in chemical sensor technology, among others.

Both the December and September Technology Quarterly articles are freely available. (via CyborgDemocracy)

December 17, 2003

Carbon Nanotubes, Yet Again

Add carbon nanotubes to the list of items that we at WorldChanging can't get enough of. Word from the latest Technology Research News is that carbon nanofibers provoke far less tissue resistance when used for medical implants than traditional materials. Less scarring, less rejection -- oh, and a potential ability to connect to neurons, facilitating brain and nervous system implants. Carbon nanotubes are almost certainly a key structural component of a changed world...

December 26, 2003

Memory Via Prions

Despite all of the breakthroughs in biology over recent years, the functioning of the human brain retains significant mysteries. Chief among these is how memory functions: just how does the mind record events, feelings, ideas in a way which allows later recall? Neuroscientists at MIT's Whitehead Institute for Biomedical Research think they've figured out how memories are stored -- but, surprisingly, the mechanism for storage turns out to be prions, the class of proteins considered responsible for neurodegenerative diseases such as mad cow.

Central to a protein's function is its shape, and most proteins maintain only one shape throughout their lifetime. Prions, on the other hand, are proteins that can suddenly alter their shape, or misfold. But more than just misfolding themselves, they influence other proteins of the same type to do the same. In all known cases, the proteins in these misfolded clusters cease their normal function and either die or are deadly to the cell -- and ultimately to the organism.

For this reason, Kausik Si, a postdoc in neuroscientist Eric Kandel's lab, was surprised to find that a protein related to maintaining long-term memory contained certain distinct prion signatures. The protein, CPEB, resides in central-nervous-system synapses, the junctions that connect neurons in the brain. Memories are contained within that intricate network of approximately 1 trillion neurons and their synapses. With experience and learning, new junctions form and others are strengthened. CPEB synthesizes proteins that strengthen such synapses as memories are formed, enabling the synapses to retain those memories over long periods.

This is one of those discoveries that has the potential to open up incredible new vistas. At minimum, better understanding of the role that properly-functioning prions play in biology can help us figure out ways to block or repair badly-behaving prions. Similarly, figuring out the mechanisms of memory could lead us towards cures for memory-attacking diseases such as Alzheimer's (which, in its most severe forms, displays symptoms similar to the effects of CJD, the human form of mad cow). Finally, this moves us further on the path towards unlocking deep brain physiology and really figuring out how the brain and mind work.

January 2, 2004

Quest for the Holy Grail

Witchfinder General (and frequent WorldChanging comment participant) "smerkin" let us know about the current edition of IEEE Spectrum, the house journal for the Institute of Electrical and Electronics Engineers. In it, scientists and science writers provide reports about developments in six different technological realms: Communications, Electric Power, Semiconductors, Transportation, Computers, and Bioengineering. The Spectrum editors chose three reports for each category; one was deemed a "winner," one a "loser," and one a "grail" -- a development so significant that, if/when it comes about, our daily lives will be changed. (As the introductory essay notes, "loser" doesn't mean that the idea is bad, just that, for a variety of technical and social reasons, the editors deemed the idea to be unlikely to bear fruit in the near future, if ever.) All of the developments featured in these articles are in progress, and while some are further along than others, none of them are "blue sky" speculation.

The "winner" articles read like a week's worth of solid "Unlocking the Code" postings on WorldChanging. They include: "analysis engines" able to figure out the meaning of search terms (and, perhaps, provide a way out of the information overload we are now buried in); "smart hybrid" vehicles combining pure-electric motors with hybrid-electric internal combustion engines (making it possible to drive electric-only for short runs, but switching seamlessly to "normal" hybrid-electric for longer trips, with reduction in energy consumption and carbon emissions more than twice that of a current-model Prius); and the Alberta Supernet, a model for extending high-speed broadband throughout public facilities and remote communities.

The "loser" articles range from projects with fatal flaws, such as Microsoft SPOT, to good-but-insufficient responses to big problems, such as carbon sequestration.

The "grail" essays focus on projects with longer timelines, but massive payoffs. Digital long-term preservation of knowledge, self-sustaining fusion, and fiber to the home are each given some attention. These are ideas which may have significant flaws, but their potential is so vast if they are successful that investment (in time, in money, in knowledge) is more than warranted.

If you're interested in what may lie ahead technologically, this is a good place to start.

January 5, 2004

Mars Needs Guitars!

If you've been even an occasional visitor to Blogistan over the past few days, there's no way you could have avoided the abundance of celebratory links about the successful landing of the Mars probe "Spirit" (although I prefer the more dignified official name, "Mars Exploration Rover-A"). Here at WorldChanging, we're certainly ready to do our part to welcome our glorious new Martian overlords. Here are some interesting Mars-related links you may not already have encountered:

  • Maps! -- Want to know more about the neighborhood for good old MER-A (and, in three weeks, MER-B)? Try the MER 2003 Prime Landing Sites page at NASA, which includes massive topographic maps of Gusev Crater (where Spirit now sits) and Terra Meridiani (soon to be home to the second lander, "Opportunity"). Also included are the maps of the now-rejected candidate sites. Perfect for covering your walls!
  • Pictures! -- But not just the current photos. The NASA Photojournal site includes thousands of photos of Mars and every other planet in the solar system (including Pluto, although Pluto may not really be a planet, but a big KBO). Among the pictures from MER-A are the three "descent imager" photos taken by the lander on the way down.
  • Video! -- This is very cool. "Six Minutes of Terror" is a set of interviews cut with high-end computer graphics of the MER during its "Entry, Descent, and Landing" (EDL) phase. This is a Quicktime movie, and totally rocks. I'm serious.

I admit it. I'm a total Areophile (a fancy word for "Mars geek"). I have Mars maps and globes throughout my office, and even worked on a produced-but-never-actually-shown TV show set on Mars. In my view, learning as much as we can about Mars -- and, eventually, going there -- definitely counts as world-changing.

January 20, 2004

10 More Things That Will Change Your World

A while ago, I referred to Technology Review's "10 Emerging Technologies..." article from January 2003, and noted that the 2004 edition would be out real soon now. Well, real soon now is here. This year's list of "10 Emerging Technologies That Will Change Your World" is a heady combination of stuff you've probably already heard about, stuff you've probably never heard of, and stuff you may well already be using. Worth reading, and well worth thinking about the implications.

January 24, 2004

Watch the Skies

Spacewatch is a 24-year-old University of Arizona astronomy program which monitors small bodies in our solar system, such as asteroids, to look for potential targets for interplanetary missions and, not incidentally, to watch for objects which might pose a hazard to the Earth. The Spacewatch telescopes keep a constant vigil, taking multi-minute exposure images of the night sky. Problem is, computer software doesn't work well for evaluating these images to find potential dangers. The human eye does a far better job.

Enter the Fast Moving Object, or FMO, Project. Announced last October, FMO relies on volunteer amateur astronomers and space-buffs to keep an eye out for asteroids which may be on a collision-course with Earth. Last week, for the first time, a volunteer for the program found something -- a 60-to-120-foot diameter asteroid which came within about 1.2 million miles of the Earth on Thursday, the astronomical equivalent of missing the Earth by a hair's breadth. Although the asteroid would have caused no damage had it actually struck, the discovery was a proof-of-concept that using a multitude of volunteers to watch for asteroids could work.
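
For a sense of why 1.2 million miles counts as a hair's breadth in astronomical terms, compare it to the distance to the Moon (roughly 239,000 miles):

```python
# "Hair's breadth" in context: the asteroid missed us by only about
# five Earth-Moon distances.
miss_miles = 1.2e6
lunar_distance_miles = 2.39e5  # approximate mean Earth-Moon distance

print(round(miss_miles / lunar_distance_miles, 1))  # 5.0
```

Five lunar distances sounds like a lot until you remember the scale of the solar system: that's a rounding error on a rounding error.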

All you need to be a part of the FMO Project are good eyes, interest, and a willingness to watch computer images for signs of asteroid motion. Knowledge about astronomy is helpful, but not required. Unlike some other collaborative science efforts, this one can't be done by letting your computer do all the work. This one requires a bit more effort, but the payoff could be enormous -- the more people get involved, the better the chances of spotting something before it's too late.

The big drawback of the current Spacewatch system is that it only has two telescopes scanning the heavens. Right now, that's what's available. But on the horizon may be something a bit more radical...

Modern amateur telescopes, although still made using familiar laws of optics, have increasing computer sophistication. Right now, you can buy for around $300 a low-end amateur telescope able to find objects in the sky once it is told the current time and location; spend more money, and you can get one with a built-in GPS system, so that you don't have to tell it. Furthermore, many amateur scopes, particularly the mid- to high-end ones, are made to work with CCD cameras, so that you don't actually look through the telescope; you watch on a monitor, or even on your computer.

We're not far from the day when we could connect a huge number of amateur efforts into an effective planetary protection system, by collecting the output from amateur scopes over the internet, and using SETI@Home-style distributed computing to grind through the data looking for signs of previously-uncatalogued asteroids. On the input side, you'd have telescopes able to keep track of precisely where in the sky they're pointing, and you could even have networked telescopes running nightly observation routines when not otherwise in use, so as to get maximum coverage; on the analysis side, you'd have a BOINC-based grid of personal computers analyzing the images, comparing the results with known data.
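
At its simplest, the analysis step the grid would grind through is frame differencing: subtract consecutive exposures of the same patch of sky, and the fixed stars cancel out while anything that moved leaves a telltale residue. A toy sketch of that idea (real pipelines must also handle noise, atmospheric blur, and image alignment, none of which this pretends to do):

```python
import numpy as np

# Sketch of the core detection step: subtract consecutive exposures of
# the same patch of sky. Fixed stars cancel; a moving object leaves a
# negative blob at its old position and a positive one at its new one.
frame1 = np.zeros((8, 8))
frame2 = np.zeros((8, 8))
frame1[2, 2] = frame2[2, 2] = 100.0  # a fixed star -- cancels out
frame1[5, 1] = 80.0                  # the asteroid, first exposure
frame2[5, 3] = 80.0                  # ...has moved by the second

diff = frame2 - frame1
moved_to = tuple(int(i) for i in np.unravel_index(np.argmax(diff), diff.shape))
print(moved_to)  # (5, 3)
```

This is also why the job parallelizes so naturally: each sky patch, each pair of frames, is an independent chunk of work -- exactly what a SETI@Home-style grid is good at.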

This would clearly need the cooperation of a lot of people -- are there even enough serious amateur scopes in use to make this possible? -- and (based on the report about FMO) would require a new generation of image-processing software to be able to detect the faint traces of distant asteroids. Still, it seems like an idea which could become reality, and relatively soon. I think SpaceWatch@Home has a good ring to it...

February 8, 2004

Cometary Winter

1,500 years ago, something happened to the world. Crops failed everywhere, there was massive starvation, and the spotty historical record shows frost conditions in the middle of the summer. Now two undergraduate students at Cardiff University, in the UK, think they've figured out why.

A half-kilometer-wide comet may have hit the Earth, exploding in the atmosphere, spreading soot and ash. This would have partially blocked the sun, reducing the amount of light and heat hitting the surface. Tree ring data shows a global reduction in temperatures for the years 536-540; the amount of material in the atmosphere required to cause that degree of temperature drop is nicely explained by a moderate-sized comet, big enough to cause problems but small enough to explode in the air rather than actually impact the ground.
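
How much energy does a half-kilometer comet carry? A rough back-of-envelope estimate, with assumed round numbers for density (comets are porous ice-and-dust mixes) and encounter speed:

```python
import math

# Rough kinetic-energy estimate for a half-kilometer-wide comet.
# Density and velocity are assumed round numbers, not measured values.
radius_m = 250.0
density_kg_m3 = 600.0    # assumed: porous ice/dust mix
velocity_m_s = 30_000.0  # assumed: a typical cometary encounter speed

mass = density_kg_m3 * (4 / 3) * math.pi * radius_m**3
energy_j = 0.5 * mass * velocity_m_s**2
megatons = energy_j / 4.18e15  # 1 megaton of TNT ~= 4.18e15 joules

print(f"{megatons:,.0f} megatons")  # thousands of megatons
```

Even dumped into the upper atmosphere rather than the ground, an energy release on the order of thousands of megatons is more than enough to loft sun-blocking material worldwide.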

If this theory is supported by subsequent research, it will be further evidence that disastrous comet/asteroid strikes are relatively commonplace in our history. A similar-sized hit would exacerbate global climate instability, and the loss of crops resulting from the temperature drop would leave millions starving. It's a wildcard, but nowhere near as unlikely as we'd like to believe. It wouldn't take a technological breakthrough to be able to watch out for dangerous spaceborne objects, just a willingness to do so.

February 10, 2004

A New Kind of Science

Stephen Wolfram is smart. Very, very smart. As a young man, he developed software able to do all sorts of high-level, sophisticated mathematics, and went on to form a company to sell it. Once he got the company up and running, he decided to spend the next few years writing a book on the use of software models for interpreting the way the world works. The resulting tome -- a massive, 700+ page treatise -- was called A New Kind of Science, and it argued that the world around us, from physics and biology to economics and group behavior, could be understood as the complex result of simple elements. The reactions of those who got through it ranged from outright dismissal to full-on epiphany, with quite a few variations on "huh?". The enormous physical size and high price didn't engender high sales, however, so not many people actually had a chance to find out for themselves whether Wolfram is on the right track.

Well, you can now. The entire book, along with commentary, notes, downloadable images, software, and corrections, is now available online. I'd suggest reading chapter 1, skimming chapters 2-6 -- he spends a great deal of time establishing the myriad nuances of software models of complexity -- and reading in detail again from chapter 7 onward. You may not agree with his interpretations of reality, but your brain will still get a workout.
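
Wolfram's central exhibit, the elementary cellular automaton, fits in a few lines of code: each cell's next state depends only on itself and its two neighbors, looked up in an eight-entry rule table. Here's Rule 30, one of his favorite examples of a trivially simple rule producing complex, seemingly random behavior:

```python
# An elementary cellular automaton: each cell's next state depends only
# on itself and its two neighbors, per an 8-entry rule table -- here
# Rule 30, whose binary digits (00011110) encode the table.
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single "on" cell in the middle
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Run it and an intricate, never-quite-repeating triangle grows out of that single cell -- the book's thesis in miniature.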

February 13, 2004


If you've read Neal Stephenson's brilliant novel The Diamond Age, you will certainly remember his description of "toner wars" -- clouds of carbon-based nanoparticles fighting it out as tools of economic or political dominance. Breathing in the microscopic machines wasn't good for you, but that was related to the various nasty things that the overly-aggressive nanoassemblers might do once in your system. In reality, the danger from such a threat may have more to do simply with how small the particles are.

According to Technology Review, a variety of researchers around the world are starting to take a look at the biological effects of nanoscale materials. As buckyballs and carbon nanotubes hold the potential to do so much good, it's imperative to understand the potential downsides of the technologies so as to make reasonable choices and to develop countermeasures. The first bit of research suggests that carbon nanotubes (which we've talked about here a few times) can embed themselves into air sacs in the lungs, leading to toxic effects.

A variety of groups are looking into nanoparticle safety concerns, including the U.S. Food and Drug Administration and Environmental Protection Agency, the Royal Society in the UK, and the Center for Biological and Environmental Nanotechnology at Rice University.

Developing a realistic sense of the environmental dangers of nanoparticles is crucial both for protecting our health and for encouraging the development of world-changing (in the positive sense) technologies. It's far better to learn early what the problems may be -- and have the pace of technology innovation slow in order to allow for corrective measures to be developed -- than to plow ahead at full steam and run into public fear, lawsuits, and over-reaching legislation down the road.

February 14, 2004

Hydrogen from Ethanol

Just a quick note: the current issue of Science reports (non-free subscription required) that researchers at the University of Minnesota have developed a system for converting ethanol into hydrogen cheaply and efficiently. As most techniques for producing hydrogen for fuel cells use fossil fuels as source material, this would be a significant step away from non-renewable energy resources. The press release at the UMN website gives details.

When coupled with a hydrogen fuel cell, the unit - small enough to hold in your hand - could generate one kilowatt of power, almost enough to supply an average home, the researchers said. The technology is poised to remove the major stumbling block to the “hydrogen economy”: no free hydrogen exists, except what is made at high cost from fossil fuels.


The researchers see an early use for their invention in remote areas, where the installation of new power lines is not feasible. People could buy ethanol and use it to power small hydrogen fuel cells in their basements. The process could also be extended to biodiesel fuels, the researchers said.


Ethanol is easy to transport and relatively nontoxic. It is already being produced from corn and used in car engines. But if it were used instead to produce hydrogen for a fuel cell, the whole process would be nearly three times as efficient. That is, a bushel of corn would yield three times as much power if its energy were channeled into hydrogen fuel cells rather than burned along with gasoline.

“We can potentially capture 50 percent of the energy stored in sugar [in corn], whereas converting the sugar to ethanol and burning the ethanol in a car would harvest only 20 percent of the energy in sugar,” said Schmidt. “Ethanol in car engines is burned with 20 percent efficiency, but if you used ethanol to make hydrogen for a fuel cell, you would get 60 percent efficiency.”
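The efficiency claim is easy to sanity-check with the figures Schmidt gives. A minimal sketch in Python, using only the quoted percentages (the normalization to a unit of sugar energy is illustrative):

```python
# Sanity check of the "nearly three times as efficient" claim,
# using the efficiency figures quoted above.
sugar_energy = 1.0            # normalized energy content of the corn sugar

ethanol_combustion = 0.20     # fraction captured burning ethanol in a car engine
hydrogen_fuel_cell = 0.50     # fraction captured via the ethanol-to-hydrogen route

ratio = (sugar_energy * hydrogen_fuel_cell) / (sugar_energy * ethanol_combustion)
print(f"Improvement factor: {ratio:.1f}x")  # 2.5x, i.e. "nearly three times"
```

So "nearly three times" is the 50 percent figure divided by the 20 percent figure, rounded up a bit.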

February 16, 2004

Green Nanotech

I posted the other day about the issue of safety regarding nanoparticles, particularly carbon nanotubes. It's worth noting that there's a good reason why some of the environmental groups looking at nanotechnology (such as Greenpeace (PDF)) have not asked for a moratorium on research. The environmental benefits of advances in molecular nanotechnology could be staggering. The Center for Responsible Nanotechnology blog has a great post summarizing the various ways in which nanotechnology will help protect and repair the global environment.

Environmental degradation is a serious problem with many sources and causes. One of the biggest causes is farming. Greenhouses can greatly reduce water use, land use, runoff, and topsoil loss. Mining is another serious problem. When most structure and function can be built out of carbon and hydrogen by molecular manufacturing, there will be far less use for minerals, and mining operations mostly can be shut down. Manufacturing technologies that pollute can also be scaled back.

In general, improved technology allows operations that pollute to be more compact and contained, and cheap manufacturing allows improvements to be deployed rapidly at low cost. Storable solar energy will reduce ash, soot, hydrocarbon, NOx, and CO2 emissions, as well as oil spills. In most cases, there will be strong economic incentives to adopt newer, more efficient technologies as rapidly as possible. Even in areas that currently do not have a technological infrastructure, self-contained molecular manufacturing will allow the rapid deployment of environment-friendly technology.

Building a green future will not come from the relinquishment of advanced technologies. If such a strategy were ever possible, we have gone too far in our degradation of the planetary ecosystem for it to work now. Repairing the Earth's environment, instead, will require the proper application of intelligently-designed tools and systems.

February 17, 2004

Small is Powerful

A new generation of microtechnologies had its coming-out party this week at the annual meeting of the AAAS (American Association for the Advancement of Science). "MECS" -- Microtechnology-based Energy and Chemical Systems -- have the potential to channel large amounts of liquid or gas through microscopic channels, allowing for heat transfer, chemical reactions, or material identification.

Chemical and biological warfare suits worn in warm climates, such as the Iraqi desert, can become unbearably hot. The solution may be a portable cooling system that weighs just several pounds.

The concept that makes this possible also is leading to miniature sensors for detecting chemical and biological toxins, as well as tiny chemical reactors for hydrogen fuel processing or environmental cleanup.

In a nutshell, small is powerful. These devices send large amounts of liquid or gas through thousands of microchannels that stand roughly as tall as a human hair. In each channel, heat transfer or chemical reactions happen more efficiently than they do in larger spaces, permitting better process control, shorter channel lengths and overall system miniaturization.

"The channels are to microfluidic devices what wires are to microelectronics," said panelist Brian Paul of Oregon State University.
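The "small is powerful" effect described above follows from simple geometry: a channel's wall area per unit of fluid volume scales inversely with its size, so narrower channels mean faster heat transfer and reactions. A rough comparison, assuming a 100 micron channel (roughly hair-width) versus a 1 centimeter conventional tube -- the dimensions are illustrative, not from the article:

```python
# Surface-to-volume ratio of a long square duct scales as 1/side:
# per unit length, wall area / volume = (4 * side) / side**2 = 4 / side.
def surface_to_volume(side_m):
    """Surface-to-volume ratio (1/m) of a long square duct of given side."""
    return 4.0 / side_m

micro = surface_to_volume(100e-6)   # ~100 micron channel (about hair-width)
macro = surface_to_volume(1e-2)     # 1 cm conventional tube
print(f"Microchannel has {micro / macro:.0f}x more wall area per unit volume")
```

A hundred times more wall contact per drop of fluid is why the same chemistry fits in a package you can carry.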

More pieces coming together for a world filled with realtime observation and analysis of environmental conditions. (Via Ken Novak's Weblog)

February 25, 2004

More Power!

In a bit of serendipity, several items about the future of power generation popped up on my radar recently. They nicely demonstrate alternative sources of electricity now, in the near future, and a bit down the road. Quick synopsis: the days of massive generators like the one shown to the right are numbered.

(Read the extended entry for details.)

Continue reading "More Power!" »

February 26, 2004


When I was in London earlier this month, I visited the British Museum. The pieces of ancient civilization and the various plunderings of empire were interesting, but what I really wanted to see was the Rosetta Stone (that's my picture of it at right). The Rosetta Stone, found by Napoleon's troops in Egypt in 1799 and transferred to British control in 1802 as a spoil of war, was a largish piece of basalt covered with an official pronouncement about Pharaoh Ptolemy, written in ancient Greek, demotic, and ancient Egyptian hieroglyphics. That dark gray slab embodies a fascinating mix of anthropology, archaeology, and cryptography. Prior to the discovery of the Rosetta Stone, hieroglyphics were considered indecipherable pictograms; after the Rosetta Stone, hieroglyphics were a window into the workings of ancient Egypt. It's entirely possible that, had the Rosetta Stone never been found, the meaning of hieroglyphics would have been lost forever. (Simon Singh's fascinating text on cryptography, The Code Book, has a good chapter on how the Stone led to figuring out hieroglyphics.)

Linguists and ethnographers estimate that fifty to ninety percent of the planet's 7,000 languages will disappear over the course of this century. Many of them are poorly-documented, at best. Without language archives, scholars of the future will have no way of translating or understanding a dismally large portion of global civilization. The Long Now Foundation, which tries to encourage very long-range thinking about our planet and society, started the Rosetta Project a couple of years ago in order to build an archive of more than 1,000 languages:

We are creating this broad language archive through an open contribution, peer review model similar to the strategy that created the original Oxford English Dictionary. Our goal is an open source "Linux of Linguistics"- an effort of collaborative online scholarship drawing on the expertise and contributions of thousands of academic specialists and native speakers around the world. [...]

The resulting Rosetta archive will be publicly available in three different media: a free and continually growing online archive, a single volume monumental reference book, and an extreme longevity micro-etched disk.

The disk is physically etched with words in 1,000 languages, requiring a high-power optical microscope to read. This is a more survivable format than digital media; there's no risk that the particular reader technology will be lost to obsolescence or market whims. The disk contains 10 categories of linguistic descriptors for every language, including a parallel text (Genesis chapters 1-3, which is apparently the most widely, and carefully, translated text on Earth).

Starting from the premise that "lots of copies keeps stuff safe," the disk will be mass-produced and globally distributed. Actually, very shortly it will be extraterrestrially distributed, as well. A copy of the Rosetta Project disk has been fitted to the ESA's Rosetta comet-chaser probe. As the disk is designed to withstand extreme environmental conditions, its presence on a space probe on a very long orbit around the sun means that the language data it contains will be archived for a very, very long time. (The probe was supposed to launch today, but high winds at the launch site delayed the lift-off for a day.)

The Rosetta Project is more than the disk. The Archive is a regularly-updated online database of languages. It currently contains 1,671 different languages, and the Rosetta Project recently received seed funding to build a database of all documented human languages. The effort to preserve human civilization continues.

In the end, we hope the process of creating a new global Rosetta, as well as the imaginative power of having a 1,000 language archive on a single, aesthetically suggestive object, will help draw attention to the tragedy of language extinction as well as speed the work to preserve what we have left of this critical manifestation of the human intellect.

February 29, 2004

Public Human Genome

The Human Genome Project completed its first draft listing of humankind's genetic blueprint in 2001, but did so in an unusual way. A publicly-funded, international consortium finished a draft blueprint and made the data available via the U.S. National Institutes of Health, while a private concern, Celera, completed an alternative draft -- which varied in some important ways -- but kept the data largely private, only available to corporate subscribers. Fortunately, according to the Genome News Network, that's about to change.

Now the complete Celera sequence, which included publicly available data generated by the HGP, will be in the public domain.

A second Celera human genome sequence, this one created months after the first and never made public, will also be placed in GenBank. The second sequence includes only DNA sequences generated at Celera.


Differences in the sequences are attributable at least in part to the methods used, say the researchers. Celera used the whole-genome shotgun method, while the HGP used a more traditional method as well as the shotgun method.

March 4, 2004


It's one of those assertions that a reasonable person might immediately dismiss -- sound waves can make bubbles in liquid blow up in such a way that they produce temperatures and pressures equivalent to the inside of the sun. But sonoluminescence is a well-known phenomenon (here is an intro to the subject from Lawrence Livermore National Laboratories); since 1934, physicists have known that pulsing low-density sound waves through a liquid medium can cause flashes of light. Now, a group of physicists at Purdue University has concluded that, under the right conditions, pulsing sound through liquid can result in sufficient energy to produce nuclear fusion.

The device is a clear glass canister about the height of two coffee mugs stacked on top of one another. Inside the canister is a liquid called deuterated acetone. The acetone contains a form of hydrogen called deuterium, or heavy hydrogen, which contains one proton and one neutron in its nucleus. Normal hydrogen contains only one proton in its nucleus.

The researchers expose the clear canister of liquid to pulses of neutrons every five milliseconds, or thousandths of a second, causing tiny cavities to form. At the same time, the liquid is bombarded with a specific frequency of ultrasound, which causes the cavities to form into bubbles that are about 60 nanometers - or billionths of a meter - in diameter. The bubbles then expand to a much larger size, about 6,000 microns, or millionths of a meter - large enough to be seen with the unaided eye.

"The process is analogous to stretching a slingshot from Earth to the nearest star, our sun, thereby building up a huge amount of energy when released," Taleyarkhan said.

Within nanoseconds these large bubbles contract with tremendous force, returning to roughly their original size, and release flashes of light in a well-known phenomenon known as sonoluminescence. Because the bubbles grow to such a relatively large size before they implode, their contraction causes extreme temperatures and pressures comparable to those found in the interiors of stars. Researchers estimate that temperatures inside the imploding bubbles reach 10 million degrees Celsius and pressures comparable to 1,000 million earth atmospheres at sea level.
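A quick calculation shows just how dramatic the compression is. Using the figures quoted above (bubbles of 60 nanometers expanding to about 6,000 microns before imploding):

```python
# Expansion ratio of the sonofusion bubbles, from the quoted figures.
initial_d = 60e-9     # 60 nanometers
final_d = 6000e-6     # 6,000 microns (6 mm, visible to the naked eye)

linear_ratio = final_d / initial_d   # diameter grows 100,000-fold
volume_ratio = linear_ratio ** 3     # volume grows by that ratio cubed
print(f"Diameter grows {linear_ratio:.0e}x; volume grows {volume_ratio:.0e}x")
```

A volume swing of fifteen orders of magnitude is what makes Taleyarkhan's "slingshot" analogy apt: an enormous amount of stored energy gets focused into a tiny space on collapse.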

There are still plenty of questions about the discovery, but the paper reporting the work apparently went through a far greater-than-usual checking process at Physical Review E, where it will be published. Unlike Cold Fusion, the sonofusion work seems to have both good data and an explanation for the mechanism that doesn't require rewriting any physical laws. Skeptics are (quite correctly) waiting for other labs to be able to replicate the experiment before celebrating the find.

Assuming the discovery is validated, what does it mean for the world? At minimum, much more work. The sonofusion research is still in the earliest of stages, and requires much more power to produce the effect than is produced -- the so-called "breakeven" level required for fusion energy to be useful. Even if breakeven is achieved, there's no guarantee that it could scale to a point where it would be competitive with other methods.

But what this discovery does do right now is provide us with a friendly reminder that we can't assume that all the tools we'll have for fighting global problems have already been invented. New discoveries, new technological or social innovations add to our response capabilities. While we certainly shouldn't assume that a deus ex machina is going to save us all, neither should we despair that our current abilities are insufficient for the task at hand.

March 8, 2004

Miniature Fuel Cell Power Boost

New Scientist reports that Stanford University researchers have figured out a way to boost the power output of miniature hydrogen fuel cells by up to 50%. The trick is to reduce the size and increase the number of channels leading from the fuel source to the cell's center. Laptop fuel cells which could run for 20 hours with earlier versions can run nearly 30 hours. Researchers hope to replace batteries with fuel cells because of their longer life and fewer toxic components.

Ah, yes, the catch: this only works with hydrogen fuel cells. Methanol mini-fuel cells have been the preferred choice so far, because methanol is easier to handle than hydrogen and packs more power per volume. But methanol produces CO2 as waste, while hydrogen fuel cells produce only water. Environmentally, H2-based mini-fuel cells would be better than methanol ones. This Stanford discovery makes hydrogen minis once again a reasonable alternative... if someone comes up with a good, safe way of distributing the hydrogen for the fuel cells.

This is important not just because having one's laptop battery give out after 3-4 hours is annoying, but because the batteries most often used in portable electronics these days -- lithium-ion -- contain sufficient levels of toxic lithium metal [PDF] that they are largely prohibited from landfills. Given that it's been estimated that over a hundred million mobile phones will be discarded (along with their batteries) in the US in 2005 alone, moving to a portable power source that doesn't threaten to leach metals into groundwater seems wise. If the alternative doesn't add to carbon emissions, all the better.

March 14, 2004

Welcome, Sedna

Continuing with my space-themed weekend, I want to give a warm WorldChanging welcome to Sedna, our solar system's 10th planet. Probably. We'll know more tomorrow, when NASA has a press conference about it.

Discovered last November using Caltech's Palomar telescope on Earth, and just confirmed by the Spitzer Space Telescope, Sedna is a Kuiper Belt Object (KBO) -- one of the ice and rock bodies out past Neptune. Several large KBOs have been discovered over the past few years, but none as large as Pluto, itself a KBO but generally considered a planet as well. Sedna appears to be roughly as big as Pluto, possibly even a bit bigger, though its orbit is far more elongated. If Pluto's a planet, then Sedna is, too.

Sedna is the Inuit goddess of the ocean -- perfect for the deep black sea of space.

Here is a press release from Caltech with a bunch more information, and here is NASA's information page, which includes the first pictures taken of Sedna:

"Sedna" will become closer and brighter over the next 72 years before it begins its 10,500-year trip to the far reaches of the solar system and back again. "The last time "Sedna" was this close to the Sun, Earth was just coming out of the last ice age; the next time it comes back, the world might again be a completely different place," said Brown.

Makes you wonder what else there is out there in the deepest reaches of the solar system.


March 15, 2004

100-Meter Nanotube Pull

No, it's not a new sport, it's the new record for a length of carbon nanotube. Given that the previous best length was around 30 centimeters, this is a bit of an improvement. The process sounds oddly familiar:

The carbon nanotubes are made by injecting ethanol into a fast-flowing stream of hydrogen gas. The gas carries the carbon-containing molecules into the centre of a furnace where temperatures soar above 1000° C.

The high temperature breaks the ethanol down and the carbon atoms reassemble into nanotubes, each about a micron in length. These float in the stream of hydrogen, loosely linked to each other in what Windle describes as an "elastic smoke".

When a rod is poked into this amorphous cloud, it catches a few nanotubes. Rotating the rod pulls on these, which in turn pull on their neighbours, dragging out a continuous thread of closely-aligned nanotubes. This wraps around the rod at a rate of centimetres per second.

It is similar to spinning wool, Windle told New Scientist: "You have this ball of entangled wool and you put a needle in to pull out the threads".

Spinning wool? Maybe. But it sounds more to me like making cotton candy.

But don't start planning your space elevator trip just yet; the nanotubes created by this method are nowhere near as tough or conductive as traditional carbon buckytubes. Still, it's a good step towards making this nanoscale material more usable in the macro world.

Image of traditional nanotubes from University of Basel Nanoscale Science Center

March 25, 2004

Magnetic Nano-foam

The various forms of carbon (diamond, graphite, buckyball, and the mighty nanotube) now welcome a new sibling: nanofoam. According to PhysicsWeb:

Physicists in Greece, Australia and Russia have made a new form of carbon that has the lowest density ever reported for a solid - just 2 milligrams per cubic centimetre. The material is a nano-foam of carbon clusters and is the first form of pure carbon to display ferromagnetism, albeit temporary, at room temperature.

Welcome to the family!

April 10, 2004

30 Second Recharge

One of the problems with the use of rechargeable batteries, particularly batteries used as a replacement for liquid-fuel systems (such as in cars), is that they take a while to recharge. And while current-generation nickel-metal-hydride and lithium-ion batteries are much better than older nickel-cadmium batteries, they can still take more time to recharge than one might want. But this may soon change:

NEC Corp has developed a battery that can be recharged in only 30 seconds, company sources said. Called an organic radical battery, it can be recharged to the same level of power as that stored in nickel-hydrogen cells, which are widely used in digital cameras, portable MD players and other electronic devices.

It takes only about 30 seconds to recharge the battery enough to allow 80 hours of continuous operation of an MD player, compared with around an hour needed by conventional rechargeables, the company claims.

The cost, once production ramps up, should be about the same as a current NiMH battery. This technology should be of particular interest to manufacturers of hybrid cars, which currently use NiMH batteries. Here's why:

Hybrid car batteries are recharged via regenerative braking, vehicle momentum, and directly from the gas engine; since it normally happens just in the course of regular driving, the driver doesn't notice how long it takes for the batteries to top back up. (In less-common circumstances, such as driving over mountains, the fact that the batteries don't charge as quickly as one would like can be a bit more disconcerting.) The time it takes for the batteries to be recharged under normal driving conditions is a function of battery recharge speed and the amount of power returned via braking, etc.; logically, this is a key engineering factor when determining how much battery power can be used to replace or assist the gasoline engine. A shift to faster-recharging batteries could then make it possible to use the batteries more often, thereby making the overall mileage and emissions of hybrids even better.
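To get a feel for why charge-acceptance rate matters, consider a toy calculation. All of the numbers here are illustrative assumptions, not NEC's or any automaker's figures: suppose a hard braking event recovers on the order of 100 kilojoules, and compare how long it takes to bank that energy at two different charge rates.

```python
# Toy model: time to store regenerative-braking energy at different
# charge-acceptance rates. All figures are illustrative assumptions.
braking_energy_kj = 100.0      # energy recovered in one hard braking event

def time_to_store(charge_rate_kw):
    """Seconds needed to absorb the braking energy at a given charge rate."""
    return braking_energy_kj / charge_rate_kw

slow = time_to_store(20.0)     # a conventional pack's acceptance rate (assumed)
fast = time_to_store(200.0)    # a hypothetical fast-charging pack
print(f"Conventional: {slow:.1f} s; fast-charge: {fast:.1f} s")
```

If the braking event itself only lasts a few seconds, the slow pack simply can't absorb everything the brakes recover; a pack that charges ten times faster wastes far less of it, which is the mileage win described above.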

April 18, 2004

Earthquake Forecasts

Having lived my whole life in California, I've never been particularly frightened of earthquakes. Appropriately concerned, of course, and certainly alarmed during one, but not terrified. This is undoubtedly due to the bolt-from-the-blue nature of quakes; you don't know when the big one is going to hit, so there's no use worrying about it -- just stock up on water, canned food, and blankets, and be ready to deal with it when it happens. Which could be today... or a century from now.

If Vladimir Keilis-Borok at UCLA's Institute of Geophysics and Planetary Physics is right, though, such carefree days of "if it happens, it happens" may be coming to a close. Keilis-Borok, whose team uses a "combination of pattern recognition, geodynamics, seismology, chaos theory and statistical physics" to predict earthquakes, asserts that there's a 50-50 chance of a major earthquake hitting a stretch of east-central California by September 5. Residents in the (potentially) affected area should stock up on water and canned food -- the group successfully predicted two previous quakes last year using this method, an 8.1 quake in Japan and the 6.5 near San Simeon in California in December.

If the methodology bears out -- and Keilis-Borok emphasizes that two good predictions are not enough to justify calling the technique a success -- then we may be moving into an era when regions can be given alerts months in advance of high likelihood of quake activity. Even without being able to pinpoint the exact date and epicenter, a prediction such as that given for the Mojave region would be enough to save lives, as it makes people more apt to build up earthquake supplies and to have their residences checked for structural soundness.

So far, the predictions have been for quakes in areas with relatively low populations -- central California or off the coast of Hokkaido. If the technique works, what happens when a prediction is made for a densely-populated area? A six to nine month prediction window is close enough to trigger prompt action, but not so close as to cause panic. Would people evacuate, or ride it out? At the very least, such forecasts would give governments and groups like the Red Cross/Red Crescent time to build up emergency response materials. Being able to predict earthquakes would be enormously valuable.

So if you hear about an earthquake in the Mojave Desert some time over the next couple of months, remember that it's a good sign that the world has changed.

April 23, 2004

Entangled Photon Cryptography

One of the weirder elements of quantum physics (and that's saying something) is particle "entanglement:" two particles (usually photons or electrons, but research suggests larger particles may be entangled, too) are linked in a way that means that any disturbance done to one affects the other, instantly, no matter how far away it is. Einstein called this "spooky action at a distance," and we're now starting to see the possibility of practical applications for it. New Scientist reports that quantum entanglement is the basis of a new cryptography method developed by the University of Vienna and the Austrian company ARC Seibersdorf Research. The use of entanglement means that cryptographic key communication can be guaranteed secure, even over completely unsecured lines:

When these [entangled] photons arrived at their destination, their state of polarisation was observed. This provided both ends of the link with the same data, either a one or a zero. In this way, it is possible to build a cryptographic key with which to secure the full financial transaction.

Quantum entanglement ensures the security of communications because any attempt to intercept the photons in transit to determine the key would be immediately obvious to those monitoring the state of the other photons in each pair.

And because the resulting key is random it can be used to provide a completely secure link even over an unprotected communications channel, provided a new key is used each time.
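The key-building process described above can be sketched as a toy simulation. To be clear, this is not the Vienna group's actual protocol, just an illustration of the core idea: measurements on entangled pairs give both ends the same random bit, and an eavesdropper's measurements disturb that correlation in a detectable way.

```python
import random

def exchange_key(n_pairs, eavesdropper=False, seed=0):
    """Toy model of entanglement-based key exchange.

    Each entangled pair yields the same random bit at both ends. An
    eavesdropper's intercept-and-resend measurement breaks the
    correlation about half the time (a simplification), which the
    parties can detect by comparing a sample of their bits.
    """
    rng = random.Random(seed)
    alice, bob = [], []
    for _ in range(n_pairs):
        bit = rng.randint(0, 1)          # shared outcome from the pair
        a, b = bit, bit
        if eavesdropper and rng.random() < 0.5:
            b ^= 1                       # disturbance flips Bob's bit
        alice.append(a)
        bob.append(b)
    error_rate = sum(x != y for x, y in zip(alice, bob)) / n_pairs
    return alice, bob, error_rate

_, _, clean = exchange_key(1000)
_, _, tapped = exchange_key(1000, eavesdropper=True)
print(f"Error rate without tap: {clean:.2f}; with tap: {tapped:.2f}")
```

With no eavesdropper the two bit strings match perfectly and can serve as a one-time key; with one, the error rate jumps to around one half, and the parties discard the key and start over.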

Cryptography -- which underlies all electronic commerce, and allows for private conversations -- is often compared to an arms race. Sometimes code creators develop systems that code breakers can't defeat, and sometimes code breakers develop methods of cracking cyphers that were previously thought untouchable. The rise of quantum cryptographic methods suggests we may be moving into an era where encryption is dominant over code-breaking -- a win for privacy, to be sure, but raising questions about our ability to enforce corporate and government transparency regulations.

April 26, 2004

3D Printing Improved

3D printing, also known as 3D fabrication, "fabbing," and "stereolithography," is high on my list of potentially ground-shaking technologies -- emphasis on the "potentially." It's been around in various forms for a while now, but the steady pace of improvement hasn't quite matched the intensity of the excitement around the concept in certain circles. Nonetheless, since the systems do continue to get faster, cheaper, and more precise, a bit of excitement is warranted.

The latest improvement in 3D printing comes from the University of Illinois:

University of Illinois researchers have come up with a new type of quick-setting three-dimensional ink that works a bit like a microscopic tube of toothpaste. The researchers' printer robotically deposits a continuous, elastic-like ink filament into a liquid rather than putting ink drops onto a surface.

The filament hardens in the liquid rapidly enough to allow for printing three-dimensional structures that have features like unsupported spanning elements. The process yields complete three-dimensional structures in about five minutes, and provides resolutions that are close to two orders of magnitude finer than existing methods, according to the researchers.

One of the first uses of this technology is likely to be "bioscaffolding," the creation of 3D frameworks for tissue engineering.

(We've mentioned 3D printing before, but here's a quick summary: using a system akin to an ink-jet printer, 3D layers are deposited -- or, with this new technique, extruded -- building up complex objects; given the right base materials, design software, and cost, a wide array of goods could be printed as needed, rather than purchased ready-made from a retailer.)

The Amazing Spider-Van-Der-Waals Forces

How does a spider stick to the ceiling? According to scientists at the Institute for Technical Zoology and Bionics in Germany, along with a colleague in Switzerland, the spider's secret is all about the van der Waals force, a kind of interaction between individual molecules within a nanometer of each other. Using this simple molecular interaction, spiders can hold up to 170 times their own weight. But there's nothing unique to spiders about this form of adhesion; humans could design objects which take advantage of this molecular-scale force. And it's the potential for biomimicry that makes this discovery particularly compelling.

While the news headlines regarding this research refer to making better sticky notes, the implications are much greater. Van der Waals-based adhesion is not affected by changes in the surrounding environment -- get it wet, expose it to sunlight, get it greasy, and it remains tightly bound. One application that immediately comes to mind for me is remote sensor placement: being able to attach sensing devices to walls, fences, even trees without worrying about rain or heat adversely affecting the adhesion seems clearly valuable. More prosaic applications, such as "tape" that can adhere nearly anywhere and doesn't leave residue when removed, also come to mind.

The Eurekalert article gives a good explanation of how this all works, and includes links to six different very high resolution scanning electron microscope images of the foot of the jumping spider species used in the research. The actual article, in the journal Smart Materials and Structures, is also available for the next month.

April 30, 2004

Nanocomputers on the Horizon

Oh, carbon nanotube, is there nothing you cannot do? Probably, but it's time to add another item to the list of carbon nanotube uses. Researchers at UC Irvine have figured out how to construct a high speed nanotransistor using a single-walled carbon nanotube. This is a good step towards nanoscale computers. Moreover, this early-stage work suggests that carbon nanotube-based computers would be able to operate significantly faster than current-generation silicon-based chips, and perhaps faster than the maximum possible speed for silicon technology.

Although Burke's group demonstrated that nanotube transistors could work in the GHz range, he believes that much faster speeds are possible. "I estimate that the theoretical speed limit for these nanotube transistors should be terahertz [1 THz=1,000 GHz], which is about 1,000 times faster than modern computer speeds." His team is currently doing related research on the theoretical prediction of the cutoff frequency, or so-called speed limit, for these transistors.

Every transistor has a cutoff frequency, which is the maximum speed at which it can operate. For silicon, the cutoff is about 100 GHz, but current circuits typically operate at much slower speeds, according to Burke.
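For the curious, a transistor's cutoff frequency can be roughly estimated as f_T ≈ g_m / (2πC_g): transconductance divided by gate capacitance. Plugging in ballpark values of the sort reported for single-walled nanotube devices -- both numbers below are illustrative assumptions, not Burke's measurements -- puts the estimate in terahertz territory:

```python
import math

# Rough cutoff-frequency estimate: f_T ~ g_m / (2 * pi * C_g).
# Both values are order-of-magnitude illustrations, not measured figures.
g_m = 20e-6     # transconductance: ~20 microsiemens for a good nanotube FET
c_g = 4e-18     # gate capacitance: a few attofarads for a short nanotube

f_t_hz = g_m / (2 * math.pi * c_g)
print(f"Estimated cutoff: {f_t_hz / 1e12:.1f} THz")  # on the order of 1 THz
```

The key driver is the absurdly small gate capacitance of a nanometer-wide wire; even a modest transconductance divided by attofarads lands in the terahertz range.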

The usual caveats -- it's early work, it will take a couple of years to come to fruition, they may run into unexpected problems, etc. -- apply. But this is another good indicator that the acceleration of information technology is nowhere close to reaching its limits.

May 18, 2004

Three Months on a Red Planet

What's it like to spend three months on another world? Ask the Mars rovers. NASA has created movies of the activities of both Spirit and Opportunity entitled "90 Sols in 90 Seconds" -- a "sol" being a planet's rotation period (planetary scientists use the term to distinguish another planet's rotation period from a "day," the 24-hour Earth rotation period). The movies -- each a 5 megabyte .mov -- are frantic black & white recaps from the main rover cameras.

I know I won't be getting to Mars any time soon, so clips like this give a fun "you are there" sensation. Some WC readers disagree with me on this, but I strongly believe that extra-planetary exploration is a useful part of a greater understanding of planetary evolution, and ultimately helps us figure out how to prevent human activities from kicking off unrecoverable non-linear environmental changes. Mars may be red, but the study of it is a nice subtle shade of green.

May 25, 2004

Solar Nanotech

Advances in solar cell technology just keep coming (see previous stories here, here, and here, among others). According to Technology Research News, a group of Los Alamos National Laboratory researchers has figured out how to employ nanotechnological processes to improve solar cell energy production by up to 37 percent. By using lead selenide nanocrystals (measuring about 100 hydrogen atoms across), the LANL scientists could trigger a process called "impact ionization," which lets a single photon free two electrons instead of one. Solar cells using lead selenide nanocrystals would have a potential conversion efficiency of 60 percent; most commercial solar cells max out at around 35 percent.
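As a rough sense of what those efficiency figures mean in practice, here's the arithmetic, assuming the standard solar test insolation of 1,000 watts per square meter (my assumption, not a figure from the article):

```python
# Rough arithmetic on the quoted efficiency figures, assuming the
# standard solar test insolation of 1,000 W/m^2 (my assumption, not a
# figure from the article).
INSOLATION = 1000.0  # W per square meter

def panel_output(efficiency, area_m2=1.0):
    """Electrical output, in watts, of a panel at the given efficiency."""
    return INSOLATION * efficiency * area_m2

conventional = panel_output(0.35)  # ~35% commercial ceiling
nanocrystal = panel_output(0.60)   # ~60% potential with impact ionization

print(round(conventional))  # 350 watts per square meter
print(round(nanocrystal))   # 600 watts per square meter
```

Same sunlight, same panel area, roughly 1.7 times the electricity.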

The LANL report, which is to appear in Physical Review Letters, suggests that cells using this technology could be practical in a couple of years. No word on cost or engineering difficulties yet, of course. I'm particularly curious as to whether this technique could be combined with the band-gap improvements developed by the UC/MIT/LLNL team last month.

June 2, 2004

Lab On A Chip

Speaking of sensors, what do you do if you want your autonomous probe network to be able to monitor something other than salinity, temperature, or humidity? Monitor something biological, perhaps? NASA has something for you, a lab-on-a-chip designed to work in extreme environments on Earth, and eventually to go to Mars:

NASA researchers are developing complex, portable microarray diagnostic chips to test for all the genes and DNA responsible for determining the traits of a particular organism, detect specific types of organisms, or use biosensor-like probes such as antibodies to detect molecules of interest. By applying this technology in laboratories and in the field where organisms live in extreme environments on Earth, astrobiologists can compare Earth-life with that which may be found on other planets.

"The micro array chip system developed to go to Mars will be lightweight, portable and capable of detecting organic molecules," says Dr. Lisa Monaco, the project scientist for the Lab-on-a-Chip Applications Development program.  "This instrumentation can easily be adapted for monitoring crew health and their environment."

Such biosensing technology could have broad applications in agriculture, environmental research, healthcare, and (in particular) security, with the ability to detect trace amounts of pathogens. Not to mention helping us figure out if Mars actually does have life on it...

June 15, 2004


Horsehead Nebula, from Rent-A-Scope

I've written about what might happen when digital-imaging telescopes get connected to the Internet. With the advent of good CCD-based cameras and digital motion controls, hooking a telescope up to the net is a fairly straightforward task. But while I mused about a peer-to-peer network of digital scopes, I didn't think about a different model: the telescope as mainframe.

Arnie Rosner, however, saw the possibilities inherent in connecting a big, expensive telescope to the net. Four telescopes, in fact. In the dark skies of New Mexico, all available for rent over the web. Once you schedule your time, all you need to do is enter in the name of what you want to see and the telescope slews to it. You can take long-exposure CCD images, just sit back and watch, or even hop from object to object.

These telescopes are far beyond what an interested hobbyist would be able to use, beyond what even some academic astronomers can regularly get their hands on. And even if you had the resources, living somewhere without clear, dark skies would put images like these permanently out of reach. In years past, only a small number of people could control telescopes like these; now, with the web, anyone can. If you would love to do deep-sky photography, or even just gaze upon "live" (if speed-of-light-delayed) images of stellar and galactic objects, you can now do so without spending tens of thousands of dollars.

While seeing the colors of a star-cradling nebula or the fossil light from galaxies which died out long before humans ever evolved on Earth may not grant you insights into how to change the world, it is a humbling experience, one we all should have.

(Image from Rent-a-Scope)

June 28, 2004

Evolution in Action

Evolution is a pretty amazing process. The combination of internal change (mutation) and environmental pressure (fitness) can have pretty dramatic results, given enough time. And when you do it in a computer, "enough time" can be surprisingly brief.

Evolutionary design is a computerized creative process which relies on the same notions of natural selection and mutation that underlie biological evolution. Take a large number of individuals, each slightly different. Introduce some mutation, either by randomizing small bits or mixing elements from individuals (the electronic version of sexual reproduction). Check the resulting generation against the goal -- how well do the various designs accomplish the needed task? Get rid of some of the designs that do very poorly, add more of the designs that do fairly well. Now repeat the process. Many thousands of times.
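The loop described above is easy to sketch in code. Here's a minimal, self-contained toy -- not NASA's system; the "design goal" is just a bitstring of all 1s, the classic OneMax problem -- showing mutation, recombination, fitness scoring, and culling:

```python
import random

# Minimal sketch of the evolutionary loop: mutate, recombine, score
# against a goal, cull the worst, repeat. The toy "design goal" here is
# a bitstring of all 1s (the OneMax problem).
GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02

def random_individual():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(ind):
    # How well does this "design" accomplish the task? Here: count of 1s.
    return sum(ind)

def crossover(a, b):
    # Electronic sexual reproduction: splice two parents at a random point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(ind):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in ind]

def evolve(generations=200):
    pop = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]  # cull the worst-performing half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward 32 as the population converges
```

A few hundred generations of a fifty-member population takes a fraction of a second on a desktop machine -- "enough time" really is surprisingly brief.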

We've talked about it here in brief, and it's one of the more powerful techniques underlying the biomimicry concept: you're not just copying nature's results, you're copying how nature comes by its results.

NASA's 2004 conference on evolvable hardware just finished up, and it turns out that NASA is doing some of the most interesting work around with evolutionary design. The Evolvable Systems Group researches techniques for engineering hardware for NASA missions without explicit blueprints. Antenna design consumes a great deal of their attention, as some of the interactions between components in the antenna frame and with the spacecraft itself can be very difficult to model. The antennas that the evolutionary designs come up with often don't really look like traditional devices (see above), but that's okay: it's how they work that counts.

(Antennas have been the focus for evolutionary design researchers for awhile now; Derek Linden, who did some work with NASA on this prior to the ESG's efforts, was working on antennas back in 1997 (PDF), and has suggested that the evolved antenna he patented in 1999 may have been the first patent issued to a nonhuman designer.)

As funky as these antenna designs are, I think that the most intriguing research the ESG is now doing is with coevolutionary algorithms. (The ESG site links to a paper on coevolutionary design the group did for the 2002 IEEE Congress on Evolutionary Computation here -- PDF. It's heavy going, but interesting.) Whereas traditional evolutionary design is a strictly Darwinian, competitive process, coevolutionary design integrates cooperative aspects as well, making for a richer, more complex evolutionary environment -- and one which better mimics reality.

June 29, 2004

Saturn Awaits

Tomorrow, at 7:36pm PDT (10:36pm EDT), the Cassini-Huygens probe, a joint NASA/ESA/ASI mission, will make its orbital insertion burn to slow down and enter Saturn's orbit. Launched October 15, 1997, it's the most expensive unmanned mission yet -- and probably the last of its kind for awhile. If all goes smoothly (and, so far, all has gone smoothly with Cassini-Huygens), the craft will spend the next four years studying Saturn, orbiting around the planet more than 70 times. The pictures (and the science) should be incredible.

But the real excitement will come in January of next year, when the Huygens probe separates from Cassini, dropping into the thick, cold atmosphere of Titan, the second largest moon in the solar system. Nobody is entirely certain what Huygens will find; Titan's atmosphere is actually thicker than Earth's, an opaque soup of nitrogen, methane, and other fairly unpleasant gases. The ESA developers don't actually expect Huygens to survive the landing, but are hoping to get abundant data from the six different instruments on the probe.

Contact with Cassini-Huygens will be cut off, briefly, as it passes through the outer edges of Saturn's ring system, by far the riskiest part of the whole mission. The probe will send back its "here's how things went" data around midnight, California time. The first pictures should be received around 7:30 in the morning PDT on Wednesday.

Among the mysteries the joint research teams hope to solve: why does Saturn appear to be rotating six minutes slower than it did when Voyager 1 & 2 flew past in 1980 and 1981? (That story also includes a link to an audio file of Saturn's radio signal...)

July 1, 2004

Catching Up (Science & Technology Edition)

I've been pretty busy lately, but the WC suggestions box still keeps pinging me and my RSS feeds keep pointing me towards new and interesting stuff. Rather than continue to let them pile up, I'm going to do a few QuickChange-style entries collected by category.

  • Cassini made it! The school bus-sized probe is now in orbit around Saturn, having successfully slipped through the gap between the F and G rings and fired its main engine to slow to orbital speed. NASA, JPL, and the ESA have really embraced the web as a means of distributing space science data: rather than wait for one or two photos to show up in the newspapers, you can browse the raw image feed directly.

  • SciScoop points us to a Stanford University press release detailing the development of an implantable chip which could serve as a prosthetic retina and as a drug-delivery system for neurological illnesses. One developer describes it as "almost like an ink-jet printer for the eye" -- able to do controlled releases of neurotransmitters using electro-osmosis. Researchers caution that (as you should expect) this is still a few years away from actual human testing.

  • The Center for Responsible Nanotechnology has been exploring thirty essential studies which should be undertaken before we actually manage to develop molecular manufacturing nanotechnology. The most recent entry -- "Nanotech Arms Races" -- is sobering. Molecular nanotechnology has the potential to be incredibly destabilizing militarily and politically; it's quite possible that, as the development of the technology becomes imminent, nations will race to be the first to get it -- or to stop their more advanced adversaries from getting there first.

  • Open Access News points us towards an article in Nature Medicine arguing for the expansion of "biobanks" -- networks of genetic and population data -- to give medical researchers better tools for studying and reacting to outbreaks of human diseases. Ironically (given the link is from Open Access News), the actual Nature Medicine article is restricted to NM subscribers. They do make an abstract (which says less than the OAN brief) available, as well as a table of links to existing biobank networks.
    July 6, 2004

    Elevator Update

    Jon wrote on June 26th about space elevators, sometimes referred to as "beanstalks," and increasingly -- with an eye to marketing -- referred to as "space bridges." He mentioned the Third International Conference on Space Elevators, which ran from June 28th through June 30th. The meeting is now over, and while we wait for the official archive of the presentations, we're starting to get a bit of information about what went on. Blaise Gassend, a presenter at the conference, kept notes during nearly all of the presentations; these notes are now available on his website. His own presentations are also available on the site: Exponential Tethers for Accelerated Space Elevator Deployment (PDF) and Non-Equatorial Space Elevators (PDF). This last one is particularly intriguing, as it runs against what amounts to the conventional wisdom about beanstalks -- that they need to come up from the equator.

    Other relevant space bridge/elevator pages include: Andrew Price (who also spoke at the 3rd International); the Gizmonics Space Elevator page, which discusses some of the physics involved, and points to the as-yet-content-free "Elevator 2010" competition; NASA's Centennial Challenge, which looks to non-traditional sources of innovation and ideas for space exploration; and LiftPort Group, a Bremerton, Washington-based corporation planning to build an elevator, and which includes a "countdown to Lift: April 12, 2018" clock, showing their belief that we have a bit more than 5,000 days before a space elevator is up and running.

    July 12, 2004

    Curing Cancer

    Cancer haunts us: its ability to manifest in seemingly-healthy tissue without any evident provocation; the speed with which it can hit -- and the years over which it can linger; the knowledge that, whatever the trigger, it is ultimately the body turning against itself, going mad at a cellular level. While other diseases have emerged with equally (or more) devastating consequences for their victims, cancer holds a deeply-rooted place in the human imagination. A "cure for cancer" stands alongside "living in space" and "thinking machines" as a key symbol of what The Future will hold.

    That part of the future may well be much closer than any of us had dared hope.

    Researchers at Rice University, along with a company called Nanospectra Biosciences, have determined that gold-covered nanoparticles, 20 times smaller than a red blood cell, will quickly pool in tumors when injected into the bloodstream. When illuminated with a near-infrared laser (which otherwise passes harmlessly through living tissue), the nanoshells heat up enough to destroy the tumors -- completely, in every test so far.

    The report of this research was in Cancer Letters (vol. 209, issue 2) in late June. The full report is only available to subscribers, but the abstract tells the story (albeit with considerable jargon). An excerpt:

    The following study examines the feasibility of nanoshell-assisted photo-thermal therapy (NAPT). [...] Polyethylene glycol (PEG) coated nanoshells (~130 nm diameter) with peak optical absorption in the NIR [Near-InfraRed] were intravenously injected and allowed to circulate for 6 h. Tumors were then illuminated with a diode laser (808 nm, 4 W/cm2, 3 min). All such treated tumors abated and treated mice appeared healthy and tumor free >90 days later. [...] This simple, non-invasive procedure shows great promise as a technique for selective photo-thermal tumor ablation.
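For a sense of scale, the treatment parameters in the abstract translate to a modest total energy dose:

```python
# The energy dose implied by the abstract's treatment parameters:
# 4 W/cm^2 of near-infrared illumination for 3 minutes.
power_density = 4.0    # watts per square centimeter
duration_s = 3 * 60    # three minutes, in seconds

dose = power_density * duration_s
print(dose)  # 720.0 joules per square centimeter at the illuminated spot
```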

    The researchers on this project were D. Patrick O'Neal, Leon R. Hirsch, Naomi J. Halas, J. Donald Payne, and Jennifer L. West.

    The Cancer Letters report wasn't the first time this group showed that the nanoshell + NIR laser combination could kill cancer, but this was the first time they've demonstrated that the nanoshells could be administered into the bloodstream, not requiring a direct injection into the tumor. Previous reports (available online) discuss direct-injection nanoshell-mediated treatment (PDF) and the use of nanoshells as an enabler for tumor detection (PDF). Another article from this group, this time including magnetic resonance guidance of the particles, was published on PubMed late last year; a write-up about that research appeared on the website HealthScout.

    To answer the obvious follow-up question, the nanoshells (which, in every test since they were developed in the 1990s, appear completely non-toxic) are eventually cleared from the bloodstream, and do not accumulate over time. Information about how the nanoshells work and how the cancer treatment functions from a tech perspective can be found on the Nanospectra website.

    From a WC perspective, perhaps the most exciting part of this development is the relative simplicity of the operation. It's non-invasive, does not require the use of elaborate and expensive equipment (relatively speaking), and given the time involved (6 hours of nanoshell circulation, 3 minutes of laser illumination), can be taken care of in a single day's visit to a medical technician. The only difficult part will be the initial identification of tumors, and the nanoshell method appears to have application in that regard, as well. If the treatment works in humans as well as it does in mice (and on human cancer cells in vitro), dealing with cancer could be nearly as simple as getting laser vision correction -- and you can do that in some shopping malls. This is not a medical treatment that would be limited by difficulty and equipment expense to the richest nations.

    If it works in people as well as it does in mice, that is, and we won't know for a while. Human trials are not yet scheduled.

    July 26, 2004

    Cooler Fuel Cells on the Horizon

    Gizmodo links to a press release from the University of Houston about breakthroughs in "cool" thin-film solid oxide fuel cells (SOFCs). TF-SOFCs combine lower cost than traditional fuel cells, smaller size, and lower temperatures (hence the "cool" appellation). The press release spells out some of the implications:

    Compared to the macroscopic size of traditional fuel cells that can take up an entire room, thin film SOFCs are one micron thick – the equivalent of about one-hundredth of a human hair. Putting this into perspective, the size equivalent of four sugar cubes would produce 80 watts – more than enough to operate a laptop computer, eliminating clunky batteries and giving you hours more juice in your laptop. By the same token, approximately two cans' worth of soda would produce more than five kilowatts, enough to power a typical household.

    Keeping in mind that one thin film SOFC is just a fraction of the size of a human hair with an output of 0.8 to 0.9 Volts, a stack of 100 to 120 of these fuel cells would generate about 100 volts. When connected to a homeowner's natural gas line, the stack would provide the needed electrical energy to run the household at an efficiency of approximately 65 percent. This would be a twofold increase over power plants today, as they operate at 30 to 35 percent efficiency. Stand-alone household fuel cell units could form the basis for a new 'distributed power' system. In this concept, energy not used by the household would be fed back into a main grid, resulting in a credit to the user's account, while overages would similarly receive extra energy from that grid and be charged accordingly.
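The stack arithmetic in the quoted passage checks out:

```python
# Checking the stack arithmetic from the press release: cells producing
# 0.8-0.9 V each, stacked 100-120 deep in series.
def stack_voltage(cells, volts_per_cell):
    return cells * volts_per_cell

low = stack_voltage(100, 0.8)   # smallest stack: 80 V
high = stack_voltage(120, 0.9)  # largest stack: 108 V
print(round(low), round(high))  # brackets the quoted "about 100 volts"
```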

    Sounds great. There are a few potential downsides, however. Firstly, this is still in the "works in the lab" phase -- let's keep our eyes open for announcements of actual application development. Secondly, the definition of "cool" is rather contextual -- normal fuel cells operate at 900-1,000° C, while these operate at a mere 450-500° C. My current laptop battery gets hot enough, thank you. Thirdly, as shown in the SOFC illustration at the top of this entry (from this 2003 NASA article discussing TF SOFC applications), one of the outputs from the fuel cell energy production process is our old friend CO2, at least when natural gas is used as the source fuel. None of these are deal-breakers, but they help us remain realistic about the system's prospects.

    August 12, 2004

    Japan Tests Solar Sail

    Japan's Institute of Space and Astronautical Science has successfully completed the first in-space deployment of an experimental solar sail. A solar sail uses the pressure of photons streaming from the sun to slowly accelerate, allowing for travel within the solar system without the need to carry on-board fuel. Although the acceleration is gradual, it's also relentless; a fully functional solar sail should be able to achieve speeds far greater than any current rocket system.
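To see why gradual-but-relentless wins, here's a worked example with a hypothetical sail acceleration of one millimeter per second squared (my illustrative number, not an ISAS figure):

```python
# A worked example of "gradual but relentless": a sail accelerating at a
# hypothetical 1 mm/s^2, applied continuously for one year from rest.
accel = 0.001                          # meters per second squared
seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds

v = accel * seconds_per_year
print(round(v / 1000, 1))  # km/s after a year -- far beyond chemical rockets
```

Even at that feeble push, a year of continuous thrust yields on the order of 30 kilometers per second, with no propellant spent.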

    The experimental deployment did not test the ability of the sail material to catch sunlight for propulsion, only the ability to unfurl a large, thin sheet in orbit. (None of the articles I found on this, including the project's English-language homepage at ISAS, indicated the area of the sails, only the thickness (0.0075 mm). If anyone finds a reference to the square meter area of the sails, drop me a note and I'll update this post.)

    August 20, 2004

    Microbot Helicopters

    Okay, so I really don't know what I'd do with one, but this Bluetooth-controlled microhelicopterbot made by Seiko Epson is pretty damn cool. Weighing in at just under 9 grams without the battery, this is the lightest wireless flying robot around. Bluetooth is used to send the control program -- so it's not a remote-control unit so much as a remote-programmed unit, kind of like the Mars landers -- and to receive pictures from the on-board image sensors.

    But what do you do with a wireless microhelibot? It's too big to serve as a mobile spycam, fortunately, and Bluetooth, while useful, is not a long-range medium, so individual applications are presumably limited. But if we think of the microroboflyers not as individual units, but as parts of a larger system, some ideas spring to mind. Imagine a network of sensors using small, cheap, rugged "smart dust" technologies, but mobile, like the "feral" robotic dog project at Yale. They'd be able to keep in touch with each other via an ad-hoc "mesh" network, extending the range of the pack well beyond the normal Bluetooth distance; as long as at least one remained within range of a more powerful transceiver, data from any of the units could be sent back to researchers/operators. They'd be perfect for emergency exploration of damaged buildings, or as tools for knowing nature through technology.

    So the question isn't "what would you do with one of these?" it's "what could you do with a swarm of these?" The answer, I suspect, is "quite a bit"...

    (Via Near Near Future -- which has a very cool picture of one in flight, check it out -- and Engadget)

    August 27, 2004

    The Personal Genome Project

    The first million base pairs of the human genome took four years to decode; the second million base pairs took four months. The rate of improvement in the computational ability to sequence DNA base pairs is progressing at a rate comparable to -- and occasionally faster than -- the famous "Moore's Law" doubling curve, a testament to both improvements in processor capability and improvements in process. Within the next ten years, possibly by the end of this decade, biologists will be able to fully sequence a given DNA sample in a matter of minutes. In other words, it will soon be possible to have not just a human genome sequence, but your human genome sequence. Among the many questions that will result: who owns the DNA, and who will own the process of telling you what your DNA holds?

    The Personal Genome Project (PGP), at the Harvard Molecular Technology Group & Lipper Center for Computational Genetics at Harvard Medical School, attempts to answer at least one of those questions. The PGP is an effort to make sure that an open, public domain approach to personal genome sequencing has a hold on the space, so as to better compete with the inevitable commercial efforts. The project is relatively new, and is still seeking participants.

    As the PGP technology is advancing rapidly we would like to begin discussing the best ways to recruit people to have their genome sequenced (in part or whole). This web page is a work-in-progress draft. Volunteers for designing this plan as well as potential volunteers for genome sequencing are welcome. This may involve ethicists, attorneys, database-security experts, medical records, management, fund raising, public health, public relations, education, etc.

    The founders of the PGP published recently (PDF) in Nature Reviews Genetics an examination of recent and potential advances in DNA sequencing, including an argument for the likely existence of a $1,000, 90 second genome sequencing system before 2010. The article looks beyond the technology to discuss the clinical, political, and ethical aspects of the ability to sequence an individual's DNA cheaply and quickly:

    In the case of Moore versus Regents of the University of California, [...] the court rejected the idea of [an individual's] property rights to the cells themselves, and that informed consent implies a right to information that is derived from the biological material itself. Fewer than half the states in the United States require informed consent for genetic testing, and there are no US federal laws that ban genetic discrimination for medical insurance or in the workplace. [...] A second category of explicit legal concern is that of patent law. In the United States, Europe and Japan, only portions of DNA that are non-obvious, useful and novel can be patented. ULCS [ultra-low cost sequencing] technologies will probably not be able to avoid the resequencing of patented genes. Interesting legal issues arise around the question of patients’ rights to have analysed (or to self-analyse) their own DNA sequence versus corporate interests that presumably own the rights to that analysis.

    "Interesting legal issues" indeed. In one of my recent science fiction scenario books, Broken Dreams,I posited the implementation of a "Genetic Rights Management" system, allowing those companies which develop advanced biotechnologies to prevent the unlicensed duplication of their patented gene sequences (such as by having a baby). What the PGP team warns of, however, goes beyond even that: your right to even read your own DNA could be challenged if your genome contains sequences already patented by a biotech firm.

    A public domain genome effort like the PGP is an attempt to forestall such a future. By making the technology for gene sequencing fast, cheap and, ultimately, out of any company's control, the Personal Genome Project serves two vital purposes: making knowledge as widely available as possible, and keeping knowledge as free as possible. We wish them success.

    August 28, 2004

    The Elevator Prize

    We've talked about space elevators ("beanstalks," "space bridges") a few times before. A ground-to-orbit elevator system would dramatically reduce the cost (and danger) of getting material into space, potentially opening it up for non-governmental exploration far more effectively than commercial/private launch vehicles. But that doesn't mean that beanstalk planners can't learn something from private spaceships -- or, more precisely, from the effort that seems to have catalyzed private space craft production.

    The Elevator:2010 site (which was empty when I looked at it in early July) has announced the first annual Space Elevator Climber Competition. The goal is to build prototype ribbon climbers -- a necessary mechanism for building an elevator -- which maximize speed and efficiency while minimizing weight. The prizes for the best teams at the competition: $50,000 for first, $20,000 for second, and $10,000 for third place.

    MSNBC has a lengthy article about the effort, but the details of the competition requirements can be found on the Elevator:2010 site itself:

    The competition provides the race track, in the form of a crane-suspended vertical ribbon, and a strong light source to power the climbers. Competing teams provide climbers, which have to use the power beamed to them and scale the ribbon while carrying some amount of payload. Climbers will be rated according to their speed and the amount of payload they carried.

    The climbers (unmanned, of course) will weigh 25-50 kg [55-110 lbs], and will ascend the ribbon at about 1 m/s [3.3 feet per second, or about 2.2 MPH].

    The beam source is a 10 kW Xenon searchlight (80 cm beam diameter, about 25% efficient), which should yield a climber power budget of about 500 watts.

    The ribbon is roughly 30cm (1 foot) wide by 1 mm thick, is about 60m (200 feet) long, and is tensioned to about 1 ton.
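Those specs hang together: the mechanical power needed to hoist a climber at 1 m/s sits just inside the ~500 watt budget. A quick sanity check, ignoring friction and conversion losses:

```python
# Sanity check on the climber specs: hoisting mass m at speed v against
# gravity takes P = m * g * v watts, ignoring losses.
g = 9.81  # m/s^2

def lift_power(mass_kg, speed_ms=1.0):
    return mass_kg * g * speed_ms

print(round(lift_power(25), 2))  # 245.25 W for the lightest climber
print(round(lift_power(50), 2))  # 490.5 W -- right at the ~500 W budget
```

The heaviest allowed climber has essentially no margin, which is presumably the point of the competition.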

    The competition will be held next summer, in the San Francisco Bay Area. You can bet we'll be there to watch it.

    August 31, 2004

    New Class of Extrasolar Planet

    We've posted in the past about notable discoveries of planets outside of our solar system. In every case, however, the planets identified (over 125 so far) were so-called "gas giants" -- planets like Jupiter or Saturn, not rocky "terrestrial" planets like Earth or Mars. Despite occasional anxious supposition that this may mean that Earth-like planets are vanishingly rare, the reality is that our current tools for finding planets outside the solar system work best at finding very large planets very close to their host stars.

    But that doesn't mean that's all we'll find. Today, NASA announced something new -- the first sub-Neptune-sized planets found outside the solar system. One is in the system 55 Cancri, about 41 light years away, and part of a system already known to have 3 gas giants; the other is in Gliese 436, only 30 light years away. They're each only about twice the diameter of Earth, and about 10-20 times the Earth's mass (Jupiter, by comparison, is 11 times the diameter, over 300 times the mass, and 1300 times the volume of Earth). Like nearly all the other extrasolar planets found so far, these planets orbit extremely close to their parent stars, much closer than Mercury does to the Sun. Given that proximity and their modest gravity, it's highly unlikely that planets this small could hold onto thick gas envelopes; therefore, these are not only the first sub-Neptune sized extrasolar planets we know of, they're also probably the first rocky -- terrestrial -- extrasolar planets found.
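The rocky-planet inference follows from simple scaling (a rough sketch; real density estimates require knowing the radius far more precisely):

```python
# Rough numbers behind the rocky-planet inference. Doubling Earth's
# diameter multiplies volume by 2^3 = 8, so packing 10-20 Earth masses
# into that volume implies a density at or above Earth's -- rock and
# metal, not gas.
volume_ratio = 2.0 ** 3  # (twice the diameter) cubed

for mass_ratio in (10, 20):
    print(mass_ratio / volume_ratio)  # 1.25, then 2.5, times Earth's density
```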

    (NASA put together an animated "fly through" of the 55 Cancri system, available at this link.)

    I know that the discovery of extrasolar planets ranks pretty low on the immediately world-changing scale. But longtime readers of WorldChanging should know by now that I like to think Big Picture and Long Term, and discovery of extrasolar planets, especially terrestrial ones, fits both patterns. Moreover, part of understanding our world is understanding its place in the universe. I, for one, am very happy to see this ongoing exploration.

    September 10, 2004

    Gigapixel Camera to Capture the Stars

    Sometime after 2010, the European Space Agency will be launching the Gaia mission, which is to create the most comprehensive map yet of our galaxy and beyond. Mission details include a stellar census as well as a search for a wide range of objects:

    Gaia will pinpoint exotic objects in colossal and almost unimaginable numbers: many thousands of extra-solar planets will be discovered, and their detailed orbits and masses determined; brown dwarfs and white dwarfs will be identified in their tens of thousands; some 50 000 supernovae will be detected and details passed to ground-based observers for follow-up observations; Solar System studies will receive a massive impetus through the detection of many tens of thousands of new minor planets, and even new trans-Neptunian objects, including Plutinos, may be discovered. Amongst other results relevant to fundamental physics, Gaia will follow the bending of star light by the Sun, over the entire celestial sphere, and therefore directly observe the structure of space-time.

    Although the mission was approved a few years ago, the ESA just released information on its primary cargo. Gaia will be outfitted with a gigapixel camera: a mosaic of 170 9-megapixel CCDs, linked together to form what will be the most sensitive and detailed camera ever put outside the atmosphere. Coupled with a 1.4 meter telescope, it will map objects down to "V=20" magnitude. Gaia will be stationed 1.5 million kilometers from Earth, at the Sun-Earth "L2" point, one of the locations where the combined gravitational pull of the Earth and the Sun allows for a stable position, and well away from the reflected light of the Earth and Moon.
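The "gigapixel" label follows directly from the mosaic figures quoted above; a quick check (the article gives only the per-CCD megapixel count, so this is approximate):

```python
ccd_count = 170
megapixels_per_ccd = 9

total_megapixels = ccd_count * megapixels_per_ccd
print(total_megapixels)          # 1530 megapixels
print(total_megapixels / 1000)   # about 1.5 gigapixels
```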

    September 16, 2004

    Eye at the Bottom of the World

    Who would ever think that building something to function deep in Antarctica in the middle of winter would be the cheap option?

    New Scientist reports that astronomers at the University of New South Wales, Australia, have proposed building a new telescope (an "Extremely Large Telescope," no less, with at least a 30 meter mirror) at Dome C, an Antarctic plateau at 75° south and over three kilometers above sea level. Preliminary tests of the location, using an 85 millimeter scope, showed that the plateau had extremely low atmospheric "jitter" because the location has very low wind speeds and little turbulence in the air. The researchers claim a two-meter telescope in that location -- a moderate size for a professional device -- would be able to image galactic phenomena with a clarity comparable to Hubble. A larger scope would be far better. Dome C could be home to the first serious Earth-based terrestrial planet-finder scope.

    According to PhysOrg, the UNSW proposal has another interesting feature: much of it would be built of "icecrete" -- snow compressed to concrete-hardness.

    Although a Dome C telescope would be inaccessible during use -- temperatures that deep in the Antarctic plunge as low as -86° C (-123° F) during winter -- it could be serviced during the summer for far less expense than a trip to an orbital telescope.
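For reference, the standard Celsius-to-Fahrenheit conversion behind that figure:

```python
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

print(c_to_f(-86))   # about -122.8, i.e. roughly -123 F
```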

    September 25, 2004

    Safe Nano, Green Nano

    PhysOrg yesterday had two nanotechnology-related reports of particular interest to WorldChangers.

    The first, Rice finds 'on-off switch' for buckyball toxicity, is a follow-up to a story we talked about in February: the possible danger of inhaled nanoparticles. In short, earlier this year nanotech researchers were surprised to find that buckyballs (an incredibly useful form of carbon) seemed to show high toxicity under certain conditions. But now, researchers at Rice University's Center for Biological and Environmental Nanotechnology have figured out that buckyball toxicity is dependent on the shape of the molecule -- and there are simple steps that can be taken to render the carbon nanoparticles one-ten-millionth as toxic. While this was a study of individual cell toxicity, not whole-body danger, it is a promising sign.

    The second, Nanotechnology to Create Green Hydrogen, is more-or-less a press release from the British firm Hydrogen Solar. Nonetheless, what it discusses sounds pretty intriguing: a significant improvement in the ability of solar-powered systems to "crack" hydrogen from water. The innovation is the use of a "nano-crystalline coating" of metal oxides which apparently makes the splitting of H2O into hydrogen and oxygen twice as efficient as previous systems. A bit of Googling shows that the development was announced in August; what's new is that they're about to run major tests in Las Vegas. (While this reminded me of a University of Massachusetts development we talked about last December, some of the elements don't quite match -- the efficiency numbers differ, and the UMass folks suggested that it might take 20 years for their process to be commercialized, not 10 months.)

    October 14, 2004

    Nano News

    Several nanotechnology-related items have popped up on my radar in the last few days. Here are some highlights:

    Nanotechnology, the Environment and Brazil
    Next week, October 18 and 19, the University of São Paulo will be hosting the First International Seminar on Nanotechnology, Society and the Environment. Reading the description gave me chills: this is exactly the right conversation to be having now, and in exactly the right place. WorldChanging Ally Mike Treder, executive director of the Center for Responsible Nanotechnology, will be speaking at the seminar both days. We'll definitely link to his report about the proceedings.

    Computerworld reports on "nanograss" -- a "bed of upright silicon posts a thousand times thinner than a human hair" developed at Bell Labs, with a variety of fascinating (and somewhat bizarre) properties. It could be used to create tiny "smart" heat sinks, liquid lenses, and nearly frictionless boat hulls, and could allow batteries to remain fresh on the shelf almost indefinitely. Such batteries would also have three to four times the power-to-weight ratio of ordinary power cells.

    Nanotechnology and Global Poverty
    The 1st Conference on Advanced Nanotechnology: Research, Applications, and Policy runs October 21-24, in Washington DC (October must be nanomonth). Among the technical and speculative seminars is "Applying Nanotechnology to the Challenges of Global Poverty," a presentation given by Bryan Bruns of the Foresight Institute. The abstract reads like a checklist of what I want to see more discussion about when thinking about nanotechnology. I won't be able to attend, but I will keep my eyes open for any details about this presentation.

    Finally, Wired reports about the increasing application of nanotechnology to water filtration.

    The NanoWater congress [last week in Amsterdam], which kicked off the Aquatech 2004 water-technology trade show, outlined how nanotechnology can create drinking water from contaminated water, salt water and all forms of waste water, including bong water (it is Amsterdam, after all).

    The promise of nanofiltration devices that "clean" polluted water, sifting out bacteria, viruses, heavy metals and organic material, is driving companies like Argonide and KX Industries, which developed technology used in Brita filters, to make nanotechnology-based filters for consumers. Two products incorporating nanotechnology are going to hit the market within the next year and are already being tested in developing nations.

    One wonders how well high-tech nanomaterial filters stack up against ceramic water filters made by Potters for Peace. Of course, it appears that the nanofilters may be able to handle the dirtiest of water, and even help with desalination, so they will certainly be an important addition to the worldchanger's toolkit.

    October 15, 2004

    The Methane Mystery

    The European Space Agency's Mars Express orbiter, which arrived at the Red Planet earlier this year, has been somewhat overshadowed by two successful landers and one failure: NASA's Mars Exploration Rovers 1 & 2 ("Spirit" and "Opportunity") still crawl along the Martian surface, well beyond their scheduled end-of-mission; conversely, the ESA's "Beagle II" lander, which was carried on the Mars Express, plummeted to Mars and destruction. When the lander crashed, much of the media seemed to forget about the orbiter, either focusing on the NASA missions or the controversy around the failed Beagle II. But the Mars Express mission, despite the lander disaster, has nonetheless been producing some valuable science (as well as some great photos). The most intriguing reports concern methane.

    Continue reading "The Methane Mystery" »

    October 26, 2004

    Pocket Translation of the Spoken Word

    It's a recurring element of near-future worlds: the phone (or video screen or implanted chip) that automagically translates spoken phrases to or from your own language, allowing you to converse freely with anyone. No longer shackled to a tour group or three months of an intensive language class, one could travel the world with ease, confident of the ability to communicate with the locals. I suspect this fantasy of a translation device is more prevalent among those of us in cultures less prone to learning multiple languages, but even multilingual global nomads would find occasional technical assistance useful. NEC's pocket spoken-word translation device, as discussed in a recent New Scientist, seems to be a step in this direction. Translating between Japanese and English, it's aimed at Japanese tourists.

    I have my doubts about how well it will work. Machine language translation is a hard problem; if you want to get a sense of the difficulty, just try entering a phrase into Google's "Language Tools" page, translating it to another language, then translating it again back into English. While there are certainly machine translation systems better than that on a freely-available webpage, I haven't seen evidence of any which are good enough to be of use beyond emergency circumstances. Translation of spoken language is even more difficult, in part because conversational speech recognition technology isn't quite ready yet, and in part because so much of human communication goes beyond words. Tone, body position, gestures, and especially the context of the world around you all play into the meaning of the words chosen.

    Still, even if it doesn't work all that well, it's a harbinger of an inevitable development. At some point in the next couple of years, we'll have very rough spoken translation tools available to us; shortly thereafter, rough spoken translation tools will be embedded into mobile phones. After that, it's just a case of improvements in the hardware allowing for faster processing and improvements in the algorithms allowing for less-error-prone translation. I would expect that, by 2015 or so, it will be as hard to buy a mobile phone without a translation chip as it is now to buy one without a camera.

    Eventually those two technologies could converge, or at least intertwine. I would expect a "what does that say?" cameraphone application to show up any month now. And it's possible that a camera, allowing the translation system to recognize body position, gesture and perhaps even context as well as the tone and the words, may be the necessary step to making machine translation really work.

    October 27, 2004

    New Hominid Species Unearthed

    "There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophies..."

    Archaeologists working on the Indonesian island of Flores, east of Java, have discovered the skeletal remains of a new hominid species -- a relative of Homo sapiens, but a different branch of the family tree. Two aspects make this story particularly compelling: the hominid, tentatively named Homo floresiensis, stood only three feet tall as an adult; and it died out in relatively recent times, only 12-13,000 years ago, when a nearby volcano killed off much of the island's life. The full report will appear in next week's Nature; detailed press accounts can be found at New Scientist, the BBC, and the Washington Post.

    The evidence currently suggests that H. floresiensis is an offshoot of Homo erectus, a hominid species ancestral to Homo sapiens. It is most decidedly not modern human: the skull morphology is all wrong, and the multiple skeletons found from different layers of the cave site show that the dwarfism was a population-wide characteristic. The species appears to be the first example of a higher primate evolving via what biologists call the "island rule:" an isolated population, in an environment with limited resources but no real predators, will tend towards species dwarfism. The prehistoric dwarf elephant Stegodon is known to have inhabited Flores Island during the same period.

    Continue reading "New Hominid Species Unearthed" »

    December 6, 2004

    Hydrogen 101

    Perhaps the most widely-accepted vision of what a greener future will look like is that of the "Hydrogen Economy." Everyone from ecofuturists like the Rocky Mountain Institute to petroconservatives like the Bush administration extols the virtues of hydrogen-powered vehicles. We talk about hydrogen as a future fuel here on WorldChanging with some frequency; it remains our best bet for moving beyond greenhouse gas-emitting transportation technologies.

    But the move to hydrogen is not without its challenges. We've mentioned that a few times here, but if you're interested in the future of transportation and energy, it's a subject worth understanding. Fortunately, the current issue of Physics Today has a lengthy and detailed article covering the current issues and possible developments in the efforts to build the hydrogen economy. (While not overly-complex, it is a Physics Today piece, so it assumes some comfort with scientific and technical terms.)

    The challenges to the shift to hydrogen are in three fundamental areas:

    Continue reading "Hydrogen 101" »

    December 18, 2004

    Give My Creation... LIFE!

    What if you could create life in a test tube?

    The BBC reports on work done at New York's Rockefeller University which comes closer than ever to just that. Vincent Noireaux and Albert Libchaber, at Rockefeller's Center for Studies in Physics and Biology, have constructed what they call "vesicle bioreactors" which can express genes and have crude cell-like parts.

    The soft cell walls are made of fat molecules taken from egg white. The cell contents are an extract of the common gut bug E. coli, stripped of all its genetic material.

    This essence of life contains ready-made much of the biological machinery needed to make proteins; the researchers also added an enzyme from a virus to allow the vesicle to translate DNA code.

    When they added genes, the cell fluid started to make proteins, just like a normal cell would.

    A gene for green fluorescent protein taken from a species of jellyfish was the first they tried. The glow from the protein showed that the genes were being transcribed.

    The researchers published their work in the Proceedings of the National Academy of Sciences; the article requires a paid subscription, but a free abstract is available here.

    The researchers go out of their way to state that these are not living organisms, and are just biological machines which happen to replicate many of the functions of living organisms. Even taking their claims at face value, it's clear that we're not too far from being able to construct entirely new organisms in laboratories, and have made significant progress along those lines since the last time we mentioned synthetic biology here on WorldChanging.

    If ever there was a situation that called for the precautionary principle, this is it. But I would like to see a real application of precautionary analysis, not simply knee-jerk prohibition. Wholly synthetic creations may actually be a safer pathway than altering more complex extant organisms when it comes to efforts in bioremediation or the like, and a precautionary approach should pay attention to those possibilities. Simpler organisms would have fewer opportunities for unexpected interactions and outcomes, and "kill switch" genes (making the synthetic cell require a rare resource to survive, for example) would be easier to introduce. Synthetic biology has obvious potential dangers, and these should not be underestimated, but no one should assume that the only possible outcome of this kind of research is negative.

    December 21, 2004

    Neural Interfaces

    One of the classic tropes of 80s cyberpunk is "jacking in" -- connecting one's neural interface from a hardware-augmented brain to the computer networks at large. The neural interface was one of those science fiction technologies that made for good stories, but as a real-world development, it raised all sorts of questions. Who'd want to go through the surgery for that? What about upgrading when better technology came out? And who's going to beta test the thing?!?

    Well, get ready, because we're about to get some answers.

    Cyberkinetics, a Massachusetts company, has launched the first human trials of their new BrainGate neural interface. This won't be for console cowboys trying to make their big cyberspace break, but for the physically disabled needing communication and activity.

    The System could potentially be used to help increase the independence of people with disabilities by allowing them to control various devices with their thoughts. Through their control of a personal computer, users of the BrainGate™ System may be able to control a variety of devices to complete everyday tasks such as composing an email, answering the telephone and controlling a television...

    The principle of operation of the BrainGate™ Neural Interface System is that with intact brain function, neural signals are generated even though they are not sent to the arms, hands and legs. These signals are interpreted by the System and a cursor is shown to the user on a computer screen that provides an alternate "BrainGate pathway". The user can use that cursor to control the computer, just as a mouse is used.

    Not surprisingly, the other probable candidate for early adoption of this technology is the military.

    All the usual caveats apply regarding experimental technology, but the chances are good that the BrainGate (or something very much like it) will eventually be a mechanism for the severely physically disabled to continue to be productive and engaged with the world. And while it's undoubtedly enormously expensive now, there's no reason why such systems wouldn't be just as subject to Moore's Law as any other digital device. Within a decade of its initial release, the costliest part of a neural interface would likely be the surgery.

    While assistive technology for the disabled quite often picks up mainstream uses, I don't see too many people choosing to go under the knife for an implant. The reason is the combination of the risks of surgery and the continued improvement of the technology. Who'd want to choose between being stuck using (effectively) the first computer they ever get and having brain surgery every couple of years? I suspect that the next step in the technology -- driven both by mainstream users and the desire to bring down implementation costs -- will be a non-invasive version, able to pick up on changes in brain electrical activity without opening the skull. And then make it wireless...

    (Via CybDem)

    December 23, 2004

    Voyage Into Space

    Two tangentially related space items today -- one about Mars, the other Saturn, both from the ESA.

    If you've paid any attention to space news over the past year, you know that NASA landed two rovers on Mars; these have been sending back fascinating images and data about the Martian geology and environment. If you have followed the news a bit more closely, you probably heard that the European Space Agency also sent a lander to Mars this year -- Beagle II, named after the ship that carried Darwin -- only to have it smack unceremoniously into the Martian surface. But it seems only real Areophiles know about Mars Express, the ESA orbiter that carried the ill-fated Beagle II. Mars Express continues to work just fine, thank you very much, and has been sending back some of the best color images of Mars I've ever seen, along with some potentially revolutionary data.

    Continue reading "Voyage Into Space" »

    December 29, 2004

    Asteroids, Tsunamis, and Knowing When To Shout

    The December 26 tsunami was a deadly reminder that even exceedingly rare natural events can happen, and can have devastating results. The tragedy was compounded by the fact that warnings could have been issued and responded to, but the systems to do so weren't available. But around the same time, we came very close to having a second reminder, one which could have led to an even more terrible result.

    On December 23, astronomers noted that asteroid 2004 MN4, which orbits the Sun near the Earth, had a 1-in-300 chance of hitting the Earth in April of 2029. This corresponded to a rating on the Torino Scale of 2 -- a slight chance of impact, with widespread regional damage if it happened. The Torino Scale was developed as a way of putting asteroid strike possibility announcements in context, to make sure that the level of risk -- so far, always quite low -- was clear. Every time a possible asteroid hit has been announced, refined study of the rock's orbit over the subsequent few days eliminated the possibility, lowering the Torino Scale rating to 0.

    Late on December 24, the Torino Scale rating of 2004 MN4 was upped to 4, with a 1-in-60 chance of hitting the Earth. On December 26, the 1-in-60 chance was quietly increased to 1-in-37. While still a 97% chance of missing the Earth, the Torino 4 rating was by far the highest level ever given to a potential impact. Finally, on December 27, astronomers were able to refine the asteroid's orbit sufficiently to determine that 2004 MN4 will miss the Earth in 2029, albeit only by a few tens of thousands of miles.
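The odds quoted in those announcements convert straightforwardly between "1-in-N" form and the percentage form used above; a quick sketch:

```python
def miss_probability(one_in_n):
    """Convert a 1-in-N impact chance to the percent chance of a miss."""
    return (1 - 1 / one_in_n) * 100

for odds in (300, 60, 37):
    print(f"1-in-{odds}: {miss_probability(odds):.1f}% chance of a miss")
# 1-in-37 works out to about a 97.3% chance of missing -- the "97%" above
```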

    Continue reading "Asteroids, Tsunamis, and Knowing When To Shout" »

    January 4, 2005

    Biomimetic Adhesives -- The Lessons of the Gecko

    As far as lizards go, geckos are pretty amazing. Terrifically successful -- there are 850 different gecko species -- the gecko body is also quite energy-efficient. One species, the Frog Eyed Gecko from the Gobi desert, can move more than three times as far per unit of energy as other creatures of similar size. But what makes geckos truly amazing is their uncanny ability to stick to pretty much any surface. Gecko foot adhesion "leaves no residue, is directional, detaches without measurable forces, is self-cleaning, and works underwater, in a vacuum, and on nearly every surface material and profile." In 2002, biologist Kellar Autumn at Lewis & Clark College in Oregon discovered that the adhesion came from van der Waals forces, minute molecular-scale attraction (spider feet work in a similar, albeit simpler, manner). Each gecko foot is covered with millions of tiny hairs, or setae, which branch into nanoscale tips, or spatulae; each seta is strong enough to lift 20mg. The combined adhesive power of a gecko's four feet is over 90 lbs.
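Those figures are roughly self-consistent: at 20 mg per seta, it takes only about two million setae across four feet to reach the quoted total. A hedged sketch -- the two-million count is an illustrative assumption, not a figure from the article:

```python
MG_PER_SETA = 20              # adhesion per seta, from the article
SETA_COUNT = 2_000_000        # assumed total across four feet (illustrative)
LBS_PER_KG = 2.20462

total_kg = MG_PER_SETA * SETA_COUNT / 1_000_000  # mg -> kg
total_lbs = total_kg * LBS_PER_KG
print(round(total_lbs))   # ~88 lbs, in line with the quoted ~90 lbs
```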

    Autumn's lab has continued to work on revealing the secrets of gecko feet, and yesterday unveiled their latest discovery: why gecko feet actually get cleaner in use. It turns out that the solution wasn't in biochemistry, but in biogeometry. The shape and structure of the spatulae will discard small dirt particles even as they continue to adhere to more stable surfaces. The implications of this discovery are wide-ranging. As Autumn puts it:

    This means that synthetic self-cleaning adhesives could be fabricated from a wide variety of materials. The possibilities for future applications of a dry, self-cleaning adhesive are enormous. We envision uses for our discovery ranging from nanosurgery to aerospace applications.

    Or, as he's quoted in the Times, "We're not talking about just the glue of the future, we're talking about the screw of the future." Working with Ron Fearing at UC Berkeley, Autumn's research has already led to larger-scale synthetic versions of gecko setae. In 2003, they managed to create artificial setae with adhesion on the order of 0.5 newtons per square centimeter; their eventual goal is an adhesion force equivalent to that of gecko setae, 10 newtons per square centimeter.
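To put those adhesion numbers in context: at the gecko-equivalent goal of 10 N/cm², even a modest patch of synthetic setae could hold serious weight. A rough sketch -- the patch size is an illustrative assumption:

```python
G = 9.81  # standard gravity, m/s^2

def holding_mass_kg(adhesion_n_per_cm2, patch_cm2):
    """Mass a patch can support if its adhesion acts against gravity."""
    return adhesion_n_per_cm2 * patch_cm2 / G

# 2003 synthetic setae (0.5 N/cm^2) vs. the gecko-equivalent goal (10 N/cm^2),
# for an assumed 100 cm^2 (10 cm x 10 cm) patch.
print(round(holding_mass_kg(0.5, 100), 1))   # ~5.1 kg
print(round(holding_mass_kg(10, 100), 1))    # ~101.9 kg
```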

    One feature of systems of millions of tiny attraction points is that they can be both incredibly strong and readily detached. A gecko is able to peel its feet up and run at a good clip, or stand still and be nearly impossible to pull from a window or wall. As Autumn suggests, geckomimetic adhesive could be of enormous value in object construction, as the adhesion force could be quite strong. At the same time, being able to simply peel apart components -- with no chemical residue -- would enhance our ability to design for disassembly, an important part of cradle-to-cradle thinking.

    January 10, 2005

    Breakthrough in Paint-On Solar

    A quantum dot may be tiny, but this development has the potential to be quite big.

    Researchers at the University of Toronto have developed a new form of photovoltaic material using "quantum dots" -- confined sets of electrons with unusual optical and electronic properties -- embedded into a thin polymer film. The material can generate the photoelectric effect using infrared light, and is the first photoelectric polymer to have significant infrared sensitivity. When working across infrared and visual spectra, it has a photoelectric efficiency of 30% -- six times better than other polymer photovoltaics. 30% efficiency would make quantum dot polymer solar cells competitive with traditional silicon-and-glass panels, and far more functional. Quantum dot polymer material could be used in device cases, and Ted Sargent, one of the researchers responsible for this development, claims that the material could readily be spray-applied or painted on.

    "We made particles from semiconductor crystals which were exactly two, three or four nanometres in size. The nanoparticles were so small they remained dispersed in everyday solvents just like the particles in paint," explains Sargent. Then, they tuned the tiny nanocrystals to catch light at very long wavelengths. The result – a sprayable infrared detector.

    As this suggests, the implications of the discovery are broader than just photovoltaics. The quantum dot polymers have a sensitivity to infrared which will be of enormous value in medical imaging, fiber optic communications, and sensor technologies. As the polymer could be readily woven into fabric, it could enable better wearable biosensors, and would definitely be an enabling technology for fabric computers. The big win, of course, would be the possibility of an easily-added solar power layer to the external shell of any device using electricity. It doesn't have to replace plug-in power completely to be a significant efficiency improvement.
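The jump from roughly 5% to 30% efficiency is what would make device-shell solar worthwhile. A rough sketch of the power involved -- the irradiance figure and surface area are illustrative assumptions, not from the article:

```python
SOLAR_IRRADIANCE = 1000  # W/m^2, rough full-sun figure (assumption)

def shell_power_watts(efficiency, area_m2):
    """Electrical power from a photovoltaic surface in full sun."""
    return SOLAR_IRRADIANCE * efficiency * area_m2

laptop_lid = 0.05  # m^2, an assumed laptop-lid-sized surface
print(round(shell_power_watts(0.05, laptop_lid), 1))  # 2.5 W at ~5% polymer efficiency
print(round(shell_power_watts(0.30, laptop_lid), 1))  # 15.0 W at the reported 30%
```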

    The research will be published in the February 2005 edition of Nature Materials, but is now available for download by subscribers. A summary can be found here, along with supporting graphics and materials (free subscription required).

    January 13, 2005

    Global Dimming, Global Warming, and Bad Reporting

    Reports that British researchers are claiming that cutting back the use of fossil fuels will make global warming worse are popping up on the web today, and many strike the same pose in headline and text as this piece from the otherwise reliable Reuters, which mixes an alarmist lede with the long-discredited crap that "scientists differ" about global warming. The implication of the article is that attempts to fight climate disruption are silly and will just make matters worse. Because this story will undoubtedly be racing around the blogosphere (especially on the "it's not happening, we didn't make it happen, and it will hurt us too much if we stop making it happen" circuit), here's what the report really says.

    Global Dimming is the observed effect of increases in particulate matter in the atmosphere. There has been a substantial drop in the amount of sunlight hitting the planet over the past half-century -- 22% globally, and up to 30% in some locations. It seems to be caused by pollution from burning coal, oil and wood. Because it cuts the amount of sunlight hitting the ground, it cuts the amount of heat trapped by greenhouse gases, and that has troubling implications.

    Continue reading "Global Dimming, Global Warming, and Bad Reporting" »

    January 14, 2005

    Organic Polymer Electronics

    We pay attention to developments in polymer electronics for a couple of reasons: they can be flexible, meaning that they can have applications beyond traditional computing devices; and they can be printable, meaning that they can be produced very inexpensively, and potentially even via next-generation fabbers. The technology is still fairly immature, but this week saw some details (and pictures) of a pretty interesting breakthrough.

    PolyIC, a German start-up partially funded by Siemens AG, announced in November that it had developed prototype entirely-plastic RFID tags operating at 600 kilohertz, faster than any previous polymer circuit. More details have now emerged:

    The developers have created the world’s fastest (600 kilohertz) integrated circuit made of organic material. What’s more, they have succeeded in using printing techniques to produce highly stable circuits made of polymers, something no other group of researchers in the world has achieved, according to information released by PolyIC. The distance between the two conductors is less than 50 micrometers, about as thin as a human hair. These chips even function after being stored for two days at a temperature of 60 degrees Celsius and at 100 percent humidity, and will continue to work until temperatures climb above 120 degrees Celsius.

    PolyIC is targeting a per-chip price of under one cent once these are commercially available next year, bringing us closer to the world WorldChanging Ally #1 Bruce Sterling discusses when he talks of "spimes" and the "internet of things." RFID is a reasonable first application of this technology, but printable polymer electronics will have a broader scope as the technology matures. The idea of printable electronics is particularly interesting, as it maps nicely to printable photovoltaics.

    January 17, 2005


    As harbingers of the future go, this one has it all: self-assembly, biomimicry, cybernetic integration of biology and machine, and revolutionary potential for both medical applications and swarm robotics. It's very much the kind of scientific report that makes one feel like this is, in fact, the 21st century. As with many such breakthroughs, this one will take some time to play out, but even this early stage is pretty amazing.

    Scientists at the University of California, Los Angeles, department of bioengineering have developed a method of growing frog heart muscle cells linked to an artificial skeletal framework, and powered by glucose in solution. Unlike previous approaches using developed muscles to power artificial systems, the muscles were grown, self-assembling along a polymer framework. Once fully attached to the frame, the muscles could contract and expand, moving the entire biobot along.

    In this system, individual cells grow and self-assemble into muscle bundles that are integrated with micromechanical structures and can be controllably released to enable free movement. Having realized such an assembly [...] we demonstrate two potential applications: a force transducer able to characterize in situ the mechanical properties of muscle and a self-assembled hybrid (biotic/abiotic) microdevice that moves as a consequence of collective cooperative contraction of muscle bundles.

    Near Near Future points to a BBC News piece for details; a New Scientist article also fills in the specifics.

    Muscle-powered microelectromechanical systems (MEMS) represent an attractive alternative to micromotors. They could operate inside the human body by feeding on glucose in the blood. "It could be used for micro-surgery," says Jeff Xi, one of the team. "Perhaps this could be used to push away plaque in an artery."

    But integrating biological and man-made materials could have a variety of potential applications. The technique could, for example, enable paralysed patients to breathe without the aid of a ventilator by stimulating the phrenic nerve - which controls the movement of the diaphragm - with a small electrical pulse.

    [...] And more fantastic ideas have been proposed by NASA, which has provided funding for the project. The US space agency hopes that swarms of muscle-powered microbots could one day repair damage to remote spacecraft automatically.

    The research was reported in Nature Materials; as usual, an abstract and supplementary materials are freely available (site sign-up required), but the full article requires a subscription to the journal. In this case, the supplementary links are well worth checking out: two QuickTime movies of the musclebot in action!

    January 20, 2005

    Bioprinters


    You'll never look at your ink-jet printer the same way again.

    Researchers at the University of Manchester, UK, have developed a process for building human skin, bones and organs to spec using ink-jet printer technology.

    This breakthrough overcomes problems currently faced by scientists who are unable to grow large tissues and have limited control over the shape or size the tissue will grow to. It also allows more than one type of cell to be printed at once, which opens up the possibility of being able to create bone grafts.

    "Using conventional methods, you are only able to grow tissues which are a few millimetres thick, which is fine for growing artificial skin, but if you wanted to grow cartilage, for instance, it would be impossible," Professor Derby says.

    The key to the advance which Professor Derby and his team have made is the innovative way in which they are able to pre-determine the size and shape of the tissue or bone grown.

    Using the printers, they are able to create 3-dimensional structures, known as 'tissue scaffolds'. The shape of the scaffold determines the shape of the tissue as it grows. The structures are created by printing very thin layers of a material repeatedly on top of each other until the structure is built. Each layer is just 10 microns thick (1,000 layers equals 1cm in thickness).
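The layer arithmetic scales in the obvious way; here's a quick sanity check of the numbers quoted above (a back-of-the-envelope sketch, not anything from the Manchester team):

```python
# Each printed layer is 10 microns thick, so a 1 cm scaffold takes 1,000 layers.
LAYER_UM = 10  # microns per layer, per the article

def layers_for(height_cm):
    # 1 cm = 10,000 microns; round to avoid floating-point dust
    return round(height_cm * 10_000 / LAYER_UM)

print(layers_for(1))    # 1000 layers for a 1 cm structure
print(layers_for(0.3))  # 300 layers for a few-millimetre tissue
```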

    [...] Professor Derby believes the potential for this technology is huge: "You could print the scaffolding to create an organ in a day," he says.


    Continue reading "Bioprinters" »

    February 3, 2005

    The Crossbar Latch

    It's become accepted wisdom that Moore's Law -- the pace at which transistor density increases (or, roughly, the pace at which computers keep getting faster) -- will run into a fabrication wall fairly soon. Etching chips already requires the use of high-energy lithographic techniques, and quantum effects at the smaller and smaller distances between components are becoming harder to deal with. For some pundits, this means the end of the steady growth of computer improvements. They're right, but not in the way they expect. The imminent demise of silicon transistors has led researchers down new pathways with far greater potential than mere doubling every 18 months.

    Last week's Journal of Applied Physics contained an article by physicists from HP's Quantum Science Research center entitled "The crossbar latch: Logic value storage, restoration, and inversion in crossbar circuits." The article describes the development of nanoscale circuits which perform the core logical functions of transistors -- AND, OR, and NOT.

    The experimentally demonstrated latch consists of a single wire acting as a signal line, crossed by two control lines with an electrically switchable molecular-scale junction where they intersect. By applying a sequence of voltage impulses to the control lines and using switches oriented in opposite polarities, the latch can perform the NOT operation, which, along with AND and OR, is one of three basic operations that make up the primary logic of a circuit and are essential for general computing. In addition, it can restore a logic level in a circuit to its ideal voltage value, which allows a designer to chain many simple gates together to perform computations.
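For readers wondering why NOT (together with AND and OR) is enough for "general computing": any Boolean function can be composed from those three operations. A quick illustrative sketch in ordinary software logic -- this models the algebra, not the physical latch:

```python
# The three operations the crossbar latch architecture provides:
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return 1 - a

# Any other gate can be composed from them; XOR, for example:
def XOR(a, b):
    # a XOR b == (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # prints the XOR truth table
```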

    The BBC report on the research gives a good explanation of how it works, and the usual disclaimers apply -- still years away, may not work in the way initially thought, etc. Still, expect to see more of these kinds of announcements over the rest of the decade. The "end of silicon" threat was recognized a while back, and most of the big IT research groups -- Intel, IBM, Motorola, along with HP -- started their post-silicon teams long ago. That research should soon be bearing fruit. Most of it will almost certainly involve nanotechnology, but be prepared for interesting results with polymers and even biotechnology.

    February 13, 2005

    IDFuel on the Cell

    I had noted the announcement of the IBM-Sony-Toshiba "Cell" processor last week, but had not paid it close attention. While (as a relative of the PowerPC architecture) it may end up in future-generation Macs, its initial use will be in the Playstation 3; console game systems really aren't on my radar these days. However, IDFuel's Dominic Muren gave me a well-deserved (virtual) smack upside the head -- the Cell processor looks to be a pretty big deal for some distinctly worldchanging-relevant reasons.

    If you haven't heard of it, the Cell is a new processor design which starts out at speeds well exceeding the best from Intel, with the potential to run ten times the speed of today's fastest computers (and you thought Moore's Law was dead...). That's nice, but faster processors are hardly a surprise. One aspect that makes it interesting is its "multi-core" architecture: each processor in turn comprises multiple sub-processors, each able to handle tasks independently (even running different operating systems). What makes the Cell very interesting, however, is that the task-shuffling between and among cores isn't limited to a single Cell processor: the chips can send tasks across networks, creating ad-hoc distributed computing groups. And since the Cell runs at relatively low power -- the one used in the Playstation 3 will likely consume 30 Watts, the equivalent of a mobile Pentium -- it can be used in a much wider array of hardware than traditional PC processors. IBM, Sony and Toshiba are already talking about the Cell as being useful for mobile devices (phones, PDAs) and home electronics. In a Cell-enabled environment, not only can various devices talk to each other, they can share computing resources.
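To make the task-shuffling idea concrete, here's a toy sketch of the general pattern: one queue of work units dispatched across a pool of processing elements, whether on-chip cores or networked peers. This is purely illustrative (the names are invented) and has nothing to do with the Cell's actual programming interface:

```python
from collections import deque

class Core:
    """A stand-in for any processing element -- local sub-core or network peer."""
    def __init__(self, name):
        self.name = name
        self.done = []
    def run(self, task):
        self.done.append(task())

def schedule(tasks, cores):
    # One shared queue of work; dispatch round-robin to whichever core is next.
    queue = deque(tasks)
    i = 0
    while queue:
        cores[i % len(cores)].run(queue.popleft())
        i += 1

# Two local cores plus a hypothetical networked peer share one workload.
cores = [Core("spe0"), Core("spe1"), Core("net-peer")]
schedule([lambda n=n: n * n for n in range(6)], cores)
print([c.done for c in cores])  # [[0, 9], [1, 16], [4, 25]]
```

The point of the sketch is the design idea: because tasks are self-contained units on a queue, it makes no difference to the scheduler whether the element draining the queue is on the same die or across a network.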

    Continue reading "IDFuel on the Cell" »

    February 15, 2005

    HIV vs. Cancer

    The human immunodeficiency virus (HIV) is pretty remarkable in its ability to spread throughout the body and, as a retrovirus, to rewrite the DNA of the cells it attacks. But the part of the virus genome which makes it so prolific is not the same part which triggers such a deadly disease. It is possible, in fact, to remove the disease-causing payload of HIV entirely, creating an "impotent" strain of the virus. As will be reported in the upcoming edition of Nature Medicine (abstract online), such a neutered form of HIV has been used by a UCLA research team to target melanoma cells in mice -- in effect, to lay the groundwork to use HIV to fight cancer.

    The UCLA team employed a two-step approach to transform HIV into a cancer-seeking machine. First, the scientists used a version of HIV from which the viral pieces that cause AIDS had been removed. This allowed the virus to infect cells and spread throughout the body without provoking disease.

    "The disarmed AIDS virus acts like a Trojan horse – transporting therapeutic agents to a targeted part of the body, such as the lungs, where tumors often spread," said Chen [Irvin S.Y. Chen, Ph.D., director of the UCLA AIDS Institute], a professor of medicine, microbiology, immunology and molecular genetics and a member of the Jonsson Comprehensive Cancer Center at the David Geffen School of Medicine at UCLA.

    Continue reading "HIV vs. Cancer" »

    February 26, 2005

    The Seas of Elysium

    Is there life on Mars?

    Not little green men, of course, but bacterial life. This last week has seen a flurry of stories coming from the Mars Express conference in the Netherlands relating to the possibility of life on the Red Planet. Solid, irrefutable proof isn't here, but a growing number of discoveries are pointing to the very real possibility that the fourth planet in our system may harbor tenacious microbes of its own.

    The big announcement this week was the discovery of what seems to be a sea of pack ice frozen just underneath the ground in Elysium Planitia. Water ice is by no means unknown on Mars; the planet's north pole is largely covered by it. But Elysium lies just north of the planet's equator, where summertime temperatures can actually come close to 0° C. The ice appears to be covered in ash, protecting it from sublimating away in the thin atmosphere. The coverage also makes the presence of ice hard to confirm through direct observation, and while other theories of what could have caused the unusual formations don't fit the facts as well, only a lander able to drill down through the soil could confirm or refute the hypothesis.

    Continue reading "The Seas of Elysium" »

    March 8, 2005

    Son of Sonofusion

    A little over a year ago, we posted a piece about research done at Purdue on "sonofusion" -- the energy released when sound waves compress bubbles in liquids. It's been known for almost a century that pulsing sound through liquid can cause flashes of light, a process called sonoluminescence; some scientists believe that the process results in high enough temperatures that fusion is possible. But many people are skeptical -- in order for this to be fusion, you have to see (among other things) plasma generated in the bubble.

    Well, guess what?

    Continue reading "Son of Sonofusion" »

    March 14, 2005

    Biological Engineering

    Last month, the Massachusetts Institute of Technology inaugurated its first new major in 29 years (PDF). The field of study? Biological Engineering.

    Biological Engineering is not the same as what is commonly called "biotechnology" or "genetic engineering." It is the application of mathematically-driven engineering principles to the construction of novel genetic structures; in contrast, genetic engineering is often a trial-and-error process, with numerous opportunities for and examples of unanticipated results. Many of the reasonable concerns about GMO foods and animals come from this hit or miss aspect of biotech. Biological Engineers have a more systematic approach, and use an increasingly deep understanding of how DNA works to then make microorganisms perform narrowly specified tasks.

    Last Thursday, the Guardian ran a lengthy article on MIT's Biological Engineering program, covering its history and its intent.

    (Note: as of this afternoon, the MIT Biological Engineering website appears to be down.)

    Continue reading "Biological Engineering" »

    Technology and Science, Around the World

    This month's Technology Review has a fascinating set of stories about how technological development and problems are viewed in seven different countries: China, Chile, Brazil, the United States, South Africa, Germany and the Netherlands. The articles are written by locals (or regional authors), sometimes even people involved in the technological efforts described. The introductory piece, What Matters Most Depends On Where You Are, features a nifty set of maps displaying technological aspects of different parts of the world, from Internet and mobile phone use to production of genetically modified crops. Some of the statistics may be a bit surprising -- were you aware that Argentina was the second largest producer of GMOs? -- while others will confirm suspicions -- the map of Internet access per capita is very nearly the precise opposite of the map of cost of Internet access.

    And, you know, there's something about the issue's cover that seems strangely familiar...

    Some interesting tidbits:

    Continue reading "Technology and Science, Around the World" »

    March 22, 2005

    For Space, For Relief

    Just in time for World Water Day, AP reports that a water purification system designed by NASA for long-term space flights has been licensed by a humanitarian relief group to bring fresh water to areas in need at a fraction of the cost of shipping in clean water. Designed to be able to turn gray water, urine and even sweat into pure H2O, the system can handle water from wells poisoned by dead animals and contaminated by ocean salt water (such as in post-tsunami South East Asia). The humanitarian group, Concern for Kids, is set to start production next month, and plans to send 13 mobile water purification systems to Iraq and 12 to South East Asia by this fall.

    The AP report was fairly light on details, but a bit of searching around NASA's website dug up more information. NASA's Water Recovery System (WRS) comprises a urine processing device and a refrigerator-sized system called the Water Processor Assembly (WPA), produced in coordination with Hamilton Sundstrand Space Systems International. The WPA can produce 35 gallons of fresh water every day; the device itself weighs nearly three-quarters of a ton, and requires over 900 watts of power to operate. A 2003 page describes some of the work going into the development of advanced water recovery technology; primary processing uses both chemical and biological methods of treating water.

    The two advantages of this system for relief efforts appear to be its relatively compact size and its relatively low cost. The AP report claims the system runs $29,000 in equipment costs plus 3 cents/gallon, presumably for materials, as compared to nearly $400,000 for a stationary setup. Coupled with a solar/wind+battery system for off-grid power, this could be a perfect system for re-imagined, transformed relief efforts worldwide.
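Running the AP's numbers makes the gap vivid. (This treats the $400,000 stationary setup as an equipment-only figure, since its operating costs aren't given; a rough sketch, not a full cost model.)

```python
# Figures from the AP report quoted above:
MOBILE_EQUIP = 29_000      # dollars, equipment cost of the mobile unit
PER_GALLON = 0.03          # dollars per gallon, presumably materials
STATIONARY_EQUIP = 400_000  # dollars, stationary setup

def mobile_cost(gallons):
    return MOBILE_EQUIP + PER_GALLON * gallons

# One unit at its 35 gallons/day capacity, run continuously for a year:
gallons_per_year = 35 * 365
print(round(mobile_cost(gallons_per_year)))  # about $29,383 -- far under $400,000
```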

    March 23, 2005

    Genetic Backup Files

    The current issue of Nature includes a report by biologists at Purdue University about the Arabidopsis thaliana plant's ability to reverse mutations in subsequent generations. While the article itself is only available to subscribers, summaries are available at Nature News, the New York Times and the Washington Post, among others. This is one of those discoveries that sounds a little esoteric at first, but could have some pretty important implications.

    In the traditional understanding of genetics, organisms which reproduce sexually only express mutations when both parents carry the trait, while organisms which reproduce asexually will pass any mutation along to subsequent generations. Arabidopsis thaliana, a mustard weed, is of the latter type, reproducing through self-fertilization. The plants being studied had developed a mutation known as "hothead," where the flowers were fused (as in the photo). The researchers noticed that about 10% of the offspring of the mutated plants had reverted to the normal configuration and, when examined, were found to carry the normal non-mutated genes, the same as the grandparent plants. In effect, the mutation had been wiped away.

    Continue reading "Genetic Backup Files" »

    April 6, 2005

    Reversible Thermoelectric Nanomaterials

    This is one of those stories that may seem a bit dense and technical at first, but should make you say "woah" (in your best Keanu Reeves voice) when the implications hit.

    Two physicists -- Dr. Tammy Humphrey, Australian Research Council Fellow, and Dr. Heiner Linke, at the University of Oregon -- have determined that a particular structure and configuration of nanowires can have remarkable thermoelectric properties. Electricity can be generated from heat differentials across materials; historically, applications of this thermoelectric effect have been terribly inefficient, generally working at about 15% of maximum possible efficiency (the so-called Carnot limit). In a paper published in Physical Review Letters (PDF), Humphrey and Linke have shown that specially structured nanomaterials can operate at much higher efficiency, perhaps even right up to the Carnot limit. What's more, the nanomaterial's thermoelectric effect is completely reversible, meaning that the application of electricity to the material would allow it to function as a heat-pump, pulling heat out of one end and pushing it to the other. The press release from the Nanoscale Device and System Integration conference (where the breakthrough was presented) is good for non-technical readers; the review of the article in Nature Materials online (free subscription required) is a bit more technical.

    Thermoelectric generation is attractive for a number of reasons, including its utility at a variety of scales (from microscale on up) and its ability to take advantage of energy that would otherwise be wasted as lost heat. But the inefficiency of current thermoelectric technology limits its use. Current thermoelectric applications generally have a "ZT" rating (the measure of temperature-electricity conversion performance) of less than 2; a rating of around 5 is generally considered necessary for economical use. The Humphrey-Linke model has a ZT rating of 10 at room temperature -- twice the level needed.
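For the technically inclined, the standard textbook formula for a thermoelectric generator's maximum efficiency shows why the jump from ZT of 2 to ZT of 10 matters. (This is a generic relation, not a calculation from the paper, and the temperatures below are arbitrary illustrative values.)

```python
import math

def efficiency(zt, t_hot, t_cold):
    """Maximum thermoelectric efficiency for figure of merit ZT,
    between hot- and cold-side temperatures in kelvin (standard formula)."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1 + zt)
    return carnot * (m - 1) / (m + t_cold / t_hot)

# A modest gradient with the hot side a bit above room temperature:
for zt in (2, 5, 10):
    # the fraction of heat converted to electricity roughly doubles
    # going from ZT=2 to ZT=10
    print(zt, round(efficiency(zt, 350, 300), 3))
```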

    So what does this mean?

    It could mean refrigeration without pumps and chemicals, and battery-powered "cold packs" able to maintain a given temperature for as long as the power held out (very useful for transporting sensitive medical supplies). It could mean high-end microchips able to operate at full speed with power-consuming fans replaced by heat-ferrying materials. It could mean very precise control over temperature for lab equipment and sensor technology.

    It could also have significant applications in energy production and transportation. It could make marginal geothermal sites much more useful, opening up myriad new sources of non-polluting generation. It could be matched with photoelectric technology to capture additional electricity from otherwise wasted heat. And it has clear applications in hybrid vehicles, allowing for electricity to be generated from engine heat, providing an additional source for battery power beyond regenerative braking and direct engine recharge.

    The Humphrey-Linke article discusses only the physics of the idea, not the engineering. However, the Nature Materials review of the piece suggests that applications will not require new breakthroughs, and the thermoelectric nanomaterial model should be readily testable using "quantum dots." It's likely that real-world use won't match the maximum theoretical efficiency of the materials, but that's okay -- even if thermoelectric nanomaterial applications are only half as efficient as they could be, they could still be remarkably transformative.

    April 13, 2005

    Plastic Electronics and the Ink-Jet Future

    It's entirely possible that one of the most important technological innovations of the 20th century will turn out to have been the lowly ink-jet printer. As it happens, the technology that makes it possible to squirt minute quantities of ink in precise patterns onto a sheet of paper is perfect for spraying out other materials (such as resin, plastic and even biological tissues), assembling them into solid objects. In some cases, the only significant difference between an experimental fabricator and the cheap printer that came free with a box of cereal is the content of the "ink" cartridge.

    We're rapidly approaching a time when useful objects can be printed out as easily as a photo. Related breakthroughs are happening regularly. The Material Science group at Northwestern University published a paper in the Proceedings of the National Academy of Sciences last month, describing a new, very low voltage thin-film organic transistor material which will allow both inexpensive production and significantly lower power-consumption for plastic electronics.

    "This means having plastic electronics the size of a pen battery -- rather than an automobile battery -- power your cell phone," said [Northwestern Professor of Chemistry Tobin] Marks. "And, instead of being carved out of silicon, transistor structures would be printed in a fashion similar to that of newspapers, but with organic molecules as the ink and plastic as the paper. Much as the New York Times prints a different edition of the newspaper every day, we could flexibly print a wide variety of electronic devices quickly, easily and cheaply."

    We talk about fabbing (aka "3-D printing" and "stereolithography") with alarming frequency here for a few reasons: the necessary technologies are coming together very quickly; it has a significant "open source" potential; and, for both the developed world and the developing world, it has the potential to be seriously worldchanging. Material fabrication using ink-jet technology will be something we'll be dealing with far sooner than many might expect; of the various near-term and medium-term technological and social changes we talk about here, this will be one of the first to hit big. All the more critical, then, that we start thinking now about what we're going to want and need from the ink-jet future.

    So thought-experiment time: what would be required to make a 3-D printing world sustainable? What does "cradle-to-cradle" fabbing look like? What does the capability to print electronics & photovoltaics make possible that we couldn't do before?

    April 19, 2005

    Microscopic Microscope

    This is one of those inventions that sounds as if it was ripped from the pages of an overly-technical science fiction novel: an optical biochip, about the size of a cell, that uses lasers to analyze biological samples and then signal its findings. Researchers funded by the UK's Biotechnology and Biological Sciences Research Council developed the technology as a way of making testing faster and more detailed.

    The research team moved away from the idea that a microscope is something you have to look through to create optical biochips onto which scientists can place biological samples. Special fluorescent chemicals are then used together with tiny light emitting lasers to allow the scientists to analyse the cells or targets within the cells. Researchers can use this capability to examine cellular conditions for certain diseases or to develop new treatments by studying the way cells react to a drug.

    The biochips also raise the possibility of a micro-laboratory, the size of a credit card, which would be able to perform medical diagnostics, improving patient treatment by reducing the number of hospital visits needed for tests.

    We're moving quickly to a point where sophisticated biomedical analysis can be moved from the lab to the field -- that is, from a centralized system to a distributed system. The interesting and valuable aspect of technologies such as this optical biochip isn't just the new feats it specifically can accomplish, but the new paradigm it suggests. Going from centralized systems to peripheral systems has been revolutionary in nearly every instance, from computing to energy to politics; distribution doesn't just replicate the centralized model in multiple locations, it offers up entirely new -- and usually unanticipated -- opportunities.

    Imagine, for example, home medical testing units ("the size of a credit card") able to run a variety of diagnostic tests on cheek swabs, smears of blood, urine, etc. In the traditional model, this information would be sent to one's doctor or hospital, which would then "control" the information (and the patient may not even be able to see the data being sent); the only real difference is that the test happens at home, instead of in a doctor's office. But why would it need to work like that? How about being able to send one's results to multiple locations for analysis, getting multiple "second opinions" simultaneously? As this all would happen over the net, there's no reason to be limited to one's local medical establishment. Hospitals outsource testing; why couldn't patients?

    April 25, 2005

    Microbial Hydrogen Generation

    Although some fear that the hydrogen economy, should it come, will be built atop nuclear power plants, and others hope that solar and wind will provide enough juice to crack hydrogen from water, it may well turn out that the ideal source of hydrogen for fuel cells is the lowly bacteria.

    We've mentioned microbial fuel cells before, tiny powerhouses that generate electricity while cleaning wastewater. But researchers at Penn State have taken the microbial fuel cell off in a new direction, pulling hydrogen out of wastewater at a rate four times greater than the standard fermentation process, and ten times greater than straight electrolysis.

    In their paper, the researchers explain that hydrogen production by bacterial fermentation is currently limited by the "fermentation barrier" -- the fact that bacteria, without a power boost, can only convert carbohydrates to a limited amount of hydrogen and a mixture of "dead end" fermentation end products such as acetic and butyric acids.

    However, giving the bacteria a small assist with a tiny amount of electricity -- about 0.25 volts or a small fraction of the voltage needed to run a typical 6 volt cell phone -- they can leap over the fermentation barrier and convert a "dead end" fermentation product, acetic acid, into carbon dioxide and hydrogen.

    Logan notes, "Basically, we use the same microbial fuel cell we developed to clean wastewater and produce electricity. However, to produce hydrogen, we keep oxygen out of the MFC and add a small amount of power into the system."

    [...] The researchers call their hydrogen-producing MFC a BioElectrochemically-Assisted Microbial Reactor or BEAMR. The BEAMR not only produces hydrogen but simultaneously cleans the wastewater used as its feedstock. It uses about one-tenth of the voltage needed for electrolysis, the process that uses electricity to break water down into hydrogen and oxygen.
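The voltage comparison translates directly into energy: by Faraday's law, the electrical input per mole of hydrogen scales linearly with the applied voltage. A rough sketch, using the 1.23 V thermodynamic minimum for water electrolysis (an outside textbook figure, not from the Penn State paper):

```python
# E = n * F * V, with n = 2 electrons transferred per H2 molecule
F = 96_485  # Faraday constant, coulombs per mole of electrons

def kj_per_mol_h2(volts):
    """Electrical energy input per mole of hydrogen at a given applied voltage."""
    return 2 * F * volts / 1000  # kilojoules per mole of H2

print(round(kj_per_mol_h2(0.25)))  # ~48 kJ/mol for the BEAMR's boost voltage
print(round(kj_per_mol_h2(1.23)))  # ~237 kJ/mol minimum for straight electrolysis
```

The bacteria supply the rest of the energy from the wastewater's own chemistry, which is why the added electricity can be so small.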

    The process does produce CO2, but as it's derived from biomass, the setup is closer to carbon neutral than other carbon dioxide generating methods of distilling hydrogen.

    The big potential here is suggested in the final line of the excerpt: this process is significantly more efficient than straight electrolysis as a means of separating out hydrogen from water. One of the strongest arguments made in support of the use of nuclear plants for hydrogen generation is that the electricity generated by solar and wind will be insufficient to generate enough hydrogen. If this process does in fact work as described, it could be the breakthrough making solar & wind a competitive path to hydrogen generation.

    April 27, 2005

    Suspended Animation

    Suspended animation is a classic science fiction trope. In many stories, travelers on long space trips are put into sealed pods where they sleep away their journey -- and, often, stop aging. Sometimes the suspended animation was explained as an advanced freezing technique, sometimes as elaborate technobabble, and occasionally it was directly compared to hibernation. It was science fiction verging on fantasy, and nobody thought it would be available any time soon.

    Well, get your pajamas ready.

    Researchers at the Fred Hutchinson Cancer Research Center in Seattle have discovered a remarkably simple way to induce an extreme hibernation state in mice (which do not normally hibernate). The mice were exposed to air containing 80 parts per million of hydrogen sulphide -- a gas characteristic of rotten eggs -- resulting in a suspended condition. The state was readily reversed by exposure to normal air, with no ill effects.

    While hibernating, their metabolic rates plummeted by about 90% as their cells dropped their usual demand for oxygen. Core body temperatures dropped from the normal 37°C to 15°C.

    “Once down to around 15°C, instead of 150 breaths per minute, they were down to just a couple of really shallow breaths a minute,” says Roth.

    The researchers are still investigating precisely how the hydrogen sulphide induces this state. Although no human tests are in the works, medical scientists are excited by the prospect of functional, reversible suspended animation. Such a state would be enormously useful in the case of heart attacks and strokes, preventing tissue damage while the patient is transported to a hospital, and for preserving transplantable organs.

    Any use in space travel would be far off, of course. The furthest any human has gone so far has been to the Moon and back, and even a trip to Mars -- the most likely voyage over the next few decades -- would be short enough to handle without hibernation. Still, it's nice to see a widely-recognized bit of science fiction leap into the realm of the possible.

    Sadly, any age-stopping benefits remain purely speculative.

    April 28, 2005

    Low Cost AIDS Monitoring

    While much attention is (rightfully) given to the dilemmas surrounding the provision of AIDS drugs in the developing world, medication is not the only expense incurred in the fight against HIV. Accurate diagnosis and monitoring of AIDS can be costly, and is often only available in large urban hospitals. As individuals can have varying reactions to treatment, accurate measurement of T cells is critical for determining whether an intervention is successful. The cost of technologies to perform those counts -- called "flow cytometry" -- can be a barrier to effective treatment.

    The biotech company Guava Technologies, in cooperation with the AIDS Healthcare Foundation, has developed a new T cell enumeration tool aimed specifically at "resource-limited nations." The "EasyCD4" system is described by Guava as being "20 times more affordable" than standard flow cytometry systems. The entire system is built to reduce costs -- in training, in materials, and in infrastructure:

    Performance of the Guava assays is quite simple, with even novice users learning to use the EasyCD4 method in just a day or two. Testing requires only 10 microliters of whole blood per patient, making the method suitable for use in pediatric as well as adult patients. The Guava EasyCD4 assay also requires far less reagent per sample than other testing methods, dramatically lowering the overall costs of performing the assay. Moreover, the Guava EasyCD4 assay does not require nearly the amount of dedicated laboratory infrastructure. The system also does not require large amounts of buffered water as sheath fluid that are required by conventional flow cytometers. The elimination of the use of sheath fluid also results in less bio-hazardous waste and significantly further reduces the running costs of using the system.

    While we focus on developing vaccinations against and a cure for HIV-AIDS, it's important to remember that treatment is a bigger issue than drugs. The EasyCD4 system looks to be a useful new tool, allowing smaller clinics to provide early diagnosis and ongoing monitoring away from urban centers. This, in turn, has the potential to improve survival rates, as earlier diagnosis means earlier treatment -- and lower cost diagnosis can mean more resources for medication.

    (Via MedGadget)

    May 9, 2005

    Twenty Questions

    Most of us who grew up in the US (and quite probably many outside the US, as well) know of the "Magic 8 Ball." Ask the ball a question, shake it up for a moment, then flip it over: on the glass you'll see the mystical answer to your question.

    Reply hazy, try again.

    While amusing, it's not terribly satisfying. It's more interesting, however, when the ball asks you questions. 20Q plays the "twenty questions" game, wherein a person thinks of a specific thing. The sphere asks the questions, and has 20 shots to narrow down the possibilities; the user can answer "yes," "no," "sometimes," "rarely" and a few other relatively ambivalent responses. Nearly every time -- about 8 times out of 10 -- the ball will arrive at the right answer by the time its 20 questions are up.

    Impressive, to be sure, and a fun example of the increasing sophistication of artificial intelligence software. 20Q uses a neural network with a million synaptic connections. It's based on an online twenty questions game; unlike the toy, the online version continues to learn with each new player. After over 16 million queries, the online 20 questions game is startlingly sharp. While nobody would assert that 20Q is at all sentient, its ability to ferret out the right answer is uncanny. According to an entry on Kevin Kelly's "Cool Tools" list, the online version has 10 million synaptic associations and knows about well over 10,000 objects. At this point, the major factor limiting its continued improvement is the number of people connecting to it who don't speak much English.
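
    The toy's trained network is proprietary, but the underlying logic -- each answer prunes the space of remaining candidates, and a good question splits that space roughly in half -- can be sketched in a few lines. Everything below (the objects, their attributes) is invented for illustration; the point is that twenty well-chosen binary questions can in principle separate over a million (2^20) objects.

```python
# Toy "twenty questions" engine: each object is a set of yes-attributes.
# The real 20Q uses a trained neural network; this greedy splitter just
# illustrates why ~20 binary questions can distinguish a huge catalog.

OBJECTS = {
    "dog":     {"alive", "mammal", "pet"},
    "cat":     {"alive", "mammal", "pet", "climbs"},
    "eagle":   {"alive", "flies"},
    "toaster": {"electric"},
    "kite":    {"flies"},
}

def best_question(candidates):
    """Pick the attribute that splits the remaining candidates most evenly."""
    attrs = set().union(*(OBJECTS[o] for o in candidates))
    return min(attrs, key=lambda a: abs(
        sum(a in OBJECTS[o] for o in candidates) - len(candidates) / 2))

def play(secret, max_questions=20):
    candidates = set(OBJECTS)
    for asked in range(1, max_questions + 1):
        if len(candidates) == 1:
            return candidates.pop(), asked - 1
        q = best_question(candidates)
        answer = q in OBJECTS[secret]          # the "human" answers honestly
        candidates = {o for o in candidates
                      if (q in OBJECTS[o]) == answer}
    return None, max_questions

guess, n = play("cat")
print(guess, "in", n, "questions")
```

    Run against "cat," the sketch converges in two or three questions; 20Q's trick is doing the equivalent over thousands of objects and noisy, sometimes-wrong human answers.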

    The 20Q project, along with ontological knowledge bases like Cyc, demonstrates the importance of broad knowledge about the world for artificial intelligence. The massive database for the Cyc project -- over 300,000 assertions about nearly 50,000 concepts -- makes it possible for Cyc to display a "common sense" understanding of potentially ambiguous situations and natural language phrases. 20Q uses non-hierarchical neural networks while Cyc uses a structured hierarchical knowledge tree, but it's clear that both rely on the same underlying philosophy: more information, with more connections, gives better results. I'd be interested in seeing how well a Cyc routine would do against 20Q.

    Until that day, 20Q will have to put up with guessing what humans are thinking. Warning: it's easy to find that a couple of hours have passed while trying to stump the computer. Trust me.

    May 15, 2005

    Ultra-Long-Life Battery

    One of the problems with spreading environmental sensors far and wide is the need to power them. While most of these sensors are designed to use as little power as possible, few can be run solely on photovoltaics; batteries, therefore, are a necessary component. So what can provide the best power over an extended period?

    It may be tritium. Tritium is an isotope of hydrogen and, yes, it's radioactive. But before you click the comment button, read on.

    Tritium batteries work by absorbing beta-decay electrons in a silicon panel similar to traditional photovoltaics. The concept isn't new, but earlier designs were unable to capture enough electrons to provide a significant amount of power. The new design, figured out by researchers from the University of Rochester, the University of Toronto, Rochester Institute of Technology and BetaBatt, Inc. of Houston, Texas, uses a 3D porous silicon matrix, which gives it vastly increased surface area. Depending upon battery design, tritium batteries can last from at least 12 years of continuous use (roughly the half-life of tritium) to over a century -- a significant improvement over traditional chemical batteries.
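
    As for lifespan, the output of a betavoltaic cell tracks the decay of its tritium supply, which falls by half every 12.3 years or so. A quick sketch of that curve -- the initial output is left as a fraction, since actual wattage depends entirely on the cell design:

```python
# Power from a betavoltaic falls off with the tritium supply itself:
# P(t) = P0 * 2^(-t / t_half).  Tritium's half-life is about 12.3 years.

T_HALF = 12.3  # years

def power_fraction(years):
    """Fraction of the original output remaining after `years`."""
    return 0.5 ** (years / T_HALF)

for yrs in (0, 12.3, 25, 50, 100):
    print(f"{yrs:5.1f} yr: {power_fraction(yrs):.1%} of initial output")
```

    A battery designed with enough initial margin can thus still deliver useful (if much reduced) power after several half-lives, which is where the century-scale figures come from.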

    But what about the safety?

    Continue reading "Ultra-Long-Life Battery" »

    May 19, 2005

    Water of Life

    microcyn.jpgWith the right mixture, water and salt can work wonders.

    Oculus Innovative Sciences, a Petaluma, California-based biomedical company, has developed a formulation of "ion-imbalanced, super-oxygenated" water which is able to kill bacteria, viruses and spores, but leaves multicellular organisms unharmed. But not untouched -- the super-oxygenated water actually speeds healing of severe burns, diabetic ulcers, even necrotic flesh. The product is called Microcyn, and this week it received 510(k) clearance from the FDA as a medical device.

    Super-oxygenated water was first developed in the 1990s in Japan as a means of disinfecting water in nuclear reactors. While early signs suggested it might have medical applications, researchers at the time couldn't figure out how to keep it stable for more than a few days. Prior to the development of Microcyn, hospitals paid hundreds of thousands of dollars for small amounts of super-oxygenated water for limited, time-constrained use. In 2000, Iranian-born biologist Hoji Alimi bought the license for the water, and his company, Oculus, spent the next three years figuring out the stability problem. Unlike earlier super-oxygenated water formulations, Microcyn has a shelf-life of at least a year.

    And it's already transforming trauma care.

    Continue reading "Water of Life" »

    May 24, 2005

    Quantum Dots for Solar Power

    Solar photovoltaic generation of electricity has a big problem: with currently-available technology, it's not terribly efficient. I mean that literally; the "solar constant" is ~1.35 kilowatts of power per square meter, but most off-the-shelf solar panels can only convert about 20-30% of that to electricity. Improvement is clearly possible, and some researchers have figured out ways to boost that efficiency to 50% or more (although some promising developments in flexible, polymer-based photovoltaics are far worse, with only 5-15% efficiency). One of the more interesting approaches involves using selenium "nanocrystals" to boost efficiency to up to 60%. Now researchers at the US Department of Energy's National Renewable Energy Laboratory have pushed that concept to a new potential efficiency peak.
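
    For a sense of what those percentages mean per square meter, here's the arithmetic using the article's ~1.35 kW/m^2 figure. (Note that's a top-of-atmosphere number; ground-level insolation is lower, so treat these as ceilings.) The three efficiency values are single points picked from the ranges quoted above:

```python
# Rough electrical yield per square meter at the efficiencies quoted
# above.  The efficiency points are representative values from the
# quoted ranges, not measurements of any specific panel.

SOLAR_CONSTANT = 1350  # W/m^2, top of atmosphere

for label, eff in [("polymer PV", 0.10), ("off-the-shelf", 0.25),
                   ("nanocrystal target", 0.60)]:
    print(f"{label:>20}: {SOLAR_CONSTANT * eff:.0f} W/m^2")
```

    The jump from ~340 W/m^2 to ~810 W/m^2 is why a 60%-efficient cell would be such a big deal: more than double the output from the same panel area.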

    By using lead sulfide as the nanocrystal -- or "quantum dot" -- material, the NREL team claims a potential efficiency of more than 65%. We've noted various worldchanging applications of quantum dots before (for infrared-sensitive polymer photovoltaics with ~30% efficiency, and for high-efficiency reversible thermoelectric materials), and it's clear that nanomaterial and nanofabrication research will be critical for making solar photovoltaic sufficiently efficient for widespread adoption.

    As usual, the actual article is behind a subscription wall (and the journal is probably too obscure for most non-university libraries), but Alan at @Monkeysign has an excellent dissection of how the technology works, and what challenges remain before it could conceivably make its way to real-world use. I highly recommend checking out his write-up.

    Return of Sonofusion

    In Sonofusion (2004), we talked about Purdue physicists demonstrating that thermonuclear fusion takes place in tiny bubbles in liquids hit by a pulse of neutrons and ongoing acoustic oscillation (i.e., sound); in Son of Sonofusion (2005), we talked about researchers at the University of Illinois Urbana-Champaign confirming aspects of the research, including the extremely high temperatures found in the collapsing bubbles (we're talking much hotter than the surface of the Sun). Now sonofusion returns, hotter and more powerful than ever...

    The Purdue-led team -- Richard Lahey Jr., Rusi Taleyarkhan and Robert Nigmatulin -- have an article in the current IEEE Spectrum Online, the magazine of the Institute of Electrical and Electronics Engineers, a well-regarded and very serious group of technical professionals. In "Bubble Power," Lahey et al. go into substantial detail about how they figured out sonofusion (now referred to as "acoustic inertial confinement fusion") and the current state of their research. The summary version: this looks like it could be surprisingly, startlingly real, with pretty astounding potential.

    Continue reading "Return of Sonofusion" »

    May 26, 2005

    How Much Does DNA Weigh?

    DNA array
    Optical microscope photo (a) shows arrays of cantilevers of varying lengths. (b) Zoomed-in scanning electron microscope (SEM) image of several cantilevers, and (c) Oblique angle SEM image of a single 90nm thick silicon nitride cantilever with a 40 nm circular gold aperture centered 300 nm away from the free end.  Copyright © Cornell University
    The answer: 995,000 Daltons (1 Dalton is about the weight of a single proton or neutron), or a bit over an attogram (10^-18 gram).

    Cornell researchers -- who built a scale last year sensitive enough to weigh a virus -- have refined the system enough to be able to weigh a single DNA molecule. The scale actually measures the frequency of the vibration of a solid object, which will vary with its mass. The "nanoelectromechanical system" (NEMS) has not yet reached the limits of its potential sensitivity.
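
    The unit conversion is worth checking: one Dalton is about 1.66 x 10^-24 grams, so the figure above works out as follows.

```python
# Check the mass figure above: 995,000 Daltons expressed in grams.
# 1 Dalton (unified atomic mass unit) is about 1.66054e-24 g.
# (The NEMS scale itself infers mass from frequency: for a resonator,
# f is proportional to sqrt(k/m), so added mass lowers the pitch.)

DALTON_G = 1.66054e-24  # grams per Dalton
ATTOGRAM = 1e-18        # grams

mass_g = 995_000 * DALTON_G
print(f"{mass_g:.3g} g = {mass_g / ATTOGRAM:.2f} attograms")
```

    Which matches the "bit over an attogram" figure: about 1.65 attograms.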

    So what do you do with a scale that can measure DNA?

    Continue reading "How Much Does DNA Weigh?" »

    June 7, 2005


    Nano-Printing

    nanoprinter.jpgBooks existed well before the invention of the printing press, but they were individual, painstakingly-crafted affairs. The printing press meant that books could be assembled both more easily -- itself a revolutionary development -- and more consistently. Mass production, as we conceive of it today, has its roots in the printing press. Now that concept is set to be applied to the nanoworld.

    Researchers at MIT's Department of Materials Science and Engineering Supramolecular Nano Materials Group have developed a process for "nano-printing," using DNA/RNA information transfer as a mechanism for the mass production of complex organic and inorganic molecular devices.

    In the new printing method, called Supramolecular Nano-Stamping (SuNS), single strands of DNA essentially self-assemble upon a surface to duplicate a nano-scale pattern made of their complementary DNA strands. The duplicates are identical to the master and can thus be used as masters themselves. This increases print output exponentially while enabling the reproduction of very complex nano-scale patterns.

    Continue reading "Nano-Printing" »

    June 8, 2005

    Another Path to Desktop Fusion

    pocketfusion.jpgLooks like sonofusion is going to have some competition. Researchers at UCLA have succeeded in producing a fusion reaction on the desktop using a "pyroelectric" crystal (i.e., a crystal that produces electricity due to temperature changes). The crystal fusion technique is less exotic than sonofusion, but also (apparently) less controversial. The report of the desktop fusion reaction appeared in Nature; as usual, the article itself is subscribers-only, but supplemental material (including two mpeg videos) is freely available.

    The design of the "reactor" is startling in its simplicity: a small pyroelectric crystal (lithium tantalate) rests inside a chamber filled with deuterated hydrogen. Warming the crystal from -30 F to 45 F results in an electrical field of about 100,000 volts across the crystal, which is then concentrated by the insertion of a metal wire tip near the crystal. The result is a neutron flux over 400 times background, about 1,000 neutrons per second -- a characteristic sign of fusion.
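
    Some back-of-the-envelope physics gives a sense of why that field is enough: a deuterium ion falling through 100,000 volts picks up E = qV = 100 keV of kinetic energy. The constants below are standard values; the rest is arithmetic.

```python
# Energy and speed of a deuteron accelerated through the crystal's field.
# E = qV; speed from E = (1/2) m v^2.

import math

E_CHARGE = 1.602e-19       # coulombs (elementary charge)
DEUTERON_MASS = 3.344e-27  # kg

energy_J = E_CHARGE * 100_000
speed = math.sqrt(2 * energy_J / DEUTERON_MASS)
print(f"{energy_J / (E_CHARGE * 1000):.0f} keV, {speed / 1000:.0f} km/s")
```

    At roughly 3,000 km/s, a deuteron slamming into a deuterated target will occasionally fuse -- beam-target fusion, the same principle used in commercial neutron generators -- with each fusion event announced by a neutron.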

    The amount of energy coming from the reaction is much lower than the energy used to produce it, so this is by no means an indicator that limitless fusion energy is just around the corner. Nonetheless, this is a simple way of producing a neutron flux, useful for a variety of tasks, from scanning luggage at airports to tumor removal to microthrusters for tiny satellites. And while it's not likely, the possibility that this breakthrough could eventually lead to energy production can't be ruled out.

    Decent write-ups of the story can be found at Nature News, the Christian Science Monitor, MSNBC, and WBCSD.

    Power Generating Microbes

    Bacteria that can clean up industrial wastes are not new. Bacteria that can generate electricity by feeding on sugars in wastewater are not new. But microbiologists at this week's General Meeting of the American Society for Microbiology report finding a species of bacteria able to both remediate some pretty nasty industrial chemicals and produce usable amounts of electricity while doing so.

    Desulfitobacterium is a recently-discovered genus of "anaerobic dehalogenating" bacteria, meaning they are able to neutralize a variety of chlorinated compounds, solvents and even PCBs. But Charles Milliken and Harold May of the Medical University of South Carolina have found that Desulfitobacterium has an additional useful property: it can generate electricity, and do so by consuming a wide variety of substances:

    Continue reading "Power Generating Microbes" »

    June 10, 2005

    New Generation

    A variety of stories about ideas and technologies for energy generation have popped up recently, and all seem worth paying a bit of attention to. Rather than QuickChange them one at a time, I'm going to mix them all together here and see what results.

    News combining a couple of favorite themes -- environmental megaprojects and the UK -- shows up in plans to build what would amount to the world's biggest wind farm in the Greater Thames Estuary. This 270-turbine, 1-gigawatt project would provide power to a quarter of London's homes, and would be running by 2011. The BBC has details (of course), but James at the Alternative Energy Blog has more. The wind farm would be far enough out not to be visible from the shore, but objections have (predictably) nonetheless been raised. However, Friends of the Earth have come out squarely in favor of the project, and a new study suggests that fears of bird kills by wind turbines appear "over-inflated," and that the observed risk is much lower than previously thought.

    Continue reading "New Generation" »

    June 13, 2005

    Claytronics and the Pario World

    catom.jpgIf you've read the more esoteric nanotechnology treatises, you're undoubtedly familiar with the concept of "programmable matter" -- micro- or nano-scale devices which can combine to form an amazing assortment of physical objects, reassembling into something entirely different as needed. This vision of nanotechnology is light years away from today's world of carbon nanotubes or even the practical-but-amazing world of nanofactories. It shouldn't surprise you, however, to note that -- despite its fantastical elements -- serious research is already underway heading down the path to programmable matter.

    Called "claytronics" at Carnegie-Mellon University, and "dynamic physical rendering" at Intel (which is supporting the CMU work), the synthetic reality project has already made some tentative advances, and the researchers are confident of eventual success. Just how long "eventual" may be is subject to debate.

    According to the Claytronics project's Seth Goldstein and Todd Mowry, programmable matter is:

    Continue reading "Claytronics and the Pario World" »

    June 14, 2005

    Off-The-Shelf DNA Tagging

    Y-DNA.jpgWe've written about "DNA Barcoding" before as a means of identifying species via short strands of unique DNA. But it turns out that there's a different kind of DNA Barcoding out there, one that uses non-expressing DNA strands as tags for monitoring pathogens. A group of researchers at Cornell has just come up with a way to make these tags easier to create, less expensive, and more useful.

    The researchers use "dendrimer-like DNA," short Y-shaped strands of DNA. The DNA employed doesn't actually code for anything; instead, it's used as a physical structure for the tags.

    An antibody or some other molecule that will bind to the molecule to be detected is attached to one of the loose ends of the DNA. To other ends are attached molecules of fluorescent dye in a predetermined pattern.

    Continue reading "Off-The-Shelf DNA Tagging" »

    June 21, 2005

    Engineering with DNA

    dnascaffold.jpgNanoscientists and biochemical engineers are starting to play with DNA not as replicating code, but as physical tools for nanoassembly. Researchers at Arizona State University and at New York University have independently come up with ways to use DNA as a framework upon which to build sophisticated molecules. The work has implications for biochemical sensors, drug analysis, even DNA computing. And while the practical applications of these specific techniques are interesting, their larger importance is as a demonstration of the increasing sophistication of our ability to work at the nanoscale.

    Continue reading "Engineering with DNA" »

    June 22, 2005


    ScalaBLAST

    ScalaBLAST.jpgIt may be a while before it shows up on your desktop, but the Pacific Northwest National Laboratory has just made a massive leap in the ability to sequence genomes.

    ScalaBLAST is a software tool for use with multiprocessor systems, dividing up the work of analyzing biological information. With the PNNL's supercomputer -- number 30 in the latest Top 500 supercomputer list -- the software allows genome sequencing to happen hundreds of times faster than before. Prior to ScalaBLAST, sequencing the DNA of a single organism took 10 days, a remarkably short time compared to the months and years such a process took less than a decade earlier. With ScalaBLAST, the same machine can analyze 13 organisms in 9 hours, or about 42 minutes per organism.
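
    The claimed speedup is easy to check against the figures above:

```python
# Sanity-check the speedup: 10 days per organism before ScalaBLAST,
# 13 organisms in 9 hours after.

before_min = 10 * 24 * 60   # 10 days, in minutes
after_min = 9 * 60 / 13     # minutes per organism with ScalaBLAST
print(f"{after_min:.0f} min/organism, {before_min / after_min:.0f}x faster")
```

    About 42 minutes per organism, and roughly 350 times faster -- "hundreds of times" indeed.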

    The PNNL scientists are enthusiastic about the opportunities this could provide:

    Continue reading "ScalaBLAST" »

    June 29, 2005

    Microbial Nanowires

    Geobacter is quite the interesting genus of bacteria. As extremophiles, they can live quite happily under conditions too toxic for most creatures big or small. Moreover, many Geobacter microbes are able to convert those toxins into materials far less dangerous -- a process referred to as "bioremediation" -- sometimes generating electricity in the process.

    But in trying to better understand how Geobacter is able to do all of this, researchers at the University of Massachusetts at Amherst -- including the original discoverer of the Geobacter line -- stumbled across another remarkable characteristic of these creatures: nanowires. Geobacter is criss-crossed with tiny (3-5 nanometer wide) protein wires able to conduct electrons out of the cell.

    "The remarkable and unexpected discovery of microbial structures comprising microbial nanowires that may enable a microbial community in a contaminated waste site to form mini-power grids could provide new approaches to using microbes to assist in the remediation of DOE waste sites; to support the operation of mini-environmental sensors, and to nano-manufacture in novel biological ways. This discovery also illustrates the continuing relevance of the physical sciences to today’s biological investigations." [...]

    Continue reading "Microbial Nanowires" »

    July 4, 2005


    PIA02123.jpgThis is one of the first pictures of what the inside of a comet looks like.

    On schedule, the Deep Impact "impactor" probe smacked into the surface of comet Tempel 1, resulting in a blast of ejecta visible from Earth. The NASA website has a variety of interesting images and animations, including what the Hubble Telescope saw and a movie of the impact from the point-of-view of the impactor zooming in at over 23,000 miles per hour. The ESA website for its Rosetta probe has more animations and data, including the (expected) detection of water in the ejecta.

    The explosion from the impact was bigger and brighter than expected, which is a very good sign that this mission was entirely worthwhile.

    The Earth has been hit by comets before; it's a rare event, but it will eventually happen again. And while known comets like Tempel 1 have orbits that can be plotted well in advance, not every object emerging from the cometary cloud surrounding our solar system has such a well-charted orbit. If we're going to be able to prevent a comet from hitting us, we need to know what we're dealing with. Comets don't have the same composition as asteroids, and it's possible that the plans for pushing an asteroid out of the way wouldn't work as well with a comet. The more we know about what they're made of, the better the chance we'll have of preventing planetary disaster.

    July 11, 2005

    The Net of Life

    netoflife.jpgIt's generally understood how genetic information is spread: from an organism to its offspring. For this reason, the traditional representation of evolution and the relationship between organisms is portrayed as a "tree," with an increasing number of branches emanating from the origin. But it turns out that this well-understood structure doesn't apply to microbes.

    Bacteria don't just transmit genetic information to their offspring, they also exchange genes with other bacteria -- even bacteria from different species. This "horizontal gene transfer" happens with great regularity, and is responsible for nearly 10% of the "gene transfer events" in bacterial evolution. As a result, the relationship between different microbial species is closer to a network than a tree, with otherwise distant types connected through gene swapping. Now researchers at the European Bioinformatics Institute (EBI) have determined what that bacterial network looks like -- and it turns out that the bacterial horizontal gene transfer network (PDF) has a lot in common with the world wide web.
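
    The network-versus-tree point is easy to make concrete: treat each species as a node and each detected transfer as an edge, then look at how the connections are distributed. The species names and edges below are invented (the EBI team's actual data is in the linked PDF); the pattern to notice is the web-like one, where a few hub nodes carry most of the connections:

```python
# Toy version of a gene-sharing network: species are nodes, a detected
# horizontal transfer between two species is an edge.  Counting each
# node's "degree" (number of transfer partners) reveals the hubs.

from collections import defaultdict

transfers = [
    ("E. coli", "Salmonella"), ("E. coli", "Shigella"),
    ("E. coli", "Vibrio"), ("E. coli", "Bacillus"),
    ("Salmonella", "Shigella"), ("Vibrio", "Synechocystis"),
]

degree = defaultdict(int)
for a, b in transfers:
    degree[a] += 1
    degree[b] += 1

for species, k in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(f"{species:>15}: {k} transfer partners")
```

    In a tree, every species would link only to its ancestor and descendants; here, one hub touches most of the network directly -- the same hub-dominated shape seen in links between web pages.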

    Continue reading "The Net of Life" »

    July 13, 2005

    Saving People by Vaccinating Pigs

    Cysticercosis kills upwards of 50,000 people every year, nearly all in the developing world; those who are not killed suffer symptoms ranging from epileptic seizures to blindness, as the disease attacks the brain and nervous system. Millions of people suffer from the disease, which is caused by tapeworms. (Here's the fact sheet from the US Centers for Disease Control.) But the life-cycle of the parasite requires an intermediate host -- pigs -- and that turns out to be the ideal point of intervention.

    New Scientist reports on the work of researchers at the University of Melbourne, Australia. The scientists have developed a vaccine that triggers the production of antibodies in pigs that cause the eggs of the tapeworm to burst before developing further. The work has been remarkably effective:

    Small-scale trials, in which eggs isolated from adult tapeworms were fed to up to 30 pigs, have already been conducted in Mexico, Peru, Cameroon and Honduras. The vaccine provided between 99.5 and 100 per cent protection in every trial. The Melbourne researchers, together with collaborators in Lima, Peru, now have plans for larger field trials in which the pigs will be allowed to forage as normal, they reported at a conference on parasitology in Melbourne last week. At the moment, two vaccinations about one month apart provide several months of immunity. The team's aim is to provide lifelong immunity with one or two shots, though they say the vaccine will still be beneficial even if it has to be given yearly.

    Although the same type of vaccine could work in humans, it's less costly -- and much faster -- to develop and deploy animal vaccinations. Existing tapeworm infections in humans can be treated effectively (if caught soon enough); by breaking the cycle in pigs, the spread of the disease can be prevented.

    Aside from being of enormous benefit in the developing world, this is a good example of the value of looking at the ecology of infection.

    July 15, 2005

    H2 Nanostorage

    Good news, everyone! Because of unanticipated quantum-level effects in the interaction between carbon and hydrogen, the absorption capacity of carbon nanostructures (particularly nano-graphite platelets) is greater than previously modeled. A graphite-based physisorption system should therefore be able to meet the US Department of Energy standard of 62 kg/m3 volume density and 6% mass ratio of H2 storage.

    Everybody follow that?

    Okay, I'll try again. Read on for the extended explanation.
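
    In the meantime, the DOE figures quoted above reduce to a simple two-part test -- volumetric density and gravimetric fraction. The storage-bed numbers in the example below are invented, purely to show how the test works:

```python
# The DOE hydrogen-storage targets quoted above, as a pass/fail check.
# The sample figures for a hypothetical graphite storage bed are made up.

DOE_VOLUME_DENSITY = 62  # kg of H2 per m^3 of storage system
DOE_MASS_FRACTION = 0.06 # H2 as a fraction of total system mass

def meets_doe(h2_kg, bed_volume_m3, bed_mass_kg):
    volume_density = h2_kg / bed_volume_m3
    mass_fraction = h2_kg / (bed_mass_kg + h2_kg)
    return (volume_density >= DOE_VOLUME_DENSITY
            and mass_fraction >= DOE_MASS_FRACTION)

# e.g. 3.2 kg of H2 held in a 50-liter, 48-kg graphite bed:
print(meets_doe(3.2, 0.05, 48))
```

    The hard part, of course, is the chemistry and physics of getting that much hydrogen to stick -- which is where the quantum-level carbon-hydrogen effects come in.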

    Continue reading "H2 Nanostorage" »

    July 19, 2005

    Detecting and Monitoring Disease

    cd4monitor.jpgIt's often tempting to think about the microchip revolution in terms of computers and communication devices, bits of machinery with functions obviously derived from their digital components. But microprocessors have had some of their most worldchanging effects in the realm of biomedicine, and not just in terms of faster computer sequencing of DNA. Two more examples of this have come up this week: one to monitor the progress of HIV infections, and the other to keep watch for pathogens of all types. Both will see the greatest use -- and greatest impact -- in the developing world.

    The July 2005 issue of PLoS Medicine includes an article describing the development of a new tool for counting CD4 lymphocytes in the blood of people infected with HIV. CD4 count is a key measure for monitoring the progress (or stability) of HIV/AIDS, but the lack of inexpensive, easy-to-use equipment has hindered the ability of doctors in the developing world to keep an eye on the health of infected patients. Existing "flow cytometry" equipment is expensive and fragile, and even cheaper, more rugged versions still have reagent and training costs. The authors (from Massachusetts General Hospital, Harvard Medical School, Brigham and Women's Hospital, University of Texas, and Botswana-Harvard AIDS Institute Partnership) have developed and tested a prototype of a low-cost system based on "low-cost microfabrication, efficient light sources, and affordable microelectronics and digital imaging hardware" along with inexpensive "microfluidic" chips.

    In tests, their cheap system performed as well as far more expensive flow cytometers. What's more, the design, with further refinement, could be made as a hand-held monitor:

    Continue reading "Detecting and Monitoring Disease" »

    July 27, 2005

    Optoelectronic Tweezers

    optoelectronic.jpgResearchers at UC Berkeley have developed a method of sorting and separating microparticles and cells using little more than a bright light and a photosensitive surface. The system, referred to as "optoelectronic tweezers," can control literally thousands of particles in parallel, and allows for the differentiation between living and dead cells. The system can even be used to move particles around on conveyor-belt-like paths.

    The idea is similar to that used in the ubiquitous office copier machine. In xerography, a document is scanned and transferred onto a photosensitive drum, which attracts carbon toner particles that are rolled onto a piece of paper to reproduce the image.

    In this case, the researchers use a photosensitive surface made of amorphous silicon, a common material used in solar cells and flat-panel displays. Microscopic polystyrene particles suspended in a liquid were sandwiched between a piece of glass and the photoconductive material. Wherever light would hit the photosensitive material, it would behave like a conducting electrode, while areas not exposed to light would behave like a non-conducting insulator. Once a light source is removed, the photosensitive material returns to normal.

    Continue reading "Optoelectronic Tweezers" »

    July 28, 2005

    Nanotech vs. Cancer

    nanocell.jpgBroadly put, there are two approaches to fighting cancers: chemotherapy, which is highly toxic to both cancerous and healthy cells; and anti-angiogenesis, which attacks cancers by cutting off their blood supply. Both have drawbacks -- chemo can kill much more than cancer cells, and cancers can develop resistance to it, while anti-angiogenesis can trigger cancer survival responses such as metastasis -- and while they are in principle complementary, the very blood vessels cut off by anti-angiogenesis are those needed to deliver the iterative rounds of chemotherapy.

    But a new nanotechnology-based method, devised by MIT's Biological Engineering Division, manages to bring the two approaches together, resulting in a cancer treatment far more effective than anything currently in use.

    "We designed the nanocell keeping these practical problems in mind," he said. Using ready-made drugs and materials, "we created a balloon within a balloon, resembling an actual cell," explains Shiladitya Sengupta, a postdoctoral associate in Sasisekharan's laboratory. [...]

    Continue reading "Nanotech vs. Cancer" »

    July 29, 2005

    Welcome, 2003 EL61

    2003EL61.jpgIn March of 2004, we offered a hearty "welcome" to Sedna, then held up as a candidate for the solar system's 10th planet. Although ultimately Sedna was relegated to non-planet status, over the last year we've learned a bit more about that most distant body -- it's not a Kuiper Belt Object, for example, but most likely a member of the Inner Oort Cloud, and the moon it was thought to have (because of its slow rotation) seems mysteriously missing.

    We've continued to keep a watch for objects in orbit around the Sun that could possibly be called a "planet" -- and it turns out that the most recent candidate has been hiding in plain sight. 2003 EL61 (it has yet to be given a formal name), although officially announced yesterday, was spotted initially in 2003, and images of it have been found in archives as far back as 1955. A newly-identified member of the solar system may not be immediately worldchanging, but provides a useful lesson in how science works and how we've come to understand our corner of the universe.

    Continue reading "Welcome, 2003 EL61" »

    Welcome, 2003 UB313

    2003 UB313Okay, enough.

    I admit I was a bit curious as to why Caltech's Michael Brown could be so blasé about being scooped on the announcement of 2003 EL61. After all, planet-type bodies aren't found every day... or maybe they are.

    This afternoon, Dr. Brown announced that his team, too, had found something new -- and what they found is pretty special. 2003 UB313 -- its real name is still pending approval by the International Astronomical Union -- is now being called the "tenth planet" by NASA. It's no wonder; 2003 UB313 is bigger than Pluto, closer in size to the Earth's moon (which is actually one of the biggest moons in the solar system, and at 2,100 miles in diameter, half again the size of Pluto). It's also at something of an odd angle, about 44 degrees off the ecliptic (the plane at which most planets orbit). It's currently at close to its peak distance from the Sun, 97 times the distance from the Earth to the Sun (a distance known as an AU); when it swings in close -- in another 500 years or so -- it will be only around 38 AUs away, not much farther than Pluto.
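
    Those two distances are enough to estimate the orbit. The semi-major axis is the average of the near and far points, and Kepler's third law gives the period: for a body orbiting the Sun, the period in years is the semi-major axis in AU raised to the 3/2 power.

```python
# Estimate 2003 UB313's orbital period from the distances quoted above.
# Kepler's third law (Sun-centered orbits): P[years] = a[AU] ** 1.5

aphelion, perihelion = 97, 38    # AU, from the figures quoted above
a = (aphelion + perihelion) / 2  # semi-major axis: 67.5 AU
period = a ** 1.5
print(f"a = {a} AU, period about {period:.0f} years")
```

    Roughly five and a half centuries per orbit -- which squares with the 500-year wait mentioned above before it next swings in close.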

    Continue reading "Welcome, 2003 UB313" »

    August 1, 2005

    New Tool for DNA Sequencing

    DNA-sequencing.jpgInnovation often comes in unexpected ways. We posted recently about the ScalaBLAST system, able to run genome sequences in under an hour, using one of the top 50 supercomputers on the planet. An enormous leap, certainly -- 9 hours is the current sequencing standard -- but its reliance on a supercomputer puts it out of the reach of medical professionals for a few years yet, at best. But it turns out that brute force isn't the only option.

    Researchers at 454 Life Sciences have developed a method of running genome sequences that takes far less time than the current standard method, at a far lower cost -- and doesn't require a supercomputer. Instead, it relies on fireflies.

    The machine uses the chemistry of fireflies to generate a flash of light each time a unit of DNA is correctly analyzed. The flashes from more than a million DNA-containing wells, arrayed on a credit-card-sized plate, are monitored by a light-detecting chip, of the kind used in telescopes to detect the faintest light from distant stars. Then, they are sent to a computer that reconstructs the sequence of the genome.

    The main downside to this technology is that it's currently limited to DNA fragments of about 100 base pairs in length, far short of what the traditional method can accomplish. This means that, until the technology is able to read longer segments of DNA, it will likely be limited to the sequencing of organisms with smaller or simpler genomes. This still has significant value, as the tropical disease sequencing we posted about recently can attest.
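
    The flash-per-incorporation scheme is easier to see in miniature. This is a deliberately stripped-down sketch -- no luciferase chemistry, no error handling, and an assumed fixed flow order -- of how cycling the four bases over a template strand and recording the flashes recovers the sequence:

```python
# Toy version of the light-flash sequencing scheme described above:
# bases are flowed over the template in a fixed cycle, and a "flash" is
# recorded whenever the flowed base extends the strand -- with brightness
# proportional to how many identical bases are incorporated at once.

def sequence_by_flashes(template, flow_order="ACGT", max_flows=400):
    pos, read = 0, []
    for flow in range(max_flows):
        base = flow_order[flow % 4]
        run = 0
        while pos < len(template) and template[pos] == base:
            run += 1
            pos += 1
        if run:                    # a flash: `run` copies incorporated
            read.append(base * run)
        if pos == len(template):
            break
    return "".join(read)

print(sequence_by_flashes("GATTACA"))
```

    Notice that the read is built from short runs, one flash at a time -- which is also a hint at why the real machine tops out at fragments of roughly 100 base pairs: flash intensity and chemistry get harder to interpret as the reads grow.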

    If it takes another few years for the firefly method to reach a point that allows it to reliably sequence human DNA, what we'll see is a race between that and the brute force method -- on Moore's Law-cheap desktop computers -- as a tool for the hospital. A few years after that, it'll be a tool for any doctor's office. And a few years after that, something one could even get for the home. By the 2020s, you'll be getting DNA sequencing systems as the surprise in a box of cereal.

    August 12, 2005

    Plant Stress and Changing Environments

We know that when animals undergo stress, biochemical changes result, some of which may be situationally useful (such as increased alertness) and some of which may be deleterious if too frequent or too persistent (such as increased blood pressure). It turns out that plants have a biochemical reaction to stress, as well. Stress, for plants, tends to mean environmental conditions outside the range for which they evolved -- too hot, too cold, insufficient sunlight or moisture or CO2, and so on. A typical plant response to such conditions is to shut down its own metabolism, to stop growing and to stop producing seeds and pollen.

    It's possible, in principle, to increase a plant's ability to withstand harsh environmental changes, either through traditional hybridization, smart breeding or, for more extreme cases, genetic modification. But such changes don't necessarily mean increasing a plant's tolerance for stress; a modified plant could still undergo metabolic shutdown in environmental conditions it could otherwise survive easily.

Botanist Wendy Boss and microbiologist Amy Grunden of North Carolina State University have come up with a way to increase a plant's ability to handle stress by suppressing the plant's stress chemicals. This is done through the introduction of genes from Pyrococcus furiosus, an extremophile undersea microbe that regularly gets pushed from superheated volcanic vents to sub-freezing open water and back -- and survives without injury or stress. Such "anti-anxiety" modification is a necessary first step toward plants engineered to survive extended droughts, radical temperature shifts... or greenhouses on Mars.

    Continue reading "Plant Stress and Changing Environments" »

    EMAS -- Concrete to Save Your Life

    Anyone who has driven through mountain passes has seen them: signs indicating an upcoming "runaway truck ramp," followed by what looks to be an off-ramp made entirely of sand. The logic of the runaway truck ramp is very simple -- if the truck can't stop itself, let physics take over by having the moving truck sink into the loosely packed surface. But as the Air France jet that ran off the runway in Toronto last week demonstrated, it's not just trucks that could be helped by getting bogged down.
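The arrestor-bed logic is plain constant-deceleration kinematics: stopping distance is d = v² / 2a. A hedged sketch (the 0.6 g deceleration is an illustrative assumption on my part, not an EMAS or truck-ramp specification):

```python
# Stopping distance in a deformable arrestor bed, modeled as constant
# deceleration: d = v^2 / (2 * a). The 0.6 g figure is an illustrative
# assumption, not a published EMAS specification.

G = 9.81  # m/s^2

def stopping_distance(speed_mps, decel_g=0.6):
    """Distance needed to stop from speed_mps at a constant deceleration."""
    return speed_mps ** 2 / (2 * decel_g * G)

# A jet leaving the runway end at 40 m/s (~78 knots):
print(f"{stopping_distance(40):.0f} m")  # roughly 136 m at 0.6 g
```

The quadratic dependence on speed is the whole design problem: halving the overrun speed cuts the required bed length by a factor of four.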

EMAS -- Engineered Materials Arresting System -- is a form of foamed concrete used to safely slow and stop aircraft that have overrun the end of a runway. According to the US Federal Aviation Administration, "[the] material deforms readily and reliably under the weight of an overrunning aircraft and the resulting drag forces decelerate the aircraft to a safe stop." Under test since 1999, EMAS has gone through a major design revision to better withstand the "jet blast" from aircraft taking off successfully. Now installed in 14 locations in the US, EMAS has had numerous successful uses:

    Continue reading "EMAS -- Concrete to Save Your Life" »

    August 15, 2005

    Urine-Catalyzed Battery

Inexpensive, easily-made medical sensors and disposable testing kits need inexpensive, easily-made power sources, but until now, these have been difficult to come by. "DNA Chip" type biosensors and the like don't need a lot of power, but do need some. With that in mind, a research team at Singapore’s Institute of Bioengineering and Nanotechnology led by Dr Ki Bang Lee has devised a cheap-to-make paper battery that uses the fluid being tested -- urine -- as a power catalyst.

    The battery unit is made from a layer of paper that is steeped in copper chloride (CuCl) and sandwiched between strips of magnesium and copper. This “sandwich” is then held in place by being laminated, which involves passing the battery unit between a pair of transparent plastic films through a heating roller at 120ºC. The final product has dimensions of 60 mm x 30 mm, and a thickness of just 1 mm (a little bit smaller than a credit card).

Writing in the Journal of Micromechanics and Microengineering, Lee describes how the battery was created and quantifies its performance. Using 0.2 ml of urine, the team generated a voltage of around 1.5 V with a corresponding maximum power of 1.5 mW. They also found that the battery's performance characteristics -- voltage, power, duration -- can be tuned by changing the geometry or materials used.

Notably, 0.2 ml of urine sustained battery output for over 15 hours at a 10 ohm load, and a second 0.2 ml drop gave another 15 hours of runtime at roughly the same voltage. Urine tests are widely used for medical diagnoses, as concentrations of various chemicals (such as glucose) can accurately reflect the body's health (this also might make drug testing via urinalysis more common, although this possibility was not discussed in the research). The question that comes of this (for me, at least) is how scalable the battery system might be. How much power could one get out of a typical day's urine output? Could an inexpensive "paper battery" be part of emergency kits -- "Just Add Pee" -- or useful in relief efforts?
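As a very rough answer to that scaling question, here is a back-of-envelope calculation using the paper's own figures plus one assumption of mine (a typical adult produces about 1.5 liters of urine a day):

```python
# Back-of-envelope scaling for the post's question, using the paper's
# figures (15 hours of output per 0.2 ml drop at a 10 ohm load) plus one
# assumption: a typical adult produces roughly 1.5 liters of urine a day.

DROP_ML = 0.2
HOURS_PER_DROP = 15.0
DAILY_OUTPUT_ML = 1500.0  # assumed daily urine output

drops_per_day = DAILY_OUTPUT_ML / DROP_ML
cell_hours_per_day = drops_per_day * HOURS_PER_DROP
print(f"{drops_per_day:.0f} drops/day -> {cell_hours_per_day:.0f} cell-hours")
# A day's output could activate thousands of cells -- though each cell's
# magnesium anode is consumed too, so electrodes, not urine, would be the
# limiting factor for any "Just Add Pee" emergency kit.
```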

    Dr. Lee's paper will be available for free online for the next 30 days. Download of the paper requires free site registration at Institute of Physics’ Journal of Micromechanics and Microengineering.

    Approaching Geckomimesis

In January, we noted the work of biophysicist Kellar Autumn on an improved understanding of how gecko feet can exhibit such incredible adhesive power, yet leave no residue and even clean themselves during use. Gecko feet have complex microstructures known as setae and spatulae (tiny hairs and the fringes splitting off from them) that use the Van der Waals force to let geckos stick to just about anything. Autumn's ultimate goal is to devise an artificial analogue to gecko adhesion -- in his words, not just the glue of the future, but the screw of the future: a dry, residue-free adhesive that is ultra-strong yet readily detached as needed, and that works in vacuum, underwater, and on any surface.

    But Autumn isn't the only one working on that goal.

On Friday, a research team led by University of Akron polymer scientist Dr. Ali Dhinojwala announced the development of artificial setae and spatulae made of multi-wall carbon nanotubes. What's more, the geckomimetic tubes demonstrate an adhesive force substantially stronger than that of gecko feet.
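For a sense of scale, here is a rough upper bound built from figures commonly cited from Autumn's single-seta measurements; both numbers are assumptions on my part, not from this post:

```python
# Rough upper bound on whole-gecko adhesion, using figures commonly cited
# from Autumn's single-seta measurements. Both numbers are assumptions
# here: ~6.5 million setae per gecko, ~200 micronewtons peak per seta.

SETAE_PER_GECKO = 6.5e6
PEAK_FORCE_PER_SETA = 200e-6  # newtons

total_force = SETAE_PER_GECKO * PEAK_FORCE_PER_SETA
supported_mass = total_force / 9.81  # kg, if every seta engaged at once
print(f"~{total_force:.0f} N total, enough to hold ~{supported_mass:.0f} kg")
```

Real geckos never engage every seta at once, which is exactly the point: the adhesion is enormously over-provisioned, and attachment and release are controlled by the angle of engagement rather than by the strength of the bond.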

    Continue reading "Approaching Geckomimesis" »

    August 19, 2005

    Ribbons, Sheets and the Nanofuture

This is likely the biggest technological breakthrough of the year, arguably even of the decade.

A team of researchers from the University of Texas, Dallas, and Australia's CSIRO has come up with a way to make strong, stable macroscale sheets and ribbons of multiwall nanotubes at a rate of seven meters per minute. These ribbons and sheets, moreover, already display -- without optimization of the process -- important electronic and physical properties, making them suitable for use in an enormous variety of settings, including artificial muscles, transparent antennas, video displays and solar cells, among many others. The breakthrough was announced in the latest edition of Science. As usual, the article itself is behind a subscriber-only wall, but the abstract and supplementary information are available with a free site registration. The press release from UTD (carried by Eurekalert) provides abundant information; an article in the UK Guardian gives additional detail.

    If you've followed the developments of macro-scale materials made with nanotubes, you'll understand just how enormous a development this is. Previous "sheets" were small and took hours to produce via a liquid-assembly process. This technique allows a meter-long, five centimeter-wide ribbon to be created in seconds. The Science supplemental material page has a link to a video of this in action -- the image above right is a screencap from that video -- and the speed at which the carbon nanotube ribbon is produced is just amazing.

But as startling as the production speed is, it pales in comparison to the material's properties. To start with, the measured gravimetric strength of the nanoribbons -- again, this is the unoptimized version -- already exceeds that of steel and of high-strength fibers such as Kevlar. Moreover:

    Continue reading "Ribbons, Sheets and the Nanofuture" »

    September 1, 2005

    Getting Smarter About Genetic Modification

    One of the reasons that many people with solid scientific backgrounds have concerns about the use of genetic modification techniques in agriculture is the degree to which the potential for unintended consequences seems to be downplayed. This is especially true when microbial genetic material is involved. Unlike more complex organisms, bacteria can spread changes to their genomes through methods other than traditional reproduction; "horizontal gene transfer" across species is said to be responsible for up to 10% of evolutionary changes to bacterial genes. The use of microbial DNA in agricultural biotechnology risks increasing the possibility of horizontal transfer, particularly between bacterial species that don't normally have contact.

    Given the overwhelming concern these days about bacteria acquiring antibiotic resistance, one would expect that this would be an area that plant bioengineers would be extra-careful about. One would be wrong. It turns out that genes for a particular type of antibiotic resistance are part of many plant modifications, for reasons explained below. Fortunately, researchers in Tennessee have come across a set of plant genes that impart similar resistance to antibiotics, but cannot be transferred -- horizontally or otherwise -- to bacteria. The adoption of this technique won't allay all reasonable concerns about GMOs, but it would go a long way to preventing one particularly nasty outcome.

    Continue reading "Getting Smarter About Genetic Modification" »

    September 7, 2005

    WorldChanging Nanotechnology

"Nanotechnology" gets a great deal of attention these days, including here at WorldChanging, and for good reason. The ability to create materials and operate machines that have useful properties at the nano-scale (about a billionth of a meter, or roughly the size of molecules) has the potential for dramatic changes in realms as diverse as energy production, medical science, and even adhesives, among many others. Increasingly, governments, companies and NGOs around the world recognize the possibilities arising from these new technologies, and many have noted the particular applicability of nanotechnologies to the needs of the developing world -- including leaders in the developing nations themselves.

    But not all nanotechnologies are alike; the range of innovations encompassed under the umbrella of "nanotechnology" is even greater than the difference between "micro-scale" technologies such as antibiotics and printed circuits. Although some nanotechnology specialists may quibble, I tend to split the concept of nanotechnology into three general categories. There are differences in complexity between the categories, but more important are the differences in use.

    Read on for a discussion of the various types of nanotechnologies, including examples pulled from research announcements made over the last day or two, along with examinations of both their possible benefits and their potential risks.

    Continue reading "WorldChanging Nanotechnology" »

    September 28, 2005

    Satellite of La Mancha

The question of how to respond to warnings that a Near Earth Object (NEO) was on an eventual collision course with our home planet is a minor recurring theme here at WorldChanging. In a way, it's one of the clearer issues that we grapple with -- there are no questions of human culpability or poor planning decisions to make the problem more complex. The reality is that our planetary neighborhood is pretty dangerous, and that one day, one of the thousands upon thousands of asteroids and comets swirling about our solar system is going to have Earth's name on it.

But it's something that, given enough warning, we could act to avert. We've written before of various calls for NASA to put together response plans, but haven't said anything about other space programs getting into the act. This was a mistake, as it appears that the ESA -- the European Space Agency -- is taking the issue of NEO impact very seriously, and is prepping the first in a series of missions to study what it would take to deflect an oncoming asteroid. The mission, named Don Quijote, will comprise two satellites working together: Hidalgo and Sancho (Jon first alerted us to plans for this mission back in July, 2004!).

    Continue reading "Satellite of La Mancha" »

    September 29, 2005

    Nanotubes For Hydrogen Extraction

Researchers at North Carolina State University have discovered a way to crack hydrogen from water using heat that requires about half the energy of previous methods. The reason? Defective carbon nanotubes. The research is to be published in tomorrow's Physical Review Letters.

The team, led by physicist Dr. Marco Buongiorno-Nardelli, found that naturally-occurring defects in carbon nanotubes could increase the rate of certain chemical reactions because the atoms forming the tubes are essentially "incomplete," making them more reactive.

    Because of this, a temperature-based method of cracking hydrogen from water, which normally requires the water to be heated to 2,000° C, can take place at significantly lower temperatures.

    “We studied water for many months and ran many different calculations, and we ended up showing that if you want to break a water molecule, you spend a lot less energy if you do it on this defective carbon material than if you do it by simply heating the molecule until it breaks,” Buongiorno-Nardelli said. “You can reduce the energy necessary by a factor of two – you can do it at less than 1,000 degrees.”

As is typical for such breakthroughs, what's demonstrated in the lab may never make it to real-world use. The Buongiorno-Nardelli method, although requiring less energy, cannot yet be done in commercially viable quantities. The next step for the NCSU team is to collaborate with engineers working with nanoscale devices to design and build "nanoscale reactors" that would allow cost-effective hydrogen production. In principle, however, this method would make it possible to crack water efficiently without needing extreme temperatures; high-temperature hydrogen extraction is sometimes cited as a justification for building more nuclear reactors.

    Okay, all together: "Oh carbon nanotube, is there nothing you cannot do?"

    (Via Green Car Congress)

    October 6, 2005

    Sequencing the Killer Flu

Sometimes, the universe has excellent timing. It's Pandemic Flu Awareness Week, and what should we get but two major scientific papers detailing the biology of the virus behind the greatest pandemic the planet has ever seen.

    In 1918, when the total world population was about 1.8 billion people, a strain of virus known as the "Spanish Flu" infected about 20% of the planet, killing 50 million people before burning itself out. While records of what happened to the victims are abundant, we've had no clear idea of exactly what kind of virus killed so many -- until now. Scientists at the US Armed Forces Institute of Pathology in Rockville, Maryland led by Dr. Jeff Taubenberger have managed to piece together and sequence the entire genome of the virus known as H1N1 (the research is in the current edition of Nature); with the complete sequence in hand, Dr. Terrence Tumpey at the US Centers for Disease Control in Atlanta was able to recreate the virus for study, as reported in the latest edition of Science. What the biologists found was unexpected, troubling -- and potentially the key to fighting the next influenza pandemic.

    Continue reading "Sequencing the Killer Flu" »

    October 15, 2005

    The Cancer Nanobomb

When I first saw the link to this story, I thought it was yet another "illuminated nanospheres zap cancer" report; after all, lots of places appear to be working on that technology, so we should start seeing more results in the months to come. This one turns out to be different, however: University of Delaware researchers have put together a system that quite literally blows apart cancerous tissue with minimal damage to nearby healthy cells. The research was reported in the journals NanoBiotechnology and Oncology Issues.

    Dr. Balaji Panchapakesan, assistant professor of electrical and computer engineering, and his team at the University of Delaware were looking at the optical and thermal properties of carbon nanotubes, trying to find ways to improve their use as drug delivery mechanisms.

As they undertook various experiments, however, the team made a startling discovery. “When you put the atoms in different shapes and forms, they take on different properties at the nanoscale,” Panchapakesan said. “We were experimenting with the molecules and considering optical and thermal properties, and found we could trigger microscopic explosions of nanotubes in a wide variety of conditions.”

    Continue reading "The Cancer Nanobomb" »

    October 23, 2005

    White Light, Less Heat

Please note that this article has been updated from its original text, correcting a couple of mistakes. -- Jamais

An accidental discovery at Vanderbilt University may well be the key to making light-emitting diodes the dominant lighting technology of the century. Up until very recently, the only way to make "white" LED light was to add yellow phosphors to bright blue LEDs. It wasn't quite right, though, as even the best "white" LED retained a blue tint. This week, we got the news that a chemistry grad student at Vanderbilt has stumbled on a way to make broad-spectrum white LEDs using quantum dots -- and in doing so, he may well have kicked off a revolution.

    Michael Bowers was making quantum dots, tiny nanocrystals just a few dozen atoms across. Crystals at that scale often have unusual properties, and the ones that Bowers created were no exception. When he illuminated his batch with a laser, rather than the blue glow he expected, out came a rich white light, similar in spectrum to sunlight.

    Bowers then took a polyurethane sealing liquid, mixed in some of his dots, and coated a blue LED. Although the resulting bulb -- pictured above -- is crude, it puts out white light. Its visible spectrum is similar to a typical incandescent bulb, but it puts out twice the light-per-watt, and lasts fifty times longer. One key reason for its efficiency is that it doesn't put out the infrared light typical of a regular light bulb; despite being much brighter, it's still far cooler to the touch. (The LED assembly still gets hot, however.) Completely by accident, Bowers had come up with a technology that possessed the quality of incandescent light, but none of its drawbacks.
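To put "twice the light per watt, fifty times the life" in rough numbers, assume a typical 60 W incandescent baseline of about 15 lumens per watt and a 1,000-hour life (both assumed baselines on my part, not figures from the article):

```python
# Putting "twice the light per watt, fifty times the life" in numbers.
# Assumed baseline (not from the article): a 60 W incandescent bulb at
# ~15 lumens per watt with a ~1,000 hour life.

INCAND_LM_PER_W = 15.0
INCAND_HOURS = 1000.0

led_lm_per_w = 2 * INCAND_LM_PER_W  # doubled light-per-watt
led_hours = 50 * INCAND_HOURS       # fifty times the life: ~50,000 hours

target_lumens = 60 * INCAND_LM_PER_W             # a 60 W bulb's light output
watts_led = target_lumens / led_lm_per_w         # power for the same light
kwh_saved = (60 - watts_led) / 1000 * led_hours  # over the LED's lifetime
print(f"{watts_led:.0f} W for the same light; {kwh_saved:.0f} kWh saved")
```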

    Continue reading "White Light, Less Heat" »

    November 1, 2005

    Heavy Metal Bioremediation

Bioremediation is the process of using living organisms -- typically plants or microbes -- to remove toxic material from the environment. Previous examples we've noted include seaweed cleaning up DDT, bacteria removing uranium from groundwater around weapons production sites, and tumbleweeds removing uranium from the soil. In most cases, the bioremediation takes advantage of a natural process within the selected organism.

Most, but not all: researchers at Peking University's College of Life Sciences in Beijing are working to bioengineer tobacco to bioremediate heavy metals from the soil, and algae to remove metals from water. This doesn't use a natural feature of the organisms, however. Instead, it uses a rat gene involved in the creation of a protein in rat livers that binds with toxic metals:

    Continue reading "Heavy Metal Bioremediation" »

    November 10, 2005

    Bio-Printers, Revisited

Our future will be built with ink-jet printers. Not only can we print out polymer electronics and solid objects, we can print out biological structures. Last January, I wrote about the University of Manchester's process for building organs to spec with ink-jet technology; in July, I wrote about researchers in the US and Holland using a related process to grow cruelty-free "cultured meat." Now University of Utah College of Pharmacy researcher Glenn Prestwich has created a "bio-paper" that works with a "bio-ink" to build tissue to repair damaged organs. Research sponsored by the US National Science Foundation will look at how well the combination works with organ-printing technology.

    The details are a bit spotty, and I'm still looking for something other than popular media accounts. But for what it's worth:

    The NSF study will try first to print blood vessels and cardiovascular networks. Once they prove it can be done, the scientists will look at more complex organs such as livers and kidneys and simpler but more mechanical organs like the esophagus, Prestwich said.

    Continue reading "Bio-Printers, Revisited" »

    November 12, 2005

    Print Screen

Here's another indication that the ink-jet future is upon us (and it's likely to be a more palatable example than the last one): Cambridge Display Technologies is now able to produce 14" laptop displays by printing the organic polymer LED material using ink-jet tech. Already appearing in small form on cell phones and other handheld devices, organic LEDs have some real advantages over the LCD technology in your laptop or flat-panel screen:

    OLED is viewed as a potential successor to liquid crystal displays, used in many flat-panel TVs and computer monitors. Materials in an OLED display emit light when an electrical current is applied. The displays can function without a backlight, which cuts down on power consumption, screen thickness and cost. OLED displays also offer higher resolution than LCDs.

    The screens, potentially, also cost less to produce. Cambridge sprays its pixels on with multi-nozzle inkjet printers. The printers can sport 128 nozzles and come from a company called Litrex, which is half owned by Cambridge.

Another big advantage of organic polymer electronics in general is their much lower environmental footprint -- in both the energy required to produce them and their toxic material content -- compared to LCDs and traditional (read: obsolete) CRT technologies.

    Continue reading "Print Screen" »

    November 15, 2005

    Tackling the Central Dogma with an Optical Trap

Stanford researchers have designed the first microscope sensitive enough to watch a protein molecule function in real time -- and in doing so, they may have both solved some of the deepest questions about how DNA replicates and kicked off an entirely new field of study.

    Dr. Steven Block and his team designed and built an "optical trap" microscope that uses the minute pressure of infrared light to hold a molecule in place for measurement without interfering in how the molecule works; the system has an accuracy of one angstrom, equal to one-tenth of a nanometer -- roughly the diameter of a single hydrogen atom. This allowed them to watch genes being copied in real time. And this, in turn, let them resolve a question at the heart of what biologists call the "central dogma."

    Continue reading "Tackling the Central Dogma with an Optical Trap" »

    November 17, 2005

    DNA, Behavior and Food

We're all familiar with the ways in which the chemicals in food can change our behavior, sometimes dramatically (as anyone who has been around me when I'm having a mid-day low blood sugar crash can tell you). But it turns out that ingested chemicals in the bloodstream can do more than change transient behavior -- they can change the way our DNA is expressed.

That's the finding of Drs. Moshe Szyf, Michael Meaney and colleagues at McGill University in Montreal, Canada, speaking at this week's Environmental Epigenomics conference. They found that injecting L-methionine, a common amino acid and food supplement, into the brains of lab rats could turn well-adjusted rats into easily-stressed, shy rats, by causing the same kinds of changes to DNA expression in the brain as are seen in rats that were not properly groomed and cared for by their mothers:

    Continue reading "DNA, Behavior and Food" »

    November 23, 2005

    Bacterial Cameras and the Fabrication Future

It may need four hours to take a picture, and even then creates only monochrome images, but the bacterial camera made by researchers at the University of California, San Francisco, could be pretty important.

Chris Voigt and his team hacked the genome of E. coli, the common food-poisoning gut microbe, to make it sensitive to light by adding sequences from photosynthesizing algae. When activated by light, the new genes can shut off the action of another gene, in this case one controlling the color of the bacteria. A sufficiently large mass of E. coli can then be used to "print" images. Because the "pixels" are bacteria, the resolution is astounding -- over one hundred megapixels per square inch.
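That resolution figure is easy to sanity-check if each "pixel" is a single bacterium; the cell size is an assumption here (E. coli is roughly 1 to 2 micrometers across), and the density works out to well over one hundred megapixels per square inch:

```python
# Sanity check on the resolution claim, treating each "pixel" as one
# bacterium. Cell size is an assumption: E. coli is roughly 1-2
# micrometers across.

MICRONS_PER_INCH = 25_400

for cell_um in (1.0, 2.0):
    megapixels = (MICRONS_PER_INCH / cell_um) ** 2 / 1e6
    print(f"{cell_um:.0f} um cells: ~{megapixels:.0f} megapixels per square inch")
```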

    The goal of the experiment wasn't to produce a slow, massively high resolution black & white camera, however; the goal was to demonstrate the use of light sensitivity as a control for other bacterial functions.

    ...their success in getting an array of bacteria to respond to light could lead to the development of “nano-factories” in which minuscule amounts of substances are produced at locations precisely defined by light beams.
    For instance, the gene switch need not activate a pigment, says Voigt. A different introduced gene could produce polymer-like proteins, or even precipitate a metal. “This way, the bacteria could weave a complex material,” he says. [...]
    As a method of nano-manufacturing, the biocamera is an "extremely exciting advance" says Harry Kroto, the Nobel prize-winning discoverer of buckminsterfullerene, or buckyballs. "I have always thought that the first major nanotechnology advances would involve some sort of chemical modification of biology."

    This bio-photolithography would be a good way of using microbes to construct macro-scale structures without having to develop complex chemical signalling mechanisms.

    The image chosen for the experiment, in case you don't recognize it, is the Flying Spaghetti Monster -- and clearly this work has been touched by its noodly appendages.

    December 2, 2005

    Optimized Self-Assembly

Self-assembly is a fundamental part of how things work in the universe. We often see it at the nano-scale, whether we're talking about nanotechnology or biochemistry. You put the right components together in the right context, and what results is a structure -- DNA, for example. Traditionally, the use of self-assembly to build nano-sized materials requires a lot of repetitive experimentation: try this set of components under these conditions; now slightly alter the setup and repeat, until you get what you want.

If researchers at Princeton University are right, though, the era of hit-or-miss self-assembly experiments may soon be over. A team led by Dr. Salvatore Torquato applied the mathematical principles of optimization -- essentially, finding the most efficient form of a given process -- to how the components of a nanomaterial are organized prior to self-assembly. According to their computer models, this makes it possible to determine how to build a particular molecular structure before one starts, eliminating the need for repetitive sub-optimal experiments.

    ''If one thinks of a nanomaterial as a house, our approach enables a scientist to act as architect, contractor, and day laborer all wrapped up in one," Torquato said. "We design the components of the house, such as the 2-by-4s and cement blocks, so that they will interact with each other in such a way that when you throw them together randomly they self-assemble into the desired house."
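In miniature, "designing the 2-by-4s first" is an inverse problem: pick the interaction parameters so that the energy minimum falls where you want the particles to sit. A toy sketch using a Lennard-Jones pair potential (purely illustrative; Torquato's actual optimization mathematics is far more general than this):

```python
# Toy version of "design the components so they assemble into the target":
# choose the size parameter sigma of a Lennard-Jones pair potential so
# that the potential's minimum lands at a desired particle spacing.
# (Purely illustrative -- Torquato's optimization is far more general.)

def lj(r, sigma, eps=1.0):
    """Lennard-Jones pair potential."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def minimum_location(sigma):
    """Numerically locate the potential minimum on a fine grid."""
    rs = [0.6 + 0.002 * j for j in range(1200)]
    return min(rs, key=lambda r: lj(r, sigma))

def design_sigma(target_spacing):
    """Inverse problem: pick sigma so the minimum sits at the target."""
    candidates = [0.5 + 0.005 * i for i in range(300)]
    return min(candidates, key=lambda s: abs(minimum_location(s) - target_spacing))

sigma = design_sigma(target_spacing=1.0)
# Analytically, the LJ minimum is at 2**(1/6) * sigma, so sigma should
# come out near 1.0 / 2**(1/6), about 0.89.
print(round(sigma, 3))
```

Here the answer can be checked analytically; the promise of the Princeton work is doing the equivalent for structures where no closed-form answer exists.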

    Continue reading "Optimized Self-Assembly" »

    December 8, 2005

    Organic Radical Battery

It's fascinating to watch the future emerge, piece by piece. Organic polymer electronic materials are very appealing, from a WorldChanging perspective: they contain few or no toxic metals, meaning that they are far less damaging to the environment; in many cases, they can be produced through standard "ink-jet" style printing processes, making them prime candidates for use in fabricators; and, because they are so flexible, they have applications far beyond what can be easily done with current electronics. In the past, we've looked at organic polymer solar panels, electronic circuits, and displays. Now we get to add batteries to the mix.

NEC's new "organic radical battery" technology uses a gel as its core material, allowing the final battery to be extremely flexible and thin -- the demonstration unit is 300 microns (0.3 mm) thick, not much thicker than a typical business card. NEC claims that the battery can be fully recharged in 30 seconds; this isn't as big a deal as it sounds, as the energy density of the battery is fairly low, just 1 milliwatt-hour (mWh) per square centimeter.

Such limited power is typical of organic polymer electronics. Organic polymer solar cells, for example, are at best around 5 or 6% efficient (compared to 15-20% for traditional silicon panels), and the organic polymer circuit mentioned above is limited to around 600 kilohertz -- less than one-tenth of one percent as fast as the two-year-old laptop I'm writing on now.

    For now, organic polymer electronics are likely to appear primarily in devices like sensors, RFID tags and "smart" building materials. But the technology keeps improving, and smart industrial designers are already thinking about what they could do with power, processors and displays as flexible as paper.

    December 15, 2005

    Remote Control Medicine

Scanning electron microscopy images of (A) a hollow, open-surfaced biocontainer and (B) a device loaded with glass microbeads. (C) Fluorescence microscopy image of a biocontainer loaded with cell-ECM-agarose with the cell viability stain Calcein-AM. (D) Release of viable cells from the biocontainer.
How do you deliver a drug to its exact target in the body? You could inject it directly, which can work for organs and larger clusters of cells; you could flood the body with the drug, so as to make certain that the specific body part gets a sufficient dose; or you could engineer the drug delivery mechanism so that it is only captured by the specific target. All of these can work, but none of them works in every case. If Dr. David Gracias of the Johns Hopkins School of Medicine is correct, the best approach may be to steer the drugs through the bloodstream with magnets.

In a paper to be published later this month in the journal Biomedical Microdevices, Gracias and his team demonstrate a new tool for encapsulating and delivering drugs in a body, as well as for biosensors able to travel through the bloodstream. The new system uses a combination of biomimetics, nanomaterial production, computer chip manufacturing techniques, and old-fashioned chemistry. The result is a set of microdevices that would be familiar to anyone who has ever made a box out of paper.

    ...Gracias and his colleagues begin with some of the same techniques used to make microelectronic circuits: thin film deposition, photolithography and electrodeposition. These methods produce a flat pattern of six squares, in a shape resembling a cross. Each square, made of copper or nickel, has small openings etched into it, so that it eventually will allow medicine or therapeutic cells to pass through.

    What's more, they self-assemble:

    Continue reading "Remote Control Medicine" »

    December 20, 2005

    File Compression for Remote Diagnosis

    It turns out that a little bit of math can both improve the results of mammography and make expert radiological analysis available to more people in remote and poor areas of the world.

    Researchers have known for a few years now that applying a mathematical transformation method known as "wavelets" to radiological images can improve the ability of doctors to detect cancer. But Bradley Lucier's team of mathematicians at Purdue has taken the process to a new level -- by using the wavelets method to compress mammogram images by 98%, not only can radiologists still detect cancer better than they can with unmodified images, but the mammograms also become small enough to send easily over the dial-up computer networks common in poorer parts of the world. The work will appear in the next edition of Radiology.
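    The core idea behind wavelet compression can be sketched in a few lines. The toy example below is mine, not Lucier's actual method -- the signal and threshold are invented -- but it shows the principle using one level of the Haar wavelet, the simplest wavelet transform: split a signal into pairwise averages and details, and since the details of a smooth image are mostly near zero, thresholding them away gives large compression with little loss.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages + details."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def inverse_haar_step(avgs, dets):
    """Reconstruct the original signal from averages and details."""
    out = []
    for a, d in zip(avgs, dets):
        out.extend([a + d, a - d])
    return out

def compress(signal, threshold):
    """Zero out detail coefficients smaller than the threshold; the zeros
    are what make the compressed representation small."""
    avgs, dets = haar_step(signal)
    dets = [d if abs(d) >= threshold else 0.0 for d in dets]
    return avgs, dets

# A smooth "image row" with small pixel-to-pixel noise: the detail
# coefficients are tiny, so thresholding discards them all, yet the
# reconstruction stays within the threshold of the original.
row = [100, 101, 103, 102, 150, 151, 149, 150]
avgs, dets = compress(row, threshold=2.0)
restored = inverse_haar_step(avgs, dets)
print(dets)      # every detail coefficient is below the threshold
print(restored)
```

    Real wavelet compression (including the mammogram work) applies several levels of a more sophisticated wavelet and quantizes the surviving coefficients, but the discard-the-small-details step is where the 98% comes from.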

    "Any technique that improves the performance of radiologists is helpful, but this also means that mammograms can be taken in remote places that are underserved by the medical community," said Lucier, who is a professor of mathematics and computer science in Purdue's College of Science. "The mammograms can then be sent electronically to radiologists, who can read the digitized versions knowing they will do at least as well as the original mammograms." [...]

    Continue reading "File Compression for Remote Diagnosis" »

    January 18, 2006

    New Tool for Making Vaccines

    Researchers at Arizona State University have come up with a truly ingenious way to make large amounts of usable antigens for the creation of vaccines -- using tobacco plants and a tobacco plant virus.

    The Tobacco Mosaic Virus (TMV) is a common problem for tobacco crops. In its natural form, it eats away at the leaves, flowers and fruit of the tobacco plant. But TMV can be easily modified by researchers, and can be used to introduce new genes to tobacco plants without using transgenic modification (meaning that the new genes will not be passed along to any subsequent offspring of the plants). ASU scientists employed TMV as a way to introduce genes that prompt the production of plague antigens; the modified tobacco plants produce large amounts of the antigens in relatively short periods of time. Moreover, the technique allowed the researchers to trigger the production of very specific forms of the plague antigens, substantially reducing the incidence of adverse reactions.

    As with most crop-based approaches, producing vaccines in tobacco plants primarily revolves around issues of speed, low cost and high yield. “The major advantage of the vaccine is the rapidity of the system,” said Santi. “In a matter of 10 days, we can go from infecting the plants to harvesting the plants. From there, we purify the antigens in an additional one to two weeks to create the vaccine.” [...]
    The beauty of the system is its potential versatility to fight against other pathogens as well. The research team’s next step is to refine their methods to achieve a large-scale commercial production of the vaccine.

    As dangerous as plague can be -- especially with the possibility of its use as a bioweapon -- there are other diseases that would also benefit from a quick, relatively inexpensive method of vaccine production. It would be a welcome irony if tobacco, long a global health scourge, became the vehicle for widespread production of effective and safe immunization.

    (Via Medgadget)

    January 30, 2006

    Return of the Son of Sonofusion

    Could the future of energy be found in a "star in a jar"?

    In Sonofusion (2004), we introduced research from Purdue showing that sound waves pulsing through bubbles in a liquid could pop them with enough energy to produce (for a brief, tiny instant) a nuclear fusion event; in Son of Sonofusion (2005), we pointed to research by the University of Illinois at Urbana-Champaign confirming key elements of the research, including the extremely high temperatures within the bubbles; in Return of Sonofusion (2005), we presented more confirmation, and a lengthy discussion of just how this process appears to work. Now Sonofusion is back, with an answer for a key criticism of the work.

    The primary piece of evidence that fusion is happening is the measurement of bursts of neutrons coinciding with the flashes from bursting bubbles. But the process, as we noted in the first sonofusion post, requires an external neutron source to prime the reaction; critics have pointed out that there's no way to be absolutely sure that the measured neutrons aren't just from the external source. Now researchers from Purdue, Rensselaer Polytechnic Institute and the Russian Academy of Sciences have produced a sonofusion reaction without any external neutron source -- and found neutron emissions from the jar:

    Continue reading "Return of the Son of Sonofusion" »

    February 23, 2006

    Directed Evolution, Natural Sequestration and Terraforming the Earth

    Can we avoid climate disaster simply by cutting back radically on the emission of greenhouse gases? Possibly not, and therein lies a problem. Because of the slow cycle time of carbon dioxide in the atmosphere and the thermal inertia of the oceans, we are almost certain to see a continued rise in temperatures over the coming decades even if we were to stop all greenhouse gas emissions tomorrow. It may well be that a temperature increase of just a couple more degrees is enough to kick off a catastrophic shift in climate systems. A wise strategy for dealing with climate disruption, therefore, relies on drastic reductions in carbon output but also needs to include careful efforts to extract carbon from the atmosphere and store it for an extended period of time -- and researchers at the Emory University School of Medicine may have figured out a way to do just that.

    We've talked about sequestration a few times here, and always with a skeptical eye. Many of the sequestration proposals are efforts to reduce the greenhouse footprint of otherwise carbon-intensive processes, like energy production from coal. Although it relies on carbon capture technologies, this version of sequestration is just another form of carbon emission reduction, no different (from the atmosphere's perspective) from a shift to renewable energy or higher efficiency use. What I'm talking about here is the active reduction of existing atmospheric CO2 -- intentionally decreasing the concentration of carbon dioxide, not just waiting for it to cycle out. It's another example of Terraforming Earth, but arguably one on the less-potentially-disruptive end of the spectrum.

    The Emory University group has figured out a way to boost the efficiency of the key carbon dioxide-fixing enzyme in plants five-fold. The enzyme, known as RuBisCO, is a thousand times slower in its processes than most similar enzymes, and plants have to make a lot of it in order to consume usable amounts of CO2; increasing its efficiency means that plants can take in and use much more CO2. Interestingly, the basic process the researchers used -- directed evolution -- turns many expectations about biotechnology upside-down:
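    In the abstract, directed evolution is a simple loop: mutate, screen the variants for improvement, keep the winners, repeat. The sketch below is a generic illustration of that loop, not the Emory group's procedure -- the fitness screen, the target sequence and all parameters are invented for the example.

```python
import random

def directed_evolution(seed, fitness, generations=50, pool_size=20, rng=None):
    """Toy directed-evolution loop: randomly mutate a protein sequence,
    screen the variants, and carry the fittest into the next round.
    Keeping the current best in the pool guarantees the score never drops."""
    rng = rng or random.Random(42)
    alphabet = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
    best = seed
    for _ in range(generations):
        variants = [best]
        for _ in range(pool_size):
            pos = rng.randrange(len(best))  # single-point mutation
            variants.append(best[:pos] + rng.choice(alphabet) + best[pos + 1:])
        best = max(variants, key=fitness)
    return best

# Invented screen: reward positions matching an arbitrary "improved" enzyme.
# (A real screen would measure activity -- e.g. CO2 fixation rate.)
target = "MKVLGTAELR"
score = lambda seq: sum(a == b for a, b in zip(seq, target))

evolved = directed_evolution("AAAAAAAAAA", score)
print(score("AAAAAAAAAA"), "->", score(evolved))
```

    The power of the approach is that, as with real evolution, no one needs to understand *why* a variant works -- the screen does the reasoning. The hard part in practice is designing a screen that reports the property you actually care about.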

    Continue reading "Directed Evolution, Natural Sequestration and Terraforming the Earth" »

    March 14, 2006

    SimVirus
    Opponents of animal testing for medical research often argue that the same tests could be performed via computer simulation; researchers counter that simulations simplify physiology too much to be useful in that way. But that counter-argument may not hold much longer -- we now have the first functional, down-to-the-atom simulation of a biological organism. Computational biologists at the University of Illinois at Urbana-Champaign and crystallographers at the University of California at Irvine have created a complete simulation of the Satellite Tobacco Mosaic virus. We won't have a SimRabbit, SimRhesus Monkey or SimHuman any time soon, but such tools now appear to be much more plausible.

    The satellite tobacco mosaic virus is about as simple a virus as possible; the entire STMV genome consists of a little over a thousand nucleotides in RNA, along with a couple of proteins. The virus is referred to as a "satellite" because it relies on the presence of the tobacco mosaic virus in order to reproduce. Despite this simplicity, the researchers had to use a supercomputer to simulate a fraction of a second of viral activity:

    Continue reading "SimVirus" »

    April 22, 2007

    Earth Day Voices: Jamais Cascio

    Four Futures for the Earth
    by Jamais Cascio

    Never trust a futurist who only offers one vision of tomorrow.

    We don't know what the future will hold, but we can try to tease out what it might. Scenarios, which combine a variety of important and uncertain drivers into a mix of different -- but plausible -- futures, offer a useful methodology for coming up with a diverse set of possible tomorrows. Scenarios are not predictions, but examples, giving us a wind-tunnel to test out different strategies for managing large, complex problems.

    And there really isn't a bigger or more complicated problem right now than the incipient climate disaster. Today, there seem to be two schools of thought regarding the best way to deal with global warming: the "act now" approach, demanding (in essence) that we change our behavior and the ways that our societies are structured, and do it as quickly as possible, or else we're boned; and the "techno-fix" approach, which says (in essence) don't worry, the nano/info/bio revolution that's just around the corner will save us. Generally, the Worldchanging approach is to emphasize the first, with a sprinkle of the second for flavor (and as backup).

    The thing is, these are not mutually-exclusive propositions, and success or failure in one doesn't determine the chance of success or failure in the other. It's entirely possible that we will change our behavior/society/world (ahem), and also come up with fantastic new technologies; it's also possible that we'll stumble on both paths, neither fixing things in time nor getting our hands on the tools we could use to repair the worst damage.

    To a futurist, a pair of distinct, largely independent variables just begs to be turned into a scenario matrix. So let's give in, and take a brief look at the four scenarios the combinations of these two paths create:

    Dodging a Bullet

    2037: It's amazing how fast we went from "is this real?" to "what can we do?" to "let's do it now." There was no silver bullet, no green leap forward, just a billion quiet decisions to act. People made better, smarter choices, and the headlong rush to disaster slowed; encouraged by this, we started to focus our investments and social energy into solving this problem, and eventually (but much faster than we'd dared hope!) the growth of atmospheric carbon stopped. There's still too much CO2 in the air, and we know we're going to be dealing with a warming climate for a while yet, but the human species actually managed to choose to avoid killing itself off.

    This is a world in which civil society begins to focus on averting climate disaster as its primary, immediate task, even at the cost of some economic growth and general technological acceleration. Most governments and institutions curtail research and development that lacks direct climate benefits, leading to a world of 2037 that's nowhere near as advanced as futurists and technology enthusiasts had expected. A succession of environmental disasters linked (in the public mind, at the very least) to global warming -- killing hundreds of thousands, and leaving tens of millions as refugees -- gave added impetus to a world-wide effort; by 2017, a clear majority of the world's population was willing to do anything necessary to avoid the environmental collapse that many scientists saw as nearly inevitable. One popular slogan for the climate campaign was "we could be the best, or we could be the last."

    Teaching the World to Sing

    02037: I stumbled across a memory archive from twenty years ago, before the emergence of the Chorus, and was shocked to see the Earth as it was. Oceans near death, climate system lurching towards collapse, overall energy flux just horribly out-of-balance. I can't believe the Earth actually survived that. I had assumed that the Chorus was responsible for repairing the planet, but no -- We told me that, even by 02017, the Earth's human populace was making the kind of substantive changes to how it lived necessary to avoid real disaster, and that 02017 was actually one of the first years of improvement! What the Chorus made possible was the planetary repair, although We says that this project still has many years left, in part because We had to fix some of We's own mistakes from the first few repair attempts. The Chorus actually seemed embarrassed when We told me that!

    This is a world in which immediate efforts to make the social and behavioral changes necessary to avoid climate disaster make possible longer-term projects to apply powerful, transformative technologies (such as molecular manufacturing and cognitive augmentation) to the problem of stabilizing and, eventually, repairing the broken environment. It's not quite a Singularity, but is perhaps something nearly as strange: a world that has come to see few differences between human systems and natural/geophysical systems. "We are Gaia, too," the aging (but quite healthy) James Lovelock reminded us in 2023. And Gaia is us: billions of molecular-scale eco-sensors and intelligent simulations give the Earth itself an important voice in the global Chorus.

    Geoengineering 101: Pass/Fail

    2037: The Hephaestus 2 mission reported last week that it had managed to stabilize the wobble on the Mirror, but blurbed me a minute ago that New Tyndall Center is still showing temperature instabilities. According to Tyndall, that clinches it: we have another rogue at work. NATO ended the last one with extreme prejudice (as dramatized in last Summer's blockbuster, "Shutdown" -- I loved that Bruce Willis came out of retirement to play Gates), but this one's more subtle. My eyecrawl has some bluster from the SecGen now, saying that "this will not stand," blah blah blah. I just wish that these boy geniuses (and they're all guys, you ever notice that?) would put half as much time and effort into figuring out the Atlantic Seawall problem as they do these crazy-ass plans to fix the sky.

    This is a world in which attempts to make the broad social and behavioral changes necessary to avoid climate disaster are generally too late and too limited, and the global environment starts to show early signs of collapse. The 2010s to early 2020s are characterized by millions of dead from extreme weather events, hundreds of millions of refugees, and a thousand or more coastal cities lost all over the globe. The continued trend of general technological acceleration gets diverted by 2020 into haphazard, massive projects to avert disaster. Few of these succeed -- serious climate problems hit too fast for the more responsible advocates of geoengineering to get beyond the "what if..." stage -- and the many that fail often do so in a spectacular (and legally actionable) fashion. Those that do work serve mainly to keep the Earth poised on the brink: bioengineered plants that consume enough extra CO2 and methane to keep the atmosphere stable; a very slow project to reduce the acidity of the oceans; and the Mirror, a solar shield thousands of miles in diameter at the Lagrange point between the Earth and the Sun, reducing incoming sunlight by 2% -- enough to start a gradual cooling trend.

    Say Goodnight

    2030-something. Late in the decade, I think. Living day-to-day makes it hard to keep track of the years. The new seasons don't help -- Stormy, Still Stormy, Hellaciously Stormy, and Blast Furnace -- and neither does the constant travel, north to the Nunavut Protectorate, if it's still around. I hear things are even worse in Europe, if you can believe that. I don't hear much about Asia anymore, but I suppose nobody does now. The Greenland icepack went sometime in the last few years, and I hear a rumor that Antarctica is starting to go now. Who knows? I still see occasional aircraft high overhead, but they mostly look like military planes, so don't get your hopes up: they're probably from somebody who thinks it's still worth it to fight over the remaining oil.

    This is a world in which we don't adopt the changes we need, and technology-based fixes end up being too hard to implement in sufficient quantity and scale to make a real difference. Competition for the last bit of advantage (in economics, in security, in resources) accelerates the general collapse. Things fall apart; the center does not hold; mere anarchy is loosed upon the world.

    Pick your future.

    Jamais Cascio co-founded Worldchanging, and wrote over 1,900 articles for the site during his tenure. He now works as a foresight and futures specialist, serving as the Global Futures Strategist for the Center for Responsible Nanotechnology and a Research Affiliate for the Institute for the Future. His current online home is Open the Future.

    About New Science

    This page contains an archive of all entries posted to WC Archive in the New Science category. They are listed from oldest to newest.

