
Open Source Science

"The Common Good," a new essay in Nature by consulting editor Philip Ball, explores the growing use of collaborative methods to build and evaluate scientific efforts. Such methods make use of the cumulative wisdom of the scientific analysis of dozens, hundreds, sometimes even thousands of participants. Although the observations made by any single person may not be individually insightful, the accumulated (and, occasionally, averaged) efforts are often dazzling. While Ball's essay is not overly detailed, it covers a variety of projects -- some which many WorldChangers will already be familiar with, and some which are quite new.

Ball covers three broad categories of mass-collaborative science. The first I would characterize as mass analysis, in which large numbers of people examine a set of data to try to find mistakes or hidden details. His best example of this is the NASA Clickworkers project, which used a large group of volunteers to look at maps of Mars in order to identify craters. It turned out that the collective crater-identification ability of volunteers given a small amount of training was as good as that of the best experts in the field. Ball links this directly to James Surowiecki's book, The Wisdom of Crowds, which argues that the collective decision-making power of large groups can be surprisingly good. WorldChanging's Nicole Boyer has mentioned The Wisdom of Crowds in a couple of her essays, most notably this week's The Wisdom of Google's Experiment. The ability of groups to act collectively to analyze and generate information is one of the drivers of collaborative efforts such as Wikipedia -- no individual contributor will be an expert on everything, but the collected knowledge of the mass of authors is unbeatable.
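
To see why this kind of averaging works, here's a quick toy sketch in Python. It is not NASA's actual Clickworkers pipeline, and every number in it is an illustrative assumption: each volunteer marks a crater's position with some random error, and the errors largely cancel out in the average.

```python
import random
import statistics

# Toy model of Clickworkers-style mass analysis: each volunteer marks
# a crater's position with some individual error, and the averaged
# mark is usually closer to the truth than a typical single mark.
# (All numbers are hypothetical; this is not NASA's actual pipeline.)

TRUE_POSITION = 100.0   # the crater's actual position (arbitrary units)
VOLUNTEER_ERROR = 5.0   # assumed std. dev. of one volunteer's error

def volunteer_mark():
    """One volunteer's noisy estimate of the crater's position."""
    return random.gauss(TRUE_POSITION, VOLUNTEER_ERROR)

marks = [volunteer_mark() for _ in range(1000)]
consensus = statistics.mean(marks)

typical_error = statistics.mean(abs(m - TRUE_POSITION) for m in marks)
print(f"typical individual error: {typical_error:.2f}")
print(f"consensus error:          {abs(consensus - TRUE_POSITION):.2f}")
```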

The second model of collaborative science he discusses is that of mass evaluation, in which large numbers of people have the opportunity to vet articles and arguments by researchers. This is a less quantitative and more subjective approach than collaborative analysis, but it can still produce high-quality results. Ball cites Slashdot and Kuro5hin as examples of this approach, with the mass of participants on the sites evaluating the posts and/or comments, eventually pushing the best material to the top. In the world of science, articles submitted to journals are routinely reviewed, but the set of evaluators for any given article is usually fairly small. Ball cites the physics pre-print server arXiv as an exemplar of a countervailing trend -- that of open evaluation. ArXiv allows anyone to contribute articles, and lets participants evaluate them -- a true "peer review."
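
The mechanics of this kind of mass evaluation are simple enough to sketch in a few lines. This is just an illustration of the up/down-moderation idea, not Slashdot's or Kuro5hin's actual algorithm, and the submissions and vote tallies here are invented:

```python
from collections import defaultdict

# Minimal sketch of mass evaluation: many readers each rate an item
# up (+1) or down (-1), and items are ranked by aggregate score so
# the best material rises to the top. (An illustration of the idea,
# not Slashdot's or Kuro5hin's actual moderation algorithm.)

scores = defaultdict(int)  # item id -> running moderation score

def moderate(item_id, rating):
    """Record one participant's +1/-1 evaluation of an item."""
    scores[item_id] += rating

# Invented submissions and vote tallies, for illustration only.
votes = [
    ("solid-result", +1, 40),
    ("interesting-but-flawed", +1, 15),
    ("interesting-but-flawed", -1, 10),
    ("noise", -1, 25),
]
for item, rating, count in votes:
    for _ in range(count):
        moderate(item, rating)

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['solid-result', 'interesting-but-flawed', 'noise']
```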

The third model Ball discusses is perhaps the most controversial -- that of collaborative research, where research already in progress is opened up to allow labs anywhere in the world to contribute experiments. The deeply networked nature of modern laboratories, and the brief down-time that all labs have between projects, make this concept quite feasible. Moreover, such distributed-collaborative research spreads new ideas and discoveries even faster, ultimately accelerating the scientific process. Yale's Yochai Benkler, author of the well-known Coase's Penguin, or Linux and the Nature of the Firm, argues in a recent article in Science (pay access only) that such a method would be potentially revolutionary. He calls it "peer production;" we've called it "open source" science, and have been talking about the idea since we started WorldChanging.

This is neither a utopian vision of "citizen science" nor a "Science Survivor" where the least popular theories get voted off the island each week. All three of these models are based on the mass participation of people who are at least amateur scientists, and who can demonstrate some understanding of the processes involved. The Clickworkers project required a moderate amount of training, evaluative comments on arXiv from those without a physics background will likely be ignored, and "peer production"/"open source" scientific research will be open to those who actually know how to carry out the proper experiments. Such "mass elitism" is not without precedent; Free/Open Source Software development is open to anyone who wants to participate, but does not usually accept code contributions from people with marginal programming skills. Functional "wisdom of crowds" approaches are predicated on the assumption that the crowds comprise people familiar enough with a given subject to be able to speculate sensibly on the right answer to a problem.
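
A quick simulation makes the point, in the spirit of Condorcet's jury theorem (the accuracies below are purely illustrative): majority voting among many participants only beats an individual when each participant is at least a bit better than chance; a crowd of the clueless just compounds the error.

```python
import random

# Majority voting among n participants, each independently correct
# with probability p (Condorcet's jury theorem in miniature). Better-
# than-chance individuals make the crowd sharper; worse-than-chance
# individuals make it far worse. (Illustrative probabilities only.)

def majority_accuracy(p, n_voters, trials=10_000):
    """Fraction of trials in which a majority of voters is correct."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p for _ in range(n_voters))
        if correct > n_voters / 2:
            wins += 1
    return wins / trials

for p in (0.45, 0.55):  # slightly worse / better than a coin flip
    print(f"individuals right {p:.0%} of the time -> "
          f"crowd of 101 right {majority_accuracy(p, 101):.0%} of the time")
```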

All three of these methods are based on the fundamental logic of the open source concept: with many eyes, all bugs are shallow. The more participants you have, the greater the breadth of knowledge and experience, and the greater the ability to find subtle problems or hidden surprises. The open science approach is potentially invaluable -- and it's in the best traditions of science itself, which has always flourished best in a world of critical engagement, open discourse, and cooperation.
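
That logic can even be put in back-of-the-envelope numbers: if each independent reviewer has only a small chance p of spotting a subtle flaw, the odds that at least one of n reviewers spots it are 1 - (1 - p)^n, which climbs toward certainty surprisingly fast. The 5% figure below is an assumption for illustration, not a measured value.

```python
# "With many eyes, all bugs are shallow," quantified: if each of n
# independent reviewers has probability p of spotting a subtle flaw,
# the chance that at least one spots it is 1 - (1 - p)**n.
# (The 5% figure is an assumption for illustration, not data.)

p = 0.05  # assumed chance that any single reviewer catches the flaw

for n in (1, 10, 100, 1000):
    catch = 1 - (1 - p) ** n
    print(f"{n:4d} reviewers -> flaw caught with probability {catch:.3f}")
```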

TrackBack

Listed below are links to weblogs that reference Open Source Science:

» Can Open Source Truly Innovate? from dan.spikesource.com
Are the strengths of open source development techniques antithetical to true technology innovation? [Read More]

» The Algorithms of Social Innovation: Online Communities, New Analytics, and Napsterization from Knowledge in the Public Interest Blog
When we began conceptualizing Knowledge in the Public Interest in 2001, we thought -- as we do now -- that traditional methods of research and evaluation would soon be radically altered by the internet, and that this would in turn... [Read More]

Comments (5)

Stephen Balbach:

Wired has an article this month titled "Scientific Method Man" about Gordon Rugg, who has solved the Voynich Manuscript challenge (deciphering a medieval manuscript) by using something he calls the "verifier approach". Which is: he looks at a problem that many experts are trying to solve (a cure for cancer, say), then figures out where the gaps in expertise lie that no one can see (forest for the trees). Even though hundreds of experts from different fields had worked on the Voynich manuscript, for example, no one ever thought maybe the code was a fake and undecipherable, because there is no such thing as an expert on fake codes. Using the verifier method, a non-scientific non-expert was able to solve in a matter of weeks a puzzle that had been unsolved for hundreds of years. In the same way, science has become so complex and specialized and overwhelmed with data that it is hard to see the forest for the trees, and basically he is saying there needs to be a new meta-layer in the scientific method.

I bring this up because it is a new method of solving scientific problems, just like the open source method is new. When the scholastics of the 12th century invented scholasticism, by changing how scholars read, wrote, and taught, they also changed what scholars read, wrote, and taught. In the same way, the changes the Internet and other technology-enabled collaborative efforts bring to how people work will also change what people work on. Normal everyday non-experts solving complex science problems. It really is a revolution.

tatere:

Yale's Yochai Benkler, author of the well-known Coase's Penguin, or Linux and the Nature of the Firm, argues in a recent article in Science (pay access only)

heh. life does enjoy its little ironies.

Frank Shearar:

Regarding the "solution" of the Voynich manuscript, Gordon Rugg's site (http://www.keele.ac.uk/depts/cs/staff/g.rugg/voynich/) says

"This site contains results from using sixteenth century techniques to reproduce the manuscript. These results appear to contain the same unusual features as the language of the manuscript, suggesting that the solution may at last have been found."

That statement does not mean "I have solved the Voynich puzzle". Further, members of the Voynich list question the Cardan grille results -- apparently, the word frequencies of text generated with a Cardan grille do not match up properly with those found in the manuscript.

Googling for "voynich gordon rugg" brings up links to some interesting reading matter.

At any rate, a bit of mystery adds a whole lot of spice to life!

Mostafa Hussein:

Given enough eyeballs all craters are shallow.

Oh, Mostafa, that's bad! Heh.
