Researchers who have used the biomolecule to encode MP3s, text files, and JPEGs say it will be a competitive storage medium in just a few decades.
DNA could someday store more than just the blueprints for life—it could also house vast collections of documents, music, or video in an impossibly compact format that lasts for thousands of years.
Researchers at the European Bioinformatics Institute in Hinxton, U.K., have demonstrated a new method for reliably encoding several common computer file formats this way. As the price of sequencing and synthesizing DNA continues to drop, the researchers estimate, this biological storage medium will be competitive within the next few decades.
The information storage density of DNA is at least a thousand times greater than that of existing media, but until recently the cost of DNA synthesis was too high for the technology to be anything more than a curiosity. Conventional methods of storing digital information for prolonged periods continue to pose problems, however. The magnetic tapes typically used for archival storage become brittle and lose their coating after a few decades. And even if the physical medium used to store information remains intact, storage formats are always changing. This means the data has to be transferred to a new format or it may become unreadable.
DNA, in contrast, remains stable over time—and it’s one format that’s always likely to be useful. “We want to separate the storage medium from the machine that will read it,” says project leader Nick Goldman. “We will always have technologies for reading DNA.” Goldman notes that intact DNA fragments tens of thousands of years old have been found and that DNA is stable for even longer if it’s refrigerated or frozen.
The U.K. researchers encoded DNA with an MP3 of Martin Luther King Jr.’s “I Have a Dream” speech, a PDF of a scientific paper, an ASCII text file of Shakespeare’s sonnets, and a JPEG color photograph. The storage density of the DNA files is about 2.2 petabytes per gram.
Others have demonstrated DNA data storage before. This summer, for example, researchers led by Harvard University genetics professor George Church used the technology to encode a book (see “An Entire Book Stored in DNA”).
The difference with the new work, says Goldman, is that the researchers focused on a practical, error-tolerant design. To make the DNA files, the researchers created software that converted the 1s and 0s of the digital realm into the genetic alphabet of DNA bases, labeled A, T, G, and C. The program ensures that there are no repeated bases such as “AA” or “GG,” which lead to higher error rates when synthesizing and sequencing DNA.
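To make that constraint concrete, here is a minimal sketch (in Python) of one way to achieve it: write the data in base 3 and let each digit select one of the three bases that differ from the previous base, so no base is ever repeated. The mapping is illustrative only and not necessarily the conversion the EBI software actually performs.

```python
# A minimal sketch, not the EBI team's actual software: data is written in
# base 3, and each trit then selects one of the three bases that differ from
# the previous base, so runs like "AA" or "GG" never occur.

BASES = "ACGT"

def bytes_to_trits(data: bytes) -> list[int]:
    """Re-express a byte string as base-3 digits (trits)."""
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return list(reversed(trits)) or [0]

def trits_to_dna(trits: list[int], prev: str = "A") -> str:
    """Each trit picks one of the three bases different from the previous one."""
    out = []
    for t in trits:
        candidates = [b for b in BASES if b != prev]  # always three choices
        prev = candidates[t]
        out.append(prev)
    return "".join(out)

print(trits_to_dna(bytes_to_trits(b"hi")))  # a short strand with no adjacent repeats
```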
The files were divided into segments, each bookended with an index code that contains information about which file it belongs to and where it belongs within that file—analogous to the title and page number on pages of a book.
The encoding software also ensures some redundancy. Each part of a file is represented in four different fragments, so even if several degrade, it should still be possible to reconstruct the data.
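A sketch of the segmentation and redundancy step, under illustrative parameters (fragments of 100 bases offset by 25, so every stretch of data is covered by four fragments). The real index encoding in the Nature paper is more elaborate, and in the actual scheme the index itself is written in DNA bases rather than kept as plain text.

```python
def segment(dna: str, file_id: int, frag_len: int = 100, step: int = 25):
    """Split an encoded DNA string into overlapping, indexed fragments."""
    fragments = []
    for i, start in enumerate(range(0, max(1, len(dna) - frag_len + 1), step)):
        payload = dna[start:start + frag_len]
        header = f"file{file_id:02d}_seg{i:05d}"  # which file, and where within it
        fragments.append((header, payload))
    return fragments

# With step == frag_len // 4, each stretch of the data appears in four
# different fragments, so the loss or degradation of any single fragment
# still leaves the information recoverable from its neighbours.
```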
Working with Agilent Technologies of Santa Clara, California, the researchers synthesized the fragments of DNA and then demonstrated that they could sequence them and accurately reconstruct the files. This work is described today in the journal Nature.
Goldman’s group estimates that encoding data in DNA currently costs $12,400 per megabyte, plus $220 per megabyte to read that data back. If the price of DNA synthesis comes down by two orders of magnitude, as it is expected to do in the next decade, says Goldman, DNA data storage will soon cost less than archiving data on magnetic tapes.
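As a back-of-the-envelope check on those figures (my arithmetic, not the paper's):

```python
# Figures quoted above; illustrative arithmetic only.
write_cost_per_mb = 12_400        # $ to encode (synthesize) one megabyte today
read_cost_per_mb = 220            # $ to read it back by sequencing

# The anticipated two-orders-of-magnitude drop in synthesis cost:
projected_write_cost = write_cost_per_mb / 100
print(projected_write_cost)       # -> 124.0 dollars per megabyte

# The comparison with magnetic tape rests on total archival cost over long
# periods, since tape must be periodically re-copied to new formats while
# DNA (the argument goes) does not.
```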
Victor Zhirnov, program director for memory technologies at the Semiconductor Research Corporation in Durham, North Carolina, says that because the current cost is so high, data-storing DNA will probably find its earliest use in long-term archives that aren’t accessed frequently. Looking ahead, he says, he can envision “a more aggressive technology” to replace flash, the nonvolatile memory technology found in portable electronics, which is already reaching its scaling limits. The key will be developing entire hardware systems that work with DNA, not just sequencers and synthesizers.
Harvard’s Church says he is working on this very problem. “We can keep incrementally improving our ability to read and write DNA, but I want to jump completely out of that box,” he says. Church is currently developing a system for directly encoding analog signals such as video and audio into DNA, eliminating conventional electronics altogether.
I’m really excited to share my new essay, “The Relevance of Algorithms,” with those of you who are interested in such things. It’s been a treat to get to think through the issues surrounding algorithms and their place in public culture and knowledge, with some of the participants in Culture Digitally (here’s the full litany: Braun, Gillespie, Striphas, Thomas, the third CD podcast, and Anderson’s post just last week), as well as with panelists and attendees at the recent 4S and AoIR conferences, with colleagues at Microsoft Research, and with all of you who are gravitating toward these issues in your scholarship right now.
The motivation of the essay was two-fold: first, in my research on online platforms and their efforts to manage what they deem to be “bad content,” I’m finding an emerging array of algorithmic techniques being deployed: either for locating and removing sex, violence, and other offenses, or (more troublingly) for quietly choreographing some users away from questionable materials while keeping them available for others. Second, I’ve been helping to shepherd along this anthology, and wanted my contribution to be in the spirit of its aims: to take one step back from my research to articulate an emerging issue of concern or theoretical insight that (I hope) will be of value to my colleagues in communication, sociology, science & technology studies, and information science.
The anthology will ideally be out in Fall 2013. And we’re still finalizing the subtitle. So here’s the best citation I have.
Gillespie, Tarleton. “The Relevance of Algorithms.” Forthcoming in Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.
Below is the introduction, to give you a taste.
Algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. Search engines help us navigate massive databases of information, or the entire web. Recommendation algorithms map our preferences against others, suggesting new or forgotten bits of culture for us to encounter. Algorithms manage our interactions on social networking sites, highlighting the news of one friend while excluding another’s. Algorithms designed to calculate what is “hot” or “trending” or “most discussed” skim the cream from the seemingly boundless chatter that’s on offer. Together, these algorithms not only help us find information, they provide a means to know what there is to know and how to know it, to participate in social and political discourse, and to familiarize ourselves with the publics in which we participate. They are now a key logic governing the flows of information on which we depend, with the “power to enable and assign meaningfulness, managing how information is perceived by users, the ‘distribution of the sensible.’” (Langlois 2012)
Algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations. The procedures name both a problem and the steps by which it should be solved. Instructions for navigation may be considered an algorithm, or the mathematical formulas required to predict the movement of a celestial body across the sky. “Algorithms do things, and their syntax embodies a command structure to enable this to happen” (Goffey 2008, 17). We might think of computers, then, fundamentally as algorithm machines — designed to store and read data, apply mathematical procedures to it in a controlled fashion, and offer new information as the output.
But as we have embraced computational tools as our primary media of expression, and have made not just mathematics but all information digital, we are subjecting human discourse and knowledge to these procedural logics that undergird all computation. And there are specific implications when we use algorithms to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions.
These algorithms, which I’ll call public relevance algorithms, are — by the very same mathematical procedures — producing and certifying knowledge. The algorithmic assessment of information, then, represents a particular knowledge logic, one built on specific presumptions about what knowledge is and how one should identify its most relevant components. That we are now turning to algorithms to identify what we need to know is as momentous as having relied on credentialed experts, the scientific method, common sense, or the word of God.
What we need is an interrogation of algorithms as a key feature of our information ecosystem (Anderson 2011), and of the cultural forms emerging in their shadows (Striphas 2010), with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. I will highlight six dimensions of public relevance algorithms that have political valence:
1. Patterns of inclusion: the choices behind what makes it into an index in the first place, what is excluded, and how data is made algorithm ready
2. Cycles of anticipation: the implications of algorithm providers’ attempts to thoroughly know and predict their users, and how the conclusions they draw can matter
3. The evaluation of relevance: the criteria by which algorithms determine what is relevant, how those criteria are obscured from us, and how they enact political choices about appropriate and legitimate knowledge
4. The promise of algorithmic objectivity: the way the technical character of the algorithm is positioned as an assurance of impartiality, and how that claim is maintained in the face of controversy
5. Entanglement with practice: how users reshape their practices to suit the algorithms they depend on, and how they can turn algorithms into terrains for political contest, sometimes even to interrogate the politics of the algorithm itself
6. The production of calculated publics: how the algorithmic presentation of publics back to themselves shapes a public’s sense of itself, and who is best positioned to benefit from that knowledge.
Considering how fast these technologies and the uses to which they are put are changing, this list must be taken as provisional, not exhaustive. But as I see it, these are the most important lines of inquiry into understanding algorithms as emerging tools of public knowledge and discourse.
It would also be seductively easy to get this wrong. In attempting to say something of substance about the way algorithms are shifting our public discourse, we must firmly resist putting the technology in the explanatory driver’s seat. While recent sociological study of the Internet has labored to undo the simplistic technological determinism that plagued earlier work, that determinism remains an alluring analytical stance. A sociological analysis must not conceive of algorithms as abstract, technical achievements, but must unpack the warm human and institutional choices that lie behind these cold mechanisms. I suspect that a more fruitful approach will turn as much to the sociology of knowledge as to the sociology of technology — to see how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known. This might help reveal that the seemingly solid algorithm is in fact a fragile accomplishment.
~ ~ ~
Here is the full article [PDF]. Please feel free to share it, or point people to this post.
Apple doubles the size of the fuel cell installation at its new data center, a potential new energy model for cloud computing.
One of the ways Apple’s new data center will save energy is by using a white roof that reflects heat. Credit: Apple.
Apple is doubling the size of its fuel cell installation at its new North Carolina data center, making it a proving ground for large-scale on-site energy generation at data centers.
In papers filed with the state’s utilities commission last month, Apple indicated that it intends to expand capacity from five megawatts of fuel cells, which are now running, to a maximum of 10 megawatts. The filing was originally spotted by the Charlotte News Observer.
Apple says the much-watched project (Wired actually hired a pilot to take photos of it) will be one of the most environmentally benign data centers ever built because it will use several energy-efficiency tricks and run on biogas-powered fuel cells and a giant 20-megawatt solar array.
Beyond Apple’s eco-bragging rights, this data center (and one being built by eBay) should provide valuable insights to the rest of the cloud computing industry. Apple likely won’t give hard numbers on expenses but, if all works as planned, it will validate data center fuel cells for reliable power generation at this scale.
Stationary fuel cells are certainly well proven, but multi-megawatt installations are pretty rare. Data center customers for Bloom Energy, which is supplying Apple in North Carolina, typically have far less than a megawatt installed. Each Bloom Energy Server, which takes up about a full parking space, produces 200 kilowatts.
By going to 10 megawatts of capacity, Apple can claim the largest fuel cell-powered data center, passing eBay, which earlier this year announced plans for six megawatts’ worth of fuel cells at a data center in Utah. (See “eBay Goes All-In With Fuel Cell-Powered Data Center.”) It also opens up new ways of doing business.
Using fuel cells at this scale potentially changes how data center operators use grid power and traditional backup diesel generators. With its combination of solar power and fuel cells, it appears Apple’s facility will be able to produce more than the 20 megawatts it needs at full steam. That means Apple could sell power back to the utility or even operate independently and use the grid as backup power—a completely new configuration.
Bloom Energy’s top data center executive Peter Gross told Data Center Insider that data center servers could have two power cords—one from the grid and one from the fuel cells. In the event of a power failure, those fuel cells could keep the servers humming, rather than the backup diesel generators.
Apple hasn’t disclosed how much it’s paying for all this, but the utility commission filing indicates it plans to monetize its choice of biogas, rather than natural gas. The documents show that Apple is contracting with a separate company to procure biogas, or methane that is given off by landfills. Because it’s a renewable source, Apple can receive compensation in the form of renewable energy credits.
Proving fuel cells and solar work in a mission-critical workload at this scale is one thing. Whether it makes economic sense for companies other than cash-rich Apple and eBay is something different. Apple and eBay could save some money by installing fewer diesel generators. Investing in solar also gives companies a fixed electricity cost for years ahead, shielding them from spikes in utilities’ power prices.
But some of the most valuable information from these projects will be how the numbers pencil out. That might help conservative data center designers take these technologies, which are substantially cleaner than the grid, more seriously.
Both operationally and financially, there’s a lot to learn down in Maiden. Let’s hope Apple is a bit more forthcoming about its data center than it is about what’s in the next iPhone.
Personal comment:
This looks like one of several (but far from enough) implementations of "the third industrial revolution" (J. Rifkin), definitely a book to read to foresee a path toward a new (economic) model of clean energy and society, one in which the information-based Internet will (might) combine with an energy-based Internet, and in which energy will start to be an (abundant) solution rather than a problem.
We've seen the computer/Internet industries take over the music industry, and now the book industry, etc. Will we see them take over the energy industry? We can already witness several "little things" going in that direction. Google Energy in your Google+ "task bar" by 2030?
But the main point of all this is that if we manage, before it is too late, to move toward a clean energy model (fuel cells, solar, wind, etc.) --and note that we don't have any other choice now: an increase of 6°C in average temperature would mean massive ecosystem extinctions by the end of the century, and like it or not, we are part of those ecosystems--, it should remain decentralized and not concentrated as it is now. We should therefore remain as vigilant on this point as we are with the actual Internet. It is important that the system remains participative in some way and that anybody can produce their own energy and share the surplus.
*I’m guessing this rig won’t show up on a fashion catwalk any time soon.
Published on Jun 6, 2012 by googlemaps
“There’s a whole wilderness out there that is only accessible by foot. Street View Trekker solves that problem by enabling us to photograph beautiful places such as the Grand Canyon or Muir Woods so anyone can explore them. All the equipment fits in this one backpack.”
It's common when we discuss the future of maps to reference the Borgesian dream of a 1:1 map of the entire world. It seems like a ridiculous notion that we would need a complete representation of the world when we already have the world itself. But to take scholar Nathan Jurgenson's conception of augmented reality seriously, we would have to believe that every physical space is, in his words, "interpenetrated" with information. All physical spaces already are also informational spaces.
An interesting point by author A. C. Madrigal in his article: considering Google's driverless cars as the coming "web crawlers" of the physical world... Also interesting: the concept of the "deep map".
Additional comment on this post (01.10.2012): it is even more interesting now that we see Google driverless cars getting legalized in California...
Researchers at Harvard encode information in DNA at a density on par with any other experimental storage method.
By Susan Young
DNA can be used to store information at a density about a million times greater than your hard drive, report researchers in Science today. George Church of Harvard Medical School and colleagues report that they have written an entire book in DNA, a feat that highlights the recent advances in DNA synthesis and sequencing.
The team encoded a draft HTML version of a book co-written by Church called Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. In addition to the text, the biological bits included the information for modern formatting, images, and JavaScript, to show that “DNA (like other digital media) can encode executable directives for digital machines,” they write.
To do this, the authors converted the computational language of 0s and 1s into the language of DNA--the nucleotides typically represented by A’s, T’s, G’s, and C’s; the A’s and C’s took the place of 0s, and the T’s and G’s of 1s. They then used off-the-shelf DNA synthesizers to make 54,898 pieces of DNA, each 159 nucleotides long, to encode the book, which could then be decoded with DNA sequencing.
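A minimal sketch of that bit-to-base mapping (0 to A or C, 1 to T or G) might look like the following. How the published work chooses between the two candidate bases in each pair is not spelled out in this article, so the rule used here, simply avoiding a repeat of the previous base, is an assumption for illustration.

```python
ZERO_BASES, ONE_BASES = "AC", "TG"   # 0 -> A or C, 1 -> T or G (as described above)

def bits_to_bases(bits: str) -> str:
    """Map a bit string to DNA bases; the within-pair choice here just avoids
    repeating the previous base (an illustrative assumption, not the paper's rule)."""
    out, prev = [], ""
    for b in bits:
        pair = ZERO_BASES if b == "0" else ONE_BASES
        base = pair[1] if pair[0] == prev else pair[0]
        out.append(base)
        prev = base
    return "".join(out)

print(bits_to_bases("010011"))  # -> 'ATACTG'
```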
This is not the first time non-biological information has been stored in DNA, but Church’s demonstration goes far beyond the amount of information stored in previous efforts. For example, in 2009, researchers encoded 1,688 bits of text, music, and imagery in DNA, and in 2010, Craig Venter and colleagues encoded a watermarked synthetic genome worth 7,920 bits.
DNA synthesis and sequencing are still too slow and costly to be practical for most data storage, but the authors suggest DNA’s long-lived nature could make it a suitable medium for archival storage.
Erik Winfree, who studies DNA-based computation at Caltech and was a 1999 TR35 winner, hopes the study will stimulate a serious discussion about what roles DNA can play in information science and technology.
“The most remarkable thing about DNA is its information density, which is roughly one bit per cubic nanometer,” he writes in an email.
“Technology changes things, and many old ideas for DNA information storage and information processing deserve to be revisited now -- especially since DNA synthesis and sequencing technology will continue their remarkable advance.”
Personal comment:
Where the living binds to the machine / to computation, and where information seems to be the key ingredient. Somehow, this is what Wiener and Shannon told us half a century ago.
Sometimes, particle animations work. Perpetual Ocean, by NASA, is a visualization that shows ocean surface currents around the world during the period from June 2005 through December 2007.
Interestingly, the visualization does not include any narration or annotations. Instead, the goal was to use ocean flow data to create a simple, visceral experience. The data was based on a high-resolution model of the global ocean and sea ice that is able to capture ocean eddies and other narrow current systems, which transport heat and carbon in the oceans.
You can either hit your bandwidth allowance by downloading the 2GB versions at the NASA website, or watch the somewhat smaller, yet still HD version, below.
["The Digital Dump", a graphic about e-waste from Good.is's "Transparency" series and Column Five Media.]
Mostly for our own purposes (keeping track of things we see), we’ve started Visibility, a tumblr collecting items related to An Atlas of iPhone Landscapes. I make no promises about how frequently it will or won’t be updated, but if you’re particularly interested in the topic, you can follow the tumblr or grab the feed.
Nick Bilton at the Times’s Bits Blog, hardly a site for speculation on vaporware, tells us to expect something remarkable from Google by the year’s end: heads-up display glasses “that will be able to stream information to the wearer’s eyeballs in real time.”
The Times post builds on the reporting of Seth Weintraub, who blogs at 9 to 5 Google. He had written about the glasses project in December, as well as this month. Weintraub had one tipster, who told him the glasses would look something like Oakley Thumps. Bilton cites “several Google employees familiar with the project,” who said the devices would cost between $250 and $600. The device is reportedly being built in Google’s “X offices,” a top-secret lab that is nonetheless not top-secret enough that you and I and other readers of the Times know about it. (X is a favored letter for Google of late when it comes to blue-sky projects.)
A few other details about the glasses have emerged from either Bilton or Weintraub: they would be Android-based and feature a small screen that sits inches from the eye. They’d have access to a 3G or 4G network, and would have motion and GPS sensors. And, in wild, Terminator style, the glasses would even have a low-res camera “that will be able to monitor the world in real time and overlay information about locations, surrounding buildings and friends who might be nearby,” per Bilton. Google co-founder Sergey Brin is reportedly serving as a leader on the project, along with Steve Lee, who made Latitude, Google’s mapping software.
Though reportedly arriving for sale in 2012, the glasses may never reach a mass market. Google is said to be exploring ways to monetize the glasses should consumers take a liking to them. “If consumers take to the glasses when they are released later this year, then Google will explore possible revenue streams,” writes Bilton.
I’ve written before that smartwatches could represent a frontier of smartness-on-your-person. “They stand to transform your wrist into something akin to (if a wee bit short of) a heads-up display,” was how I put it. If the information Bilton and Weintraub have on Google is sound, I may have to dial back my enthusiasm on smartwatches--or at least stop likening them to heads-up displays, once the real thing exists.
Then again, smartwatches may still occupy a middle ground between utility and style. On the one hand, Oakley Thump-style smartglasses would be extraordinarily useful, for some. On the other hand, they would also be--let's face it--irredeemably geeky. As Bilton writes, “The glasses are not designed to be worn constantly — although Google expects some of the nerdiest users will wear them a lot.”
If you thought your smartwatch-sporting friend was a geek, just wait till he's flanked by people playing cyborg with Google’s forthcoming technology.
Personal comment:
Well, I'm not so sure this is good news... (in fact I don't think it is --I don't like Oakleys...--, unless it were an open project / open data with a more "design noir" approach), but Terminators will certainly be happy!
A startup believes combining LED technology and smart-phone apps will offer precise indoor location data.
By Rachel Metz
When you go to the grocery store, chances are you find yourself hunting for at least a couple of items on your list. Wouldn't it be easier if your smart phone could just give you turn-by-turn directions to that elusive can of tomato paste or bunch of cilantro, and maybe even offer you a discount on yogurt, too?
That's the idea behind ByteLight, a Cambridge, Massachusetts-based startup founded by Dan Ryan and Aaron Ganick. ByteLight aims to use LED bulbs—which will fit into standard bulb sockets—as indoor positioning tools for apps that help people navigate places such as museums, hospitals, and stores, and offer deals targeted to a person's location.
Accurate indoor navigation is currently lacking. While GPS is good for finding your way outdoors, it doesn't work as well inside. And technologies being used for indoor positioning, such as Wi-Fi, aren't accurate enough, Ryan and Ganick say.
Ryan and Ganick feel confident they're in the right space at the right time: there's not only been a boom in location-based services, but also in smart-phone apps such as Foursquare or Shopkick that use these services. Meanwhile, LEDs are increasingly popular as replacements for traditional lightbulbs (due to their energy efficiency and long life span).
ByteLight grew out of the National Science Foundation-funded Smart Lighting Engineering Research Center at Boston University, which Ganick and Ryan, both 24, took part in as electrical engineering undergrads.
Initially, ByteLight focused on using LEDs to provide high-speed data communications—a technology referred to as Li-Fi. But Ryan and Ganick felt their technology was better suited to helping people find their way around large indoor spaces.
Here's how it might work: you're in a department store that has replaced a number of its traditional lightbulbs with ByteLights. The lights, flickering faster than the eye can see, would emit a signal to passing smart phones. Your phone would read the signal through its camera, which would direct the smart phone to pull up a deal offering a discount on a shirt on a nearby rack.
While Wi-Fi can only accurately determine your position indoors to within about five to 10 meters, Ryan and Ganick say, ByteLight's technology cuts this down to less than a meter—close enough for you to easily figure out which shirt the deal is referring to.
ByteLight is working on a functioning prototype, and hopes to have the first products available within a year. Ryan and Ganick say a number of developers are working on smart-phone apps that would include the technology, which, they feel, could also work as an additional (or smarter) location-finding feature within existing apps.
The company is talking to retailers about installing its equipment in stores, too. Ryan and Ganick think businesses will warm to ByteLight because installation mainly requires buying and screwing in their lightbulbs. Once a business installs the lights, they'll need to use a ByteLight mobile app to determine which light corresponds to which spot in their building, Ganick says. An app developer could then use that data to tag deals to different lights.
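A hypothetical sketch of that developer-facing side: once each bulb's identifier has been tied to a spot on the floor plan, an app only needs a lookup from the decoded light ID to a position and any deal tagged to it. Every name and ID below is invented for illustration; ByteLight's actual API has not been published.

```python
# Invented mapping from light IDs to store locations and tagged deals.
LIGHT_MAP = {
    0x1A2B: {"spot": "Produce, bay 3", "deal": "20% off fresh herbs"},
    0x1A2C: {"spot": "Dairy, bay 1",   "deal": "Yogurt, buy one get one free"},
}

def on_light_decoded(light_id: int) -> str:
    """Called whenever the phone's camera decodes a bulb's identifier."""
    entry = LIGHT_MAP.get(light_id)
    if entry is None:
        return "Position unknown"
    return f"You are near {entry['spot']}. Offer: {entry['deal']}"

print(on_light_decoded(0x1A2B))  # -> "You are near Produce, bay 3. Offer: 20% off fresh herbs"
```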
And while LED bulbs are more costly than standard lightbulbs, they've been falling in price. ByteLight says its bulbs will be only "marginally" more expensive than existing LEDs.
Jeffrey Grau, an analyst with digital marketing company eMarketer, believes ByteLight may be on to something. If customers are already inside a store, showing them an exclusive offer makes it more likely they’ll buy something.
But will shoppers find ByteLight's targeting creepy? Ryan and Ganick don't think so. They say an app on your smart phone would be "listening" for nearby ByteLights, not the other way around. So users can control their own experience. And the LED bulbs' positioning capabilities could help people inside a large building solve the common problem of figuring out where they are. "We want people to think about lightbulbs in an entirely new way," Ganick says.
Copyright Technology Review 2012.
Personal comment:
The technology of transmitting bits of information through light is well known and was already used with neon tubes (or, if you look to older technologies, Morse code was essentially the same idea, but aimed at the human eye instead of the fast digital eye of a camera). The shift now is to combine very rapid variations in lighting (information transmission that remains invisible --to the human eye--) with micro-geolocation.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions, and projects that we notice and find interesting during our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content here.