Tuesday, August 02. 2016
By fabric | ch
As we still lack a decent search engine on this blog and don't use a "tag cloud"... this post could help you navigate the updated content on | rblg (as of 07.2016) via all its tags!
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG:
(visible just below if you're browsing the blog's page, or here for RSS readers)
Posted by Patrick Keller in fabric | ch at 16:58
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, citizen, city, climate, clips, code, cognition, collaboration, commodification, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, experience, experimentation, fabric | ch, farming, fashion, fiction, films, food, form, franchised, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, kinetic, knowledge, landscape, language, law, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, monography, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, resources, responsive, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, services, sharing, shopping, signage, smart, social, society, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, textile, theory, thinkers, thinking, time, tools, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, voice, vr, war, weather, web, wireless, writing
Wednesday, January 15. 2014
"We found evolution will punish you if you're selfish and mean," said lead author Christoph Adami, MSU professor of microbiology and molecular genetics. "For a short time and against a specific set of opponents, some selfish organisms may come out ahead. But selfishness isn't evolutionarily sustainable."
Provided by Michigan State University
Saturday, July 06. 2013
Loren M. Frank
Enhancing the flow of information through the brain could be crucial to making neuroprosthetics practical.
The abilities to learn, remember, evaluate, and decide are central to who we are and how we live. Damage to or dysfunction of the brain circuitry that supports these functions can be devastating, leading to Alzheimer’s, schizophrenia, PTSD, or many other disorders. Current treatments, which are drug-based or behavioral, have limited efficacy in treating these problems. There is a pressing need for something more effective.
One promising approach is to build an interactive device to help the brain learn, remember, evaluate, and decide. One might, for example, construct a system that would identify patterns of brain activity tied to particular experiences and then, when called upon, impose those patterns on the brain. Ted Berger, Sam Deadwyler, Robert Hampson, and colleagues have used this approach (see "Memory Implants"). They are able to identify and then impose, via electrical stimulation, specific patterns of brain activity that improve a rat's performance in a memory task. They have also shown that, in monkeys, stimulation can help the animal perform a task in which it must remember a particular item.
Their ability to improve performance is impressive. However, there are fundamental limitations to an approach where the desired neural pattern must be known and then imposed. The animals used in their studies were trained to do a single task for weeks or months and the stimulation was customized to produce the right outcome for that task. This is only feasible for a few well-learned experiences in a predictable and constrained environment.
New and complex experiences engage large numbers of neurons scattered across multiple brain regions. These individual neurons are physically adjacent to other neurons that contribute to other memories, so selectively stimulating the right neurons is difficult if not impossible. And to make matters even more challenging, the set of neurons involved in storing a particular memory can evolve as that memory is processed in the brain. As a result, imposing the right patterns for all desired experiences, both past and future, requires technology far beyond what is possible today.
I believe the answer to be an alternative approach based on enhancing flows of information through the brain. The importance of information flow can be appreciated when we consider how the brain makes and uses memories. During learning, information from the outside world drives brain activity and changes in the connections between neurons. This occurs most prominently in the hippocampus, a brain structure critical for laying down memories for the events of daily life. Thus, during learning, external information must flow to the hippocampus if memories are to be stored.
Once information has been stored in the hippocampus, a different flow of information is required to create a long-lasting memory. During periods of rest and sleep, the hippocampus “reactivates” stored memories, driving activity throughout the rest of the brain. Current theories suggest that the hippocampus acts like a teacher, repeatedly sending out what it has learned to the rest of the brain to help engrain memories in more stable and distributed brain networks. This “consolidation” process depends on the flow of internal information from the hippocampus to the rest of the brain.
Finally, when a memory is retrieved a similar pattern of internally driven flow is required. For many memories, the hippocampus is required for memory retrieval, and once again hippocampal activity drives the reinstatement of the memory pattern throughout the brain. This process depends on the same hippocampal reactivation events that contribute to memory consolidation.
Different flows of information can be engaged at different intensities as well. Some memories stay with us and guide our choices for a lifetime, while others fade with time. We and others have shown that new and rewarded experiences drive both profound changes in brain activity and strong memory reactivation. Familiar and unrewarded experiences drive smaller changes and weaker reactivation. Further, we have recently shown that the intensity of memory reactivation in the hippocampus, measured as the number of neurons active together during each reactivation event, can predict whether the next decision an animal makes is going to be right or wrong. Our findings suggest that when the animal reactivates effectively, it does a better job of considering possible future options (based on past experiences) and then makes better choices.
These results point to an alternative approach to helping the brain learn, remember and decide more effectively. Instead of imposing a specific pattern for each experience, we could enhance the flow of information to the hippocampus during learning and the intensity of memory reactivation from the hippocampus during memory consolidation and retrieval. We are able to detect signatures of different flows of information associated with learning and remembering. We are also beginning to understand the circuits that control this flow, which include neuromodulatory regions that are often damaged in disease states. Importantly, these modulatory circuits are more localized and easier to manipulate than the distributed populations of neurons in the hippocampus and elsewhere that are activated for each specific experience.
Thus, an effective cognitive neuroprosthetic would detect what the brain is trying to do (learn, consolidate or retrieve) and then amplify activity in the relevant control circuits to enhance the essential flows of information. We know that even in diseases like Alzheimer’s where there is substantial damage to the brain, patients have good days and bad days. On good days the brain smoothly transitions among distinct functions, each associated with a particular flow of information. On bad days these functions may become less distinct and the flows of information muddled. Our goal then, would be to restore the flows of information underlying different mental functions.
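To make the detection half of that idea concrete, here is a minimal sketch of one common way reactivation events (hippocampal sharp-wave ripples) can be flagged in a recorded signal, so that a closed-loop device could respond to them. The sampling rate, frequency band, and threshold are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch (with assumed parameters) of flagging "reactivation" events
# (hippocampal sharp-wave ripples) in a local field potential: band-pass the
# signal in the ripple band, take its envelope, and mark threshold crossings.
# This is an illustration, not the authors' actual detection pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1500.0                # sampling rate in Hz (assumed)
RIPPLE_BAND = (150, 250)   # approximate ripple band in Hz (assumed)

def detect_reactivation_events(lfp, fs=FS, band=RIPPLE_BAND, n_sd=3.0):
    """Return sample indices where the ripple-band envelope crosses threshold."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    ripple = filtfilt(b, a, lfp)            # isolate the ripple-band component
    envelope = np.abs(hilbert(ripple))      # instantaneous amplitude
    threshold = envelope.mean() + n_sd * envelope.std()
    above = envelope > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1   # onset of each event

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_lfp = rng.normal(size=int(10 * FS))   # 10 s of noise as stand-in data
    print(len(detect_reactivation_events(fake_lfp)), "candidate events detected")
```

In a real closed-loop system each detected onset would trigger a stimulation command within a few milliseconds; here the events are simply counted.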
A prosthetic device has the potential to adapt to the moment-by-moment changes in information flow necessary for different types of mental processing. By contrast, drugs that seek to treat cognitive dysfunction may effectively amplify one type of processing but cannot adapt to the dynamic requirements of mental function. Thus, constructing a device that makes the brain’s control circuits work more effectively offers a powerful approach to treating disease and maximizing mental capacity.
Loren M. Frank is a professor at the Center for Integrative Neuroscience and the Department of Physiology at the University of California, San Francisco.
Tuesday, April 30. 2013
By David Talbot on April 16, 2013
Storing video and other files more intelligently reduces the demand on servers in a data center.
Worldwide, data centers consume huge and growing amounts of electricity.
New research suggests that data centers could significantly cut their electricity usage simply by storing fewer copies of files, especially videos.
For now the work is theoretical, but over the next year, researchers at Alcatel-Lucent’s Bell Labs and MIT plan to test the idea, with an eye to eventually commercializing the technology. It could be implemented as software within existing facilities. “This approach is a very promising way to improve the efficiency of data centers,” says Emina Soljanin, a researcher at Bell Labs who participated in the work. “It is not a panacea, but it is significant, and there is no particular reason that it couldn’t be commercialized fairly quickly.”
With the new technology, any individual data center could be expected to save 35 percent in capacity and electricity costs—about $2.8 million a year or $18 million over the lifetime of the center, says Muriel Médard, a professor at MIT’s Research Laboratory of Electronics, who led the work and recently conducted the cost analysis.
So-called storage area networks within data center servers rely on a tremendous amount of redundancy to make sure that downloading videos and other content is a smooth, unbroken experience for consumers. Portions of a given video are stored on different disk drives in a data center, with each sequential piece cued up and buffered on your computer shortly before it’s needed. In addition, copies of each portion are stored on different drives, to provide a backup in case any single drive is jammed up. A single data center often serves millions of video requests at the same time.
The new technology, called network coding, cuts way back on the redundancy without sacrificing the smooth experience. Algorithms transform the data that makes up a video into a series of mathematical functions that can, if needed, be solved not just for that piece of the video, but also for different parts. This provides a form of backup that doesn’t rely on keeping complete copies of the data. Software at the data center could simply encode the data as it is stored and decode it as consumers request it.
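To make the idea of storing "functions of the data" concrete, here is a toy sketch in which k chunks of a file are turned into n > k coded pieces, any k of which can be solved for the originals, so fewer full copies are needed for the same resilience. The field and coefficient choices are simplifications for illustration, not the specific scheme developed by the Bell Labs and MIT researchers.

```python
# A toy sketch of coded storage: keep n > k "functions" of k chunks rather than
# extra full copies; any k of the n pieces can be solved for all the originals.
# It works symbol by symbol over the prime field GF(257), with Vandermonde-style
# coefficients chosen purely for simplicity -- an illustration of the general
# idea, not the code developed at Bell Labs and MIT.

P = 257  # prime modulus, so every byte value 0..255 is a valid field element

def encode(chunks, n_coded):
    """Make n_coded coded pieces; piece j carries coefficients (1, j, j^2, ...)."""
    k, length = len(chunks), len(chunks[0])
    coded = []
    for j in range(1, n_coded + 1):
        coeffs = [pow(j, i, P) for i in range(k)]
        symbols = [sum(c * chunk[s] for c, chunk in zip(coeffs, chunks)) % P
                   for s in range(length)]
        coded.append((coeffs, symbols))
    return coded

def decode(coded, k):
    """Recover the k original chunks from any k coded pieces (Gauss-Jordan mod P)."""
    rows = [list(coeffs) + list(symbols) for coeffs, symbols in coded[:k]]
    for col in range(k):
        pivot = next(r for r in range(col, k) if rows[r][col] % P != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)              # modular inverse
        rows[col] = [(x * inv) % P for x in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [bytes(row[k:]) for row in rows]

if __name__ == "__main__":
    import random
    video = [bytes(random.randrange(256) for _ in range(32)) for _ in range(4)]
    pieces = encode(video, n_coded=6)   # 6 coded pieces instead of two full copies
    random.shuffle(pieces)              # pretend some drives are busy or have failed
    assert decode(pieces, k=4) == video
    print("recovered all 4 chunks from any 4 of the 6 coded pieces")
```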
Médard’s group previously proposed a similar technique for boosting wireless bandwidth (see “A Bandwidth Breakthrough”). That technology deals with a different problem: wireless networks waste a lot of bandwidth on back-and-forth traffic to recover dropped portions of a signal, called packets. If mathematical functions describing those packets are sent in place of the packets themselves, it becomes unnecessary to re-send a dropped packet; a mobile device can solve for the missing packet with minimal processing. That technology, which improves capacity up to tenfold, is currently being licensed to wireless carriers, she says.
Between the electricity needed to power computers and the air conditioning required to cool them, data centers worldwide consume so much energy that by 2020 they will cause more greenhouse-gas emissions than global air travel, according to the consulting firm McKinsey.
Smarter software to manage them has already proved to be a huge boon (see “A New Net”). Many companies are building data centers that use renewable energy and smarter energy management systems (see “The Little Secrets Behind Apple’s Green Data Centers”). And there are a number of ways to make chips and software operate more efficiently (see “Rethinking Energy Use in Data Centers”). But network coding could make a big contribution by cutting down on the extra disk drives—each needing energy and cooling—that cloud storage providers now rely on to ensure reliability.
This is not the first time that network coding has been proposed for data centers. But past work was geared toward recovering lost data. In this case, Médard says, “we have considered the use of coding to improve performance under normal operating conditions, with enhanced reliability a natural by-product.”
One more link in the context of our workshop at Tsinghua University, related to data storage at large.
Monday, April 22. 2013
Today (18.04.2013) Facebook launched two public dashboards that report continuous, near-real-time data for key efficiency metrics – specifically, PUE and WUE – for our data centers in Prineville, OR and Forest City, NC. These dashboards include both a granular look at the past 24 hours of data and a historical view of the past year’s values. In the historical view, trends within each data set and correlations between different metrics become visible. Once our data center in Luleå, Sweden, comes online, we’ll begin publishing for that site as well.
We began sharing PUE for our Prineville data center at the end of Q2 2011 and released our first Prineville WUE in the summer of 2012. Now we’re pulling back the curtain to share some of the same information that our data center technicians view every day. We’ll continue updating our annualized averages as we have in the past, and you’ll be able to find them on the Prineville and Forest City dashboards, right below the real-time data.
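For readers who haven't met the two metrics, the short sketch below applies their standard (Green Grid) definitions: PUE is total facility energy divided by IT equipment energy, and WUE is litres of water consumed per kWh of IT energy. The reading structure is an assumption made for illustration, not Facebook's actual data format.

```python
# A small sketch of the two dashboard metrics in their standard definitions:
# PUE = total facility energy / IT equipment energy, WUE = litres of water per
# kWh of IT energy. The Reading structure is assumed for illustration only.

from dataclasses import dataclass

@dataclass
class Reading:
    facility_kwh: float   # everything the site draws: IT + cooling + losses
    it_kwh: float         # energy delivered to the servers themselves
    water_litres: float   # water consumed, largely for evaporative cooling

def pue(readings):
    return sum(r.facility_kwh for r in readings) / sum(r.it_kwh for r in readings)

def wue(readings):
    return sum(r.water_litres for r in readings) / sum(r.it_kwh for r in readings)

# A trailing-24-hour window and an annualized average are just the same
# formulas applied over different lists of readings.
day = [Reading(115.0, 100.0, 20.0), Reading(118.0, 102.0, 22.0)]
print(f"PUE {pue(day):.2f}, WUE {wue(day):.2f} L/kWh")
```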
Why are we doing this? Well, we’re proud of our data center efficiency, and we think it’s important to demystify data centers and share more about what our operations really look like. Through the Open Compute Project (OCP), we’ve shared the building and hardware designs for our data centers. These dashboards are the natural next step, since they answer the question, “What really happens when those servers are installed and the power’s turned on?”
Creating these dashboards wasn't a straightforward task. Our data centers aren't completed yet; we're still in the process of building out suites and finalizing the parameters for our building management systems. All our data centers are literally still construction sites, with new data halls coming online at different points throughout the year. Since we've created dashboards that visualize an environment with so many shifting variables, you'll probably see some weird numbers from time to time. That's OK. These dashboards are about surfacing raw data – and sometimes, raw data looks messy. But we believe in iteration, in getting projects out the door and improving them over time. So we welcome you behind the curtain, wonky numbers and all. As our data centers near completion and our load evens out, we expect these inevitable fluctuations to correspondingly decrease.
We’re excited about sharing this data, and we encourage others to do the same. Working together with AREA 17, the company that designed these visualizations, we’ve decided to open-source the front-end code for these dashboards so that any organization interested in sharing PUE, WUE, temperature, and humidity at its data center sites can use these dashboards to get started. Sometime in the coming weeks we’ll publish the code on the Open Compute Project’s GitHub repository. All you have to do is connect your own CSV files to get started. And in the spirit of all other technologies shared via OCP, we encourage you to poke through the code and make updates to it. Do you have an idea to make these visuals even more compelling? Great! We encourage you to treat this as a starting point and use these dashboards to make everyone’s ability to share this data even more interesting and robust.
Lyrica McTiernan is a program manager for Facebook’s sustainability team.
The Open Compute Project is definitely an interesting one, and so is the fact that it comes with open data about the centers' consumption. Still, PUE and WUE should be questioned further to determine whether they are the right measures of a data center's effectiveness.
Thursday, February 07. 2013
Note: I'm once again joining two recent posts here. First, what it could mean, climatically and therefore spatially, geographically, energetically, socially, ... degree after degree, to increase the average temperature of the Earth; and second, an information map about our warming world...
It is an unsigned paper, so it certainly needs to be cross-checked, which I haven't done (time, time...)! But I post it nevertheless, as it points out some believable, if very dark, consequences. As many people now say, we don't have much time left to start acting, and acting strongly (7-10 years).
Via Berens Finance (!)
A degree by degree explanation of what will happen when the earth warms
Even if greenhouse emissions stopped overnight, the concentrations already in the atmosphere would still mean a global rise of between 0.5 and 1°C. A shift of a single degree is barely perceptible to human skin, but it's not human skin we're talking about. It's the planet; and an average increase of one degree across its entire surface means huge changes in climatic extremes.
Users can click anywhere on the map and investigate the entire temperature record for that grid cell, retrieved via NASA's surface temperature analysis database GISTEMP, which is based on 6,000 monitoring stations, ships and satellite measurements worldwide. Via the drop-down list at the top, users can also switch between different map overlays that summarize the average temperatures for different 20-year periods. Accordingly, climate change becomes visible as the cool blue hues of earlier decades are replaced with warm red and yellow hues around the start of the 21st century.
The tool thus aims to communicate the reality and variability of recorded climate change, and to compare that local picture with the trend for the global average temperature.
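As a small illustration of those 20-year overlays, the sketch below averages a single grid cell's annual temperature anomalies over successive 20-year windows so that the long-term trend stands out from year-to-year noise. The anomaly series here is synthetic stand-in data, not actual GISTEMP output.

```python
# Average one grid cell's annual anomalies over successive 20-year windows,
# the same summarization the map's decadal overlays perform. The anomalies
# below are a crude synthetic trend, used only as stand-in data.

def twenty_year_means(years, anomalies, window=20):
    means = []
    for start in range(years[0], years[-1] - window + 1, window):
        vals = [a for y, a in zip(years, anomalies) if start <= y < start + window]
        means.append((start, start + window - 1, sum(vals) / len(vals)))
    return means

years = list(range(1900, 2012))
anomalies = [0.01 * (y - 1950) / 3 for y in years]   # fake upward trend
for start, end, mean in twenty_year_means(years, anomalies):
    print(f"{start}-{end}: {mean:+.2f} °C")
```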
The accompanying article can be found here.
Read also "An Alarm in the Offing on Climate Change", The New York Times.
Thursday, January 24. 2013
Researchers who have used the biomolecule to encode MP3s, text files, and JPEGs say it will be a competitive storage medium in just a few decades.
DNA could someday store more than just the blueprints for life—it could also house vast collections of documents, music, or video in an impossibly compact format that lasts for thousands of years.
Researchers at the European Bioinformatics Institute in Hinxton, U.K., have demonstrated a new method for reliably encoding several common computer file formats this way. As the price of sequencing and synthesizing DNA continues to drop, the researchers estimate, this biological storage medium will be competitive within the next few decades.
The information storage density of DNA is at least a thousand times greater than that of existing media, but until recently the cost of DNA synthesis was too high for the technology to be anything more than a curiosity. Conventional methods of storing digital information for prolonged periods continue to pose problems, however. The magnetic tapes typically used for archival storage become brittle and lose their coating after a few decades. And even if the physical medium used to store information remains intact, storage formats are always changing. This means the data has to be transferred to a new format or it may become unreadable.
DNA, in contrast, remains stable over time—and it’s one format that’s always likely to be useful. “We want to separate the storage medium from the machine that will read it,” says project leader Nick Goldman. “We will always have technologies for reading DNA.” Goldman notes that intact DNA fragments tens of thousands of years old have been found and that DNA is stable for even longer if it’s refrigerated or frozen.
The U.K. researchers encoded DNA with an MP3 of Martin Luther King Jr.’s “I Have a Dream” speech, a PDF of a scientific paper, an ASCII text file of Shakespeare’s sonnets, and a JPEG color photograph. The storage density of the DNA files is about 2.2 petabytes per gram.
Others have demonstrated DNA data storage before. This summer, for example, researchers led by Harvard University genetics professor George Church used the technology to encode a book (see “An Entire Book Stored in DNA”).
The difference with the new work, says Goldman, is that the researchers focused on a practical, error-tolerant design. To make the DNA files, the researchers created software that converted the 1s and 0s of the digital realm into the genetic alphabet of DNA bases, labeled A, T, G, and C. The program ensures that there are no repeated bases such as “AA” or “GG,” which lead to higher error rates when synthesizing and sequencing DNA.
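A simplified sketch of the repeat-free idea follows: the data is first turned into base-3 digits, and each digit then selects one of the three bases that differ from the previous base, so the resulting strand never contains a run like "AA" or "GG". The actual scheme in the paper uses a Huffman code for the byte-to-digit step; the direct base-3 conversion here is only an illustrative stand-in.

```python
# Repeat-free DNA encoding, simplified: bytes -> base-3 digits ("trits"), then
# each trit picks one of the three bases that differ from the previous base,
# so no base is ever repeated. Illustrative stand-in for the Nature paper's
# scheme, which uses a Huffman code for the byte-to-trit step.

BASES = "ACGT"
# For each previous base, the three allowed next bases, indexed by trit value.
NEXT = {prev: [b for b in BASES if b != prev] for prev in BASES}

def bytes_to_trits(data: bytes):
    trits = []
    for value in data:
        for _ in range(6):            # 3**6 = 729 > 256, so 6 trits cover one byte
            trits.append(value % 3)
            value //= 3
    return trits

def trits_to_dna(trits, start="A"):
    prev, out = start, []
    for t in trits:
        prev = NEXT[prev][t]
        out.append(prev)
    return "".join(out)

def dna_to_bytes(dna, start="A"):
    prev, trits = start, []
    for base in dna:
        trits.append(NEXT[prev].index(base))
        prev = base
    return bytes(sum(t * 3 ** p for p, t in enumerate(trits[i:i + 6]))
                 for i in range(0, len(trits), 6))

if __name__ == "__main__":
    message = b"I have a dream"
    strand = trits_to_dna(bytes_to_trits(message))
    assert all(run not in strand for run in ("AA", "CC", "GG", "TT"))
    assert dna_to_bytes(strand) == message
    print(strand)
```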
The files were divided into segments, each bookended with an index code that contains information about which file it belongs to and where it belongs within that file—analogous to the title and page number on pages of a book.
The encoding software also ensures some redundancy. Each part of a file is represented in four different fragments, so even if several degrade, it should still be possible to reconstruct the data.
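Here is a toy sketch of that fourfold overlap together with the indexing described above; the fragment length, step size, and index format are illustrative choices rather than the paper's exact parameters.

```python
# Toy version of the overlapping-fragment layout: fragments of fixed length
# overlap by three quarters of their length, so every position is covered by
# up to four fragments, and each fragment carries a (file, position) index.
# Parameters are illustrative, not the paper's exact values.

def fragment(strand: str, file_id: int, frag_len: int = 100):
    step = frag_len // 4                      # 75% overlap, fourfold coverage
    pieces = []
    for offset in range(0, len(strand) - frag_len + 1, step):
        index = (file_id, offset // step)     # the "title and page number"
        pieces.append((index, strand[offset:offset + frag_len]))
    return pieces

def reassemble(pieces, total_len: int, frag_len: int = 100):
    step = frag_len // 4
    out = [None] * total_len
    for (_, pos), payload in pieces:
        for i, base in enumerate(payload):
            out[pos * step + i] = base
    return "".join(b if b is not None else "N" for b in out)  # "N" = uncovered

if __name__ == "__main__":
    strand = "ACGT" * 200                     # an 800-base stand-in for an encoded file
    pieces = fragment(strand, file_id=1)
    surviving = [p for i, p in enumerate(pieces) if i % 4 != 2]  # lose a quarter of them
    assert reassemble(surviving, len(strand)) == strand
    print(f"reconstructed {len(strand)} bases from {len(surviving)} of {len(pieces)} fragments")
```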
Working with Agilent Technologies of Santa Clara, California, the researchers synthesized the fragments of DNA and then demonstrated that they could sequence them and accurately reconstruct the files. This work is described today in the journal Nature.
Goldman’s group estimates that encoding data in DNA currently costs $12,400 per megabyte, plus $220 per megabyte to read that data back. If the price of DNA synthesis comes down by two orders of magnitude, as it is expected to do in the next decade, says Goldman, DNA data storage will soon cost less than archiving data on magnetic tapes.
Victor Zhirnov, program director for memory technologies at the Semiconductor Research Corporation in Durham, North Carolina, says that because the current cost is so high, data-storing DNA will probably find its earliest use in long-term archives that aren’t accessed frequently. Looking ahead, he says, he can envision “a more aggressive technology” to replace flash, the nonvolatile memory technology found in portable electronics, which is already reaching its scaling limits. The key will be developing entire hardware systems that work with DNA, not just sequencers and synthesizers.
Harvard’s Church says he is working on this very problem. “We can keep incrementally improving our ability to read and write DNA, but I want to jump completely out of that box,” he says. Church is currently developing a system for directly encoding analog signals such as video and audio into DNA, eliminating conventional electronics altogether.
Monday, December 10. 2012
By Tarleton Gillespie
I’m really excited to share my new essay, “The Relevance of Algorithms,” with those of you who are interested in such things. It’s been a treat to get to think through the issues surrounding algorithms and their place in public culture and knowledge, with some of the participants in Culture Digitally (here’s the full litany: Braun, Gillespie, Striphas, Thomas, the third CD podcast, and Anderson‘s post just last week), as well as with panelists and attendees at the recent 4S and AoIR conferences, with colleagues at Microsoft Research, and with all of you who are gravitating towards these issues in their scholarship right now.
The motivation of the essay was two-fold: first, in my research on online platforms and their efforts to manage what they deem to be "bad content," I'm finding an emerging array of algorithmic techniques being deployed: for either locating and removing sex, violence, and other offenses, or (more troublingly) for quietly choreographing some users away from questionable materials while keeping them available for others. Second, I've been helping to shepherd along this anthology, and wanted my contribution to be in the spirit of its aims: to take one step back from my research to articulate an emerging issue of concern or theoretical insight that (I hope) will be of value to my colleagues in communication, sociology, science & technology studies, and information science.
The anthology will ideally be out in Fall 2013. And we’re still finalizing the subtitle. So here’s the best citation I have.
Below is the introduction, to give you a taste.
Algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. Search engines help us navigate massive databases of information, or the entire web. Recommendation algorithms map our preferences against others, suggesting new or forgotten bits of culture for us to encounter. Algorithms manage our interactions on social networking sites, highlighting the news of one friend while excluding another’s. Algorithms designed to calculate what is “hot” or “trending” or “most discussed” skim the cream from the seemingly boundless chatter that’s on offer. Together, these algorithms not only help us find information, they provide a means to know what there is to know and how to know it, to participate in social and political discourse, and to familiarize ourselves with the publics in which we participate. They are now a key logic governing the flows of information on which we depend, with the “power to enable and assign meaningfulness, managing how information is perceived by users, the ‘distribution of the sensible.’” (Langlois 2012)
Algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations. The procedures name both a problem and the steps by which it should be solved. Instructions for navigation may be considered an algorithm, or the mathematical formulas required to predict the movement of a celestial body across the sky. “Algorithms do things, and their syntax embodies a command structure to enable this to happen” (Goffey 2008, 17). We might think of computers, then, fundamentally as algorithm machines — designed to store and read data, apply mathematical procedures to it in a controlled fashion, and offer new information as the output.
But as we have embraced computational tools as our primary media of expression, and have made not just mathematics but all information digital, we are subjecting human discourse and knowledge to these procedural logics that undergird all computation. And there are specific implications when we use algorithms to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions.
These algorithms, which I’ll call public relevance algorithms, are — by the very same mathematical procedures — producing and certifying knowledge. The algorithmic assessment of information, then, represents a particular knowledge logic, one built on specific presumptions about what knowledge is and how one should identify its most relevant components. That we are now turning to algorithms to identify what we need to know is as momentous as having relied on credentialed experts, the scientific method, common sense, or the word of God.
What we need is an interrogation of algorithms as a key feature of our information ecosystem (Anderson 2011), and of the cultural forms emerging in their shadows (Striphas 2010), with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. I will highlight six dimensions of public relevance algorithms that have political valence:
Considering how fast these technologies and the uses to which they are put are changing, this list must be taken as provisional, not exhaustive. But as I see it, these are the most important lines of inquiry into understanding algorithms as emerging tools of public knowledge and discourse.
It would also be seductively easy to get this wrong. In attempting to say something of substance about the way algorithms are shifting our public discourse, we must firmly resist putting the technology in the explanatory driver’s seat. While recent sociological study of the Internet has labored to undo the simplistic technological determinism that plagued earlier work, that determinism remains an alluring analytical stance. A sociological analysis must not conceive of algorithms as abstract, technical achievements, but must unpack the warm human and institutional choices that lie behind these cold mechanisms. I suspect that a more fruitful approach will turn as much to the sociology of knowledge as to the sociology of technology — to see how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known. This might help reveal that the seemingly solid algorithm is in fact a fragile accomplishment.
~ ~ ~
Here is the full article [PDF]. Please feel free to share it, or point people to this post.
Friday, December 07. 2012
Apple doubles the size of the fuel cell installation at its new data center, a potential new energy model for cloud computing.
One of the ways Apple’s new data center will save energy is by using a white roof that reflects heat. Credit: Apple.
Apple is doubling the size of its fuel cell installation at its new North Carolina data center, making it a proving ground for large-scale on-site energy at data centers.
In papers filed with the state's utilities commission last month, Apple indicated that it intends to expand capacity from five megawatts of fuel cells, which are now running, to a maximum of 10 megawatts. The filing was originally spotted by the Charlotte News Observer.
Apple says the much-watched project (Wired actually hired a pilot to take photos of it) will be one of the most environmentally benign data centers ever built because it will use several energy-efficiency tricks and run on biogas-powered fuel cells and a giant 20-megawatt solar array.
Beyond Apple’s eco-bragging rights, this data center (and one being built by eBay) should provide valuable insights to the rest of the cloud computing industry. Apple likely won’t give hard numbers on expenses but, if all works as planned, it will validate data center fuel cells for reliable power generation at this scale.
Stationary fuel cells are certainly well proven, but multi-megawatt installations are pretty rare. Data center customers for Bloom Energy, which is supplying Apple in North Carolina, typically have far less than a megawatt installed. Each Bloom Energy Server, which takes up about a full parking space, produces 200 kilowatts.
By going to 10 megawatts of capacity, Apple can claim the largest fuel cell-powered data center, passing eBay, which earlier this year announced plans for six megawatts' worth of fuel cells at a data center in Utah. (See "eBay Goes All-in With Fuel Cell-Powered Data Center.") It also opens up new ways of doing business.
Using fuel cells at this scale potentially changes how data center operators use grid power and traditional backup diesel generators. With Apple's combination of solar power and fuel cells, it appears the facility will be able to produce more than the 20 megawatts it needs at full steam. That means Apple could sell power back to the utility or even operate independently and use the grid as backup power—a completely new configuration.
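That claim can be checked with the figures cited in the piece: 10 megawatts of fuel cells, a 20-megawatt solar array, and roughly 20 megawatts of demand at full steam. The small sketch below runs that arithmetic; the solar availability fraction is of course an assumption, since the array's output varies.

```python
# Quick arithmetic behind the "more than it needs" claim, using only figures
# cited in the article. The solar availability fraction is an assumption,
# since output depends on weather and time of day.

FUEL_CELL_MW = 10.0      # planned fuel cell capacity
SOLAR_MW = 20.0          # nameplate capacity of the solar array
FACILITY_LOAD_MW = 20.0  # approximate full-steam demand cited in the article

def grid_draw(solar_fraction: float) -> float:
    """Power still needed from the utility (negative means surplus to sell back)."""
    on_site = FUEL_CELL_MW + SOLAR_MW * solar_fraction
    return FACILITY_LOAD_MW - on_site

for fraction in (0.0, 0.5, 1.0):
    print(f"solar at {fraction:.0%}: grid draw {grid_draw(fraction):+.1f} MW")
```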
Bloom Energy’s top data center executive Peter Gross told Data Center Insider that data center servers could have two power cords—one from the grid and one from the fuel cells. In the event of a power failure, those fuel cells could keep the servers humming, rather than the backup diesel generators.
Apple hasn't disclosed how much it's paying for all this, but the utility commission filing indicates it plans to monetize its choice of biogas, rather than natural gas. The documents show that Apple is contracting with a separate company to procure biogas, or methane that is given off by landfills. Because it's a renewable source, Apple can receive compensation for renewable energy credits.
Proving fuel cells and solar work in a mission-critical workload at this scale is one thing. Whether it makes economic sense for companies other than cash-rich Apple and eBay is something different. Apple and eBay could save some money by installing fewer diesel generators. Investing in solar also gives companies a fixed electricity cost for years ahead, shielding them from spikes in utilities’ power prices.
But some of the most valuable information on these projects will be how the numbers pencil out. That might help conservative data center designers to look at these technologies, which are substantially cleaner than the grid, more seriously.
Both operationally and financially, there’s a lot to learn down in Maiden. Let’s hope Apple is a bit more forthcoming about its data center than telling us what’s in the next iPhone.
This looks like one of several (but far from enough) implementations of "the third industrial revolution" (J. Rifkin), definitely a book to read to foresee a path toward a new (economic) model of clean energy and society, in which the information-based Internet will (or might) combine with an energy-based Internet and energy will start to be an (abundant) solution rather than a problem.
Thursday, September 13. 2012
*I’m guessing this rig won’t show up on a fashion catwalk any time soon.
Published on Jun 6, 2012 by googlemaps
“There’s a whole wilderness out there that is only accessible by foot. Street View Trekker solves that problem by enabling us to photograph beautiful places such as the Grand Canyon or Muir Woods so anyone can explore them. All the equipment fits in this one backpack.”
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content here.