Thursday, April 17. 2014
Air pollution now the world’s biggest environmental health risk with 7 million deaths per year | #air #health
Following last month's drastic anti-pollution measures in Paris. Not cheerful news, but good to know: air quality will undoubtedly become a very big (geo)political issue in the coming years, and certainly an engineering one too.
CC BY-ND 2.0 Flickr
The World Health Organization (WHO) released a report last year showing that air pollution killed more people than AIDS and malaria combined. It was based on 2010 figures, which were the latest available at the time. There's now a new study which looked at 2012 data, and it seems like things are even worse than we first believed.
“The risks from air pollution are now far greater than previously thought or understood, particularly for heart disease and strokes,” says Dr Maria Neira, Director of WHO’s Department for Public Health, Environmental and Social Determinants of Health. “Few risks have a greater impact on global health today than air pollution; the evidence signals the need for concerted action to clean up the air we all breathe.”
The WHO found that outdoor air pollution from urban and rural sources worldwide was linked to an estimated 3.7 million deaths in 2012, and that indoor air pollution, mostly caused by cooking (!) on inefficient coal and biomass stoves, was linked to 4.3 million deaths that same year.
Because many people are exposed to both indoor and outdoor air pollution, these two numbers overlap, but the WHO estimates that the total number of victims of air pollution in 2012 was around 7 million. That is all the more tragic because, in many of those cases, relatively little would be needed to save lives.
Flickr/CC BY-SA 2.0
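As a rough sanity check on those figures, the overlap between the two counts follows from simple inclusion-exclusion (this is only a back-of-envelope illustration; the WHO's own adjustment is more sophisticated):

```python
# Rough inclusion-exclusion check on the WHO's 2012 figures (millions of deaths).
outdoor = 3.7   # deaths linked to outdoor air pollution
indoor = 4.3    # deaths linked to indoor (household) air pollution
total = 7.0     # WHO's combined estimate

# People exposed to both sources are counted once in each figure, so:
implied_overlap = round(outdoor + indoor - total, 1)
print(implied_overlap)  # -> 1.0, i.e. about a million deaths counted in both columns
```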
And it's not really a question of money, since the health costs and lost productivity caused by air pollution are higher in the long term...
Here's how the health impacts break down for both indoor and outdoor air pollution:
Outdoor air pollution-caused deaths – breakdown by disease:
Indoor air pollution-caused deaths – breakdown by disease:
There are lots of big, obvious things we can do, such as replacing inefficient and polluting small stoves in poorer countries with better stoves or, even better, electric cooking. Many countries, like China, could also do a lot to cut pollution at their coal plants and, over time, phase out coal (which isn't just a problem for air pollution, but also for water and ground pollution and global warming). There's plenty of low-hanging fruit that would make a huge difference. To see how dramatic the improvement could be, just look at these photos showing how bad the situation was in the US not so long ago (China is just repeating what has gone on elsewhere...).
One thing we can do to help: plant more trees! Recent studies show that they are even better at filtering the air in urban areas than we previously thought.
© Michael Graham Richard
Related on TreeHugger.com:
Monday, October 14. 2013
The much-anticipated Ivanpah Solar Electric Generating System just kicked into action in California’s Mojave Desert. The 3,500-acre facility is the world’s largest solar thermal energy plant, and it has the backing of some major players; Google, NRG Energy, BrightSource Energy and Bechtel have all invested in the project, which is constructed on federally-leased public land. The first of Ivanpah’s three towers is now feeding energy into the grid, and once the site is fully operational it will produce 392 megawatts — enough to power 140,000 homes while reducing carbon emissions by 400,000 tons per year.
Ivanpah comprises 300,000 sun-tracking mirrors (heliostats) surrounding three 459-foot towers. The sunlight concentrated by these mirrors heats water contained within the towers, creating superheated steam that drives turbines on the site to produce power.
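To put those headline numbers in perspective, here is a quick back-of-envelope calculation using only the figures quoted above (it ignores capacity factor, night-time, and conversion losses, so real delivered energy per mirror or per home is considerably lower):

```python
# Back-of-envelope numbers for Ivanpah, from the figures quoted in the article.
capacity_w = 392e6      # nameplate capacity, watts
heliostats = 300_000    # sun-tracking mirrors
homes = 140_000         # homes the plant is said to power

per_heliostat_kw = capacity_w / heliostats / 1000
per_home_kw = capacity_w / homes / 1000
print(f"{per_heliostat_kw:.2f} kW per heliostat")  # -> 1.31 kW per heliostat
print(f"{per_home_kw:.1f} kW per home")            # -> 2.8 kW per home
```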
The first successfully operating unit will sell power to California’s Pacific Gas and Electric, as will Unit 3 when it comes online in the coming months. Unit 2 is also set to come online shortly, and will provide power to Southern California Edison.
Construction on the facility began in 2010, and it achieved its first “flux” in March, a crucial test that proved its readiness to begin commercial operation. Tests this past Tuesday marked Ivanpah’s “first sync,” which began feeding power into the grid.
As John Upton at Grist points out, the project is not without its critics, noting that some “have questioned why a solar plant that uses water would be built in the desert — instead of one that uses photovoltaic panels,” while others have been upset by displacement of local wildlife—notably 100 endangered desert tortoises.
But the Ivanpah plant still constitutes a major milestone, both globally as the world’s largest solar thermal energy plant, and locally for the significant contribution it will make towards California’s renewable energy goal of achieving 3,000 MW of solar generating capacity through public utilities and private ownership.
After the project in Spain, and early images and controversy dating back to 2012, here comes a new amazing infrastructure / solar power plant in California. Welcome!
Monday, October 07. 2013
This essay is adapted from Marina Alberti’s Cities as Hybrid Ecosystems (forthcoming) and from Marina Alberti’s “Anthropocene City,” forthcoming in The Anthropocene Project, a Deutsches Museum special exhibit, 2014-2015.
Cities face an important challenge: they must rethink themselves in the context of planetary change. What role do cities play in the evolution of Earth? From a planetary perspective, the emergence and rapid expansion of cities across the globe may represent another turning point in the life of our planet. Earth’s atmosphere, on which we all depend, emerged from the metabolic processes of vast numbers of single-celled algae and bacteria living in the seas 2.3 billion years ago. These organisms transformed the environment into a place where human life could develop. Adam Frank, an astrophysicist at the University of Rochester, reminds us that the evolution of life has completely changed important characteristics of the planet (NPR 13.7: Cosmos & Culture, 2012). Can humans now change the course of Earth’s evolution? Can the way we build cities determine the probability of crossing thresholds that will trigger non-linear, abrupt change on a planetary scale (Rockström et al 2009)?
For most of its history, Earth has been relatively stable, and dominated primarily by negative feedbacks that have kept it from getting into extreme states (Lenton and Williams 2013). Rarely has the earth experienced planetary-scale tipping points or system shifts. But the recent increase in positive feedback (i.e., climate change), and the emergence of evolutionary innovations (i.e. novel metabolisms), could trigger transformations on the scale of the Great Oxidation (Lenton and Williams 2013). Will we drive Earth’s ecosystems to unintentional collapse? Or will we consciously steer the Earth towards a resilient new era?
In my forthcoming book, Cities as Hybrid Ecosystems, I propose a co-evolutionary paradigm for building a science of cities that “think like planets” (see the Note at the bottom)— a view that focuses both on unpredictable dynamics and experimental learning and innovation in urban ecosystems. In the book I elaborate on some concepts and principles of design and planning that can emerge from such a perspective: self-organization, heterogeneity, modularity, feedback, and transformation.
How can thinking on a planetary scale help us understand the place of humans in the evolution of Earth and guide us in building a human habitat of the “long now”?
Humans make decisions simultaneously at multiple time and spatial scales, depending on the perceived scale of a given problem and the scale of influence of their decision. Yet it is unlikely that this scale extends beyond one generation or includes the entire globe. The human experience of space and time has profound implications for our understanding of world phenomena and for making long- and short-term decisions. In his book What Time Is This Place?, Kevin Lynch (1972) eloquently told us that time is embedded in the physical world that we inhabit and build. Cities reflect our experience of time, and the way we experience time affects the way we view and change the environment. Thus our experience of time plays a crucial role in whether we succeed in managing environmental change. If we are to think like a planet, the challenge will be to deal with scales and events far removed from everyday human experience. Earth is 4.6 billion years old. That’s a big number to conceptualize and account for in our individual and collective decisions.
Thinking like a planet implies expanding the time and spatial scales of city design and planning, but not simply from local to global and from a few decades to a few centuries. Instead, we will have to include the scales of the geological and biological processes on which our planet operates. Thinking on a planetary scale implies expanding the idea of change. Lynch (1972) reminds us that “the arguments of planning all come down to the management of change.” But what is change?
Human experience of change is often confined to fluctuations within a relatively stable domain. However, planet Earth has displayed rare but abrupt changes and regime shifts in the past. Human experience of abrupt change is limited to marked changes in regional system dynamics, such as altered fire regimes and species extinctions. Yet, since the Industrial Revolution, humans have been pushing the planet outside its stability domain. Will human activities trigger such a global event? We can’t answer that, as we don’t understand enough about how regime shifts propagate across scales, but emerging evidence does suggest that if we continue to disrupt ecosystems and the climate, we face an increasing risk of crossing the thresholds that keep the earth in a relatively stable domain. Until recently, our individual behaviors and collective institutions have been shaped primarily by change that we can envision relatively easily on a human time scale. Our behaviors are not tuned to the slow and imperceptible but systematic changes that can drive dramatic shifts in Earth’s systems.
Planetary shifts can be rapid: the glaciation of the Younger Dryas (abrupt climatic change resulting in severe cold and drought) occurred roughly 11,500 years ago, apparently over only a few decades. Or, it can unfold slowly: the Himalayas took over a million years to form. Shifts can emerge as the results of extreme events like volcanic eruptions, or relatively slow processes, like the movement of tectonic plates. Though we still don’t completely understand the subtle relationship between local and global stability in complex systems, several scientists hypothesize that the increasing complexity and interdependence of socio-economic networks can produce ‘tipping cascades’ and ‘domino dynamics’ in the Earth’s system, leading to unexpected regime shifts (Helbing 2013, Hughes et al 2013).
Planetary Challenges and Opportunities
A planetary perspective for envisioning and building cities that we would like to live in—cities that are livable, resilient, and exciting—provides many challenges and opportunities. To begin, it requires that we expand the spectrum of imaginary archetypes. Current archetypes reflect skewed and often extreme simplifications of how the universe works, ranging from biological determinism to techno-scientific optimism. At best they represent accurate but incomplete accounts of how the world works. How can we reconcile the messages contained in the catastrophic versus optimistic views of the future of Earth? And, how can we hold divergent explanations and arguments as plausibly true? Can we imagine a place where humans have co-evolved with natural systems? What does that world look like? How can we create that place in the face of limited knowledge and uncertainty, holding all these possible futures as plausible options?
The concept of “planetary boundaries” offers a framework for humanity to operate safely on a planetary scale. Rockström et al (2009) developed the concept to inform us about the levels of anthropogenic change that can be sustained, so that we can avoid potential planetary regime shifts that would dramatically affect human wellbeing. The concept neither implies nor rules out planetary-scale tipping points associated with human drivers. Hughes et al (2013) address some of the misconceptions surrounding planetary-scale tipping points, which confuse a system’s rate of change with the presence or absence of a tipping point. To avoid the potential consequences of unpredictable planetary-scale regime shifts, we will have to shift our attention towards drivers and feedbacks rather than focus exclusively on detectable system responses. Rockström et al (2009) identify the nine areas most in need of planetary boundaries: climate change; biodiversity loss; input of nitrogen and phosphorus to soils and waters; stratospheric ozone depletion; ocean acidification; global consumption of freshwater; changes in land use for agriculture; air pollution; and chemical pollution.
A different emphasis is proposed by those scientists who have advanced the concept of planetary opportunities: solution-oriented research to provide realistic, context-specific pathways to a sustainable future (DeFries et al. 2012). The idea is to shift our attention to how human ingenuity can expand the ability to enhance human wellbeing (i.e. food security, human health), while minimizing and reversing environmental impacts. The concept is grounded in human innovation and the human capacity to develop alternative technologies, implement “green” infrastructure, and reconfigure institutional frameworks. The potential opportunities to explore solution-oriented research and policy strategies are amplified in an urbanizing planet, where such solutions can be replicated and can transform the way we build and inhabit the Earth.
Imagining a Resilient Urban Planet
While these different images of the future are both plausible and informative, they speak about the present more than the future. They all represent an extension of the current trajectory as if the future would unfold along the path of our current way of asking questions, and our way of understanding and solving problems. Yes, these perspectives do account for uncertainty but it is defined by the confidence intervals around this trajectory. Both stories are grounded in the inevitable dichotomies of humans and nature, and technology vs. ecology. These views are at best an incomplete account of what is possible: they reflect a limited ability to imagine the future beyond such archetypes. Why can we imagine smart technologies and not smart behaviors, smart institutions, and smart societies? Why think only of technology and not of humans and their societies that co-evolve with Earth?
Understanding the co-evolution of human and natural systems is key to building a resilient society and transforming our habitat. One of the greatest questions in biology today is whether natural selection is the only process driving evolution and what the other potential forces might be. To understand how evolution constructs the mechanisms of life, molecular biologists would argue that we also need to understand the self-organization of genes governing the evolution of cellular processes and influencing evolutionary change (Johnson and Kwan Lam 2010).
To function, life on Earth depends on the close cooperation of multiple elements. Biologists are curious about the properties of complex networks that supply resources, process waste, and regulate the system’s functioning at various scales of biological organization. West et al. (2005) propose that natural selection solved this problem by evolving hierarchical fractal-like branching. Other characteristics of evolvable systems are flexibility (i.e. phenotypic plasticity), and novelty. This capacity for innovation is an essential precondition for any system to function. Gunderson and Holling (2002) have noted that if systems lack the capacity for innovation and novelty, they may become over-connected and dynamically locked, unable to adapt. To be resilient and evolve, they must create new structures and undergo dynamic change. Differentiation, modularity, and cross-scale interactions of organizational structures have been described as key characteristics of systems that are capable of simultaneously adapting and innovating (Allen and Holling 2010).
Understanding the coevolution of human-natural systems will require advances in the evolutionary and social theories that explain how complex societies and cooperation have evolved. What role does human ingenuity play? In Cities as Hybrid Ecosystems I propose that coupled human-natural systems are governed not by natural selection or human ingenuity alone, but by hybrid processes and mechanisms. It is their hybrid nature that makes them unstable and at the same time able to innovate. This novelty of hybrid systems is key to reorganization and renewal. Urbanization modifies the spatial and temporal variability of resources, creates new disturbances, and generates novel competitive interactions among species. This is particularly important because the distribution of ecological functions within and across scales is key to a system’s ability to regenerate and renew itself (Peterson et al. 1998).
The city that thinks like a planet: What does it look like?
In this blog article I have ventured to pose this question, but I will not venture to provide an answer. In fact, no single individual can do that. The answer resides in the collective imagination and evolving behaviors of people of diverse cultures who inhabit a diversity of places on the planet. Humanity has the capacity to think in the long term. Indeed, throughout history, people in societies faced with the prospect of deforestation or other environmental changes have successfully engaged in long-term thinking, as Jared Diamond (2005) reminds us: consider the Tokugawa shoguns, Inca emperors, New Guinea highlanders, or 16th-century German landowners. Or, more recently, the Chinese. Many countries in Europe, and the United States, have dramatically reduced their air pollution even while increasing their use of energy and combustion of fossil fuels. Humans have the intellectual and moral capacity to do even more when tuned into challenging problems and engaged in solving them.
A city that thinks like a planet is not built on already set design solutions or planning strategies. Nor can we assume that the best solution would work equally well across the world regardless of place and time. Instead, such a city will be built on principles that expand its drawing board and collaborative action to include planetary processes and scales, to position humanity in the evolution of Earth. Such a view acknowledges the history of the planet in every element or building block of the urban fabric, from the building to the sidewalk, from the back yard to the park, from the residential street to the highway. It is a view that is curious about understanding who we are and about taking advantage of the novel patterns, processes, and feedbacks that emerge from human and natural interactions. It is a city grounded in the here and the now and simultaneously in the different time and spatial scales of human and natural processes that govern the Earth. A city that thinks like a planet is simultaneously resilient and able to change.
How can such a perspective guide decisions in practice? Urban planners and decision makers, making strategic decisions and investments in public infrastructure, want to know whether certain generic properties or qualities of a city’s architecture and governance could predict its capacity to adapt and transform itself. Can such a shift in perspective provide a new lens, a new way to interpret the evolution of human settlements, and to support humans in successfully adapting to change? Evidence emerging from the study of complex systems points to key properties that expand adaptive capacity while enabling change: self-organization, heterogeneity, modularity, redundancy, and cross-scale interactions.
A co-evolutionary perspective shifts the focus of planning towards human-natural interactions, adaptive feedback mechanisms, and flexible institutional settings. Instead of predefining “solutions” that communities must implement, such a perspective focuses on understanding the “rules of the game,” facilitating self-organization, and carefully balancing top-down and bottom-up management strategies (Helbing 2013). Planning will then rely on principles that expand the heterogeneity of forms and functions in the urban structures and infrastructures that support the city. They support modularity (selected, as opposed to generalized, connectivity) to create interdependent decentralized systems with some level of autonomy to evolve.
In cities across the world, people are setting great examples that will allow for testing such hypotheses. Human perception of time and experience of change is emerging as a key element in the shift to a new perspective for building cities. We must develop experiments to explore what works, and what shifts the time scale of individual and collective behaviors. Several Northern European cities have adopted successful strategies to cut greenhouse gases, and combined them with innovative approaches that will allow them to adapt to the inevitable consequences of climate change. One example is the Copenhagen 2025 Climate Plan, which lays out a path for the city to become the first carbon-neutral city by 2025 through efficient, zero-carbon mobility and buildings. The city is building a subway project that will place 85 percent of its inhabitants within 650 yards of a Metro station. Nearly three-quarters of the emissions reductions will come as people transition to less carbon-intensive ways of producing heat and electricity through a diverse supply of clean energy: biomass, wind, geothermal, and solar. Copenhagen is also one of the first cities to adopt a climate adaptation plan to reduce its vulnerability to the extreme storm events and rising seas expected over the next 100 years.
In the Netherlands, alternative strategies are being explored to allow people to live with the inevitable floods. These strategies involve building on water to develop floating communities, and engineering adaptive beach protections that take advantage of natural processes. The experimental Sand Motor project uses a combination of wind, waves, tides, and sand to replenish eroded coasts. The Dutch Rijkswaterstaat and the South Holland provincial authority placed a large amount of sand into the sea as an artificial peninsula, 1 km long and 2 km wide, allowing waves and currents to redistribute it and build sand dunes and beaches that protect the coast over time.
New York is setting an example for long-term planning by combining adaptation and transformation strategies into its plan to build a resilient city, and Mayor Michael Bloomberg has outlined a $19.5 billion plan to defend the city against rising seas. In many rapidly growing cities of the Global South, similar leadership is emerging. Johannesburg, for example, adopted one of the first climate change adaptation plans, as have Durban and Cape Town in South Africa and Quito, Ecuador, along with Ho Chi Minh City, Vietnam, which has established a partnership with the City of Rotterdam, Netherlands, to develop a resilience strategy.
To think like a planet and explore what is possible we may need to reframe our questions. Instead of asking what is good for the planet, we must ask what is good for a planet inhabited by people. What is a good human habitat on Earth? And instead of seeking optimal solutions, we should identify principles that will inform the diverse communities across the world. The best choices may be temporary, since we do not fully understand the mechanisms of life, nor can we predict the consequences of human action. They may very well vary with place and depend on their own histories. But human action may constrain the choices available for life on earth.
Scenario planning offers a systematic and creative approach to thinking about the future by letting scientists and practitioners expand old mindsets of ecological sciences and decision making. It provides a tool we can use to deal with the limited predictability of changes on the planetary scale and to support decision-making under uncertainty. Scenarios help bring the future into present decisions (Schwartz 1996). They broaden perspectives, prompt new questions, and expose the possibilities for surprise.
Scenarios have several great features: we expect that they can shift people’s attention toward resilience, redefine decision frameworks, expand the boundaries of predictive models, highlight the risks and opportunities of alternative future conditions, monitor early warning signals, and identify robust strategies (Alberti et al 2013).
A fundamental objective of scenario planning is to explore the interactions among uncertain trajectories that would otherwise be overlooked. Scenarios highlight the risks and opportunities of plausible future conditions. The hypothesis is that if planners and decision makers look at multiple divergent scenarios, they will engage in a more creative process for imagining solutions that would be invisible otherwise. Scenarios are narratives of plausible futures; they are not predictions. But they are extremely powerful when combined with predictive modeling. They help expand boundary conditions and provide a systematic approach we can use to deal with intractable uncertainties and assess alternative strategic actions. Scenarios can help us modify model assumptions and assess the sensitivities of model outcomes. Building scenarios can help us highlight gaps in our knowledge and identify the data we need to assess future trajectories.
Scenarios can also shine spotlights on warning signals, allowing decision makers to anticipate unexpected regime shifts and to act in a timely and effective way. They can support decision making in uncertain conditions by providing us a systematic way to assess the robustness of alternative strategies under a set of plausible future conditions. Although we do not know the probable impacts of uncertain futures, scenarios will provide us the basis to assess critical sensitivities, and identify both potential thresholds and irreversible impacts so we can maximize the wellbeing of both humans and our environment.
A new ethic for a hybrid planet
More than half a century ago, Aldo Leopold (1949) introduced the concept of “thinking like a mountain”: he wanted to expand the spatial and temporal scale of land conservation by incorporating the dynamics of the mountain. Defining a Land Ethic was a first step in acknowledging that we are all part of a larger community that includes soils, waters, plants, and animals, and all the components and processes that govern the land, including prey and predators. Now, along the same lines, Paul Hirsch and Bryan Norton (2012), in Ethical Adaptation to Climate Change: Human Virtues of the Future (MIT Press), articulate a new environmental ethic by suggesting that we “think like a planet.” Building on Hirsch and Norton’s idea, we need to expand the dimensional space of our mental models of urban design and planning to the planetary scale.
Note: The metaphor of “thinking like a planet” builds on the idea of cognitive transformation proposed by Paul Hirsch and Bryan Norton (2012) In Ethical Adaptation to Climate Change: Human Virtues of the Future, MIT Press.
Wednesday, September 04. 2013
Iconoclastic economist Herman Daly helped popularize the term “steady state economics.” It’s a concept many makers are already familiar with whether they know it or not. You can read all about it here, but at its essence steady state economics is a closed loop system that mimics nature in that it does not need new inputs or materials to keep running. It runs at a steady state and doesn’t grow lest it overshoot the carrying capacity of the natural resources on which it depends. Repair, repurposing, and recycling are what make the system work.
Of course, we live in the opposite system, one that requires new resources to build new things to replace last year’s model and all the stuff we throw away because it’s broken or out of style. One of the features of this model is “planned obsolescence.” It’s a great system for getting people to buy new products, but it’s not so great for the planet (see the Great Pacific Garbage Patch, landfill leachate, and climate change for examples).
But like I said, many makers already know the virtues of repurposing and fixing “broken” stuff. One of my favorite examples is the humble Fixers Collective. They describe themselves as an “ongoing social experiment encouraging improvisational fixing and mending and fighting planned obsolescence.” The New York-based group gets together to fix broken appliances and electronics and to give them a second life. The project began as an art project in 2008, but lived on when participants realized they liked the experience of getting together to fix stuff and teach others.
The Fixers Collective will be returning to Maker Faire New York this month, inviting attendees to bring their broken stuff and learn how to fix it. But the group realizes many people don’t want to lug a broken appliance to the fair, so it may also have appliances on hand that people can take apart to see how they work and what’s inside.
Program director Vincent Lai says reusing or fixing objects is often better than recycling, citing figures that only 40 to 60 percent of recycled material avoids the landfill. Beyond that, he says it’s fun to watch the “eureka moment” when participants pull the chain on a formerly broken lamp they learned to fix themselves.
Even if you aren’t ready to embrace steady state economics, it’s empowering to know you can fix that old toaster or lamp sitting in your garage. The Fixers Collective can show you how. While the Fixers Collective is based in New York, there are other like-minded groups all over. Here’s a map.
Tuesday, September 03. 2013
Wired has an excellent article, We Need a Fixer (Not Just a Maker) Movement, that focuses on how repairing electronics is more than simply good for the environment; it's good for brains, it's good for our souls:
Read the full article here -- it is sure to inspire you. And just in case you need more inspiration, here is what we at TreeHugger have had to say about the importance of repairing over replacing, and DIYing electronics:
Why Gadget Repairability Is So Damn Important
The DIY Ethic and Creating Technology Independence
How DIY Electronics Benefit The Environment
The DIY Ethic and Modern Technology: Why taking ownership of your electronics is essential
How The DIY Electronics Trend Is Empowering People, Communities, Businesses
Wednesday, August 07. 2013
By Megan Treacy
Undeniably one of the biggest stories of the year has been the leak about the NSA PRISM program, which has been monitoring American citizens' communications. Many people have been appalled by this revelation, but it turns out there is an environmentally appalling part of this spying program too. More details have been released about NSA's new Intelligence Community Comprehensive National Cybersecurity Initiative Data Center, otherwise known as that massive data center being built by the agency in Bluffdale, Utah.
Turns out that collecting tons of information in the form of phone calls, emails and web searches is an energy- and water-hungry business. According to reports, the one million square-foot facility will house 100,000 square feet of data-storing servers and will use 1.7 million gallons of water per day to keep those servers cool.
The data center will account for one percent of all water use in the area and the city of Bluffdale is looking for additional water sources for when the facility is finished in September.
It won't be an energy-sipper either, but that was obvious from the size of the place. The facility will require 65 megawatts of power, roughly the amount used by 65,000 homes. It will have its own power substation and back-up diesel power generators.
The crazy thing is that this gigantic data center isn't quite enough. The NSA is also building another data center in Fort Meade, Maryland that will be two-thirds the size of the mega center, but that's still pretty darn big.
Thursday, July 11. 2013
Via Next Nature
Ever notice how ant colonies so successfully explore and exploit resources in the world … to find food at 4th of July picnics, for example? You may find it annoying. But as an ecologist who studies ants and collective behavior, I think it’s intriguing — especially the fact that it’s all done without any central control.
What’s especially remarkable: the close parallels between ant colonies’ networks and human-engineered ones. One example is the “Anternet”, where we, a group of researchers at Stanford, found that the algorithm desert ants use to regulate foraging is like the Transmission Control Protocol (TCP) used to regulate data traffic on the internet. Both ant and human networks use positive feedback: either from acknowledgements that trigger the transmission of the next data packet, or from food-laden returning foragers that trigger the exit of another outgoing forager.
This research led some to marvel at the ingenuity of ants, able to invent systems familiar to us: wow, ants have been using internet algorithms for millions of years!
But insect behavior mimicking human networks — another example is the ant colony optimization algorithm’s ant-like solution to the traveling salesman problem — is actually not what’s most interesting about ant networks. What’s far more interesting are the parallels in the other direction: what have the ants worked out that we humans haven’t thought of yet?
During the 130 million years or so that ants have been around, evolution has tuned ant colony algorithms to deal with the variability and constraints set by specific environments.
Ant colonies use dynamic networks of brief interactions to adjust to changing conditions. No individual ant knows what’s going on. Each ant just keeps track of its recent experience meeting other ants, either in one-on-one encounters when ants touch antennae, or when an ant encounters a chemical deposited by another.
Such networks have made possible the phenomenal diversity and abundance of more than 11,000 ant species in every conceivable habitat on Earth. So Anternet, and other ant networks, have a lot to teach us. Ant protocols may suggest ways to build our own information networks…
Dealing with High Operating Costs
Harvester ant colonies in the desert must spend water to get water. The ants lose water when foraging in the hot sun, and get their water by metabolizing it out of the seeds that they collect. Since colonies store seeds, their system of positive feedback doesn’t waste foraging effort when water costs are high — even if it means they leave some seeds “on the table” (or rather, ground) to be obtained on another, more humid day.
In this way, the Anternet allows the colony to deal with high operating costs. In the internet, the TCP protocol also prevents the system from sending data out on the internet when there’s no bandwidth available. Effort would be wasted if the message is lost, so it’s not worth sending it out unless it’s certain to reach its destination.
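The feedback loop described above can be sketched as a toy simulation (all names, parameters, and probabilities here are invented for illustration; this is not the researchers' actual model): each food-laden returning forager triggers one new departure, the way a TCP acknowledgement clocks out the next packet, so when food is scarce the outflow throttles itself with no central controller.

```python
import random

def simulate_foraging(food_richness, steps=500, trip_time=10, seed=0):
    """Toy 'Anternet' sketch. Each forager is just a countdown of the
    steps left in its trip; a food-laden return (probability
    food_richness) triggers exactly one new departure."""
    rng = random.Random(seed)
    in_transit = [trip_time] * 5   # a few scouts bootstrap the loop
    trips_started = 5
    for _ in range(steps):
        in_transit = [t - 1 for t in in_transit]
        returned = [t for t in in_transit if t <= 0]
        in_transit = [t for t in in_transit if t > 0]
        for _ in returned:
            if rng.random() < food_richness:   # food-laden return...
                in_transit.append(trip_time)   # ...triggers a new trip
                trips_started += 1
    return trips_started

busy = simulate_foraging(food_richness=0.95)  # abundant food: feedback sustains traffic
lean = simulate_foraging(food_richness=0.30)  # scarce food: outflow collapses on its own
```

No ant counts neighbors or consults a schedule; the aggregate rate simply falls out of per-return feedback, which is the point of the analogy.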
More recently, I’ve shown how natural selection is currently optimizing the Anternet algorithm. I’ve been following a population of 300 harvester ant colonies for more than 25 years, and by using genetic fingerprinting we figured out which colonies had more offspring colonies.
Colonies store food inside the nest as a survival tactic. On especially hot days, colonies that are likely to lay low instead of collecting more food are the ones that have more offspring colonies over their 25-year lifetimes. Restraint therefore emerges as the best strategy at the colony level. Long-lived colonies in the desert regulate their behavior not to maximize or optimize food intake, but instead to keep going without wasting resources.
In the face of scarcity, the algorithm that regulates the flow of ants is evolving toward minimizing operating costs rather than immediate accumulation. This is a sustainable strategy for any system, like a desert ant colony or the mobile internet, where it’s essential to achieve long-term reliability while avoiding wasted effort.
Scaling Up from Small to Large Systems
What happens when a system scales up? Like human-engineered systems, ant systems must be robust to scale up as the colony grows, and they have to be able to tolerate the failure of individual components.
Since large systems allow for some messiness, the ideal solutions utilize the contributions of each additional ant in such a way that the benefit of an extra worker outweighs the cost of producing and feeding one.
The tools that serve large colonies well, therefore, are redundancy and minimal information. Enormous ant colonies function using very simple interactions among nameless ants without any address.
In engineered systems we too are searching for ways to ensure reliable outcomes, as our networks scale, by using cheap operations that make use of randomness. Elegant top-down designs are appealing, but the robustness of ant algorithms shows that tolerating imperfection sometimes leads to better solutions.
Optimizing for First-Mover Advantage
The diversity of ant algorithms shows how evolution has responded to different environmental constraints. When operating costs are low and colonies seek an ephemeral delicacy — like flower nectar or watermelon rinds — searching speed is essential if the colony is to capture the prize before it dries up or is taken away.
Since ant colonies compete with each other and many are out looking for the same food, the first colony to arrive might have the best chance of holding on to the food and keeping the other ants away.
How does a colony achieve this first-mover advantage without any central control? The challenge in this situation is for the colony to manage the flow of ants so it has an ant almost everywhere almost all the time. The goal is to increase the likelihood that some ant will be close enough to encounter whatever happens to show up.
One strategy ants use (familiar from our own data networks) is to set up a circuit of permanent highways — like a network of cell phone towers — from which ants search locally. The invasive Argentine ants are experts at this; they’ll find any crumb that lands on your kitchen counter.
The Argentine ants also adjust their paths: they shift from a close-to-random walk when there are lots of ants around, leading each ant to search a small area thoroughly, to a straighter path when there are few ants around, allowing the whole group to cover more ground.
Like a distributed demand-response network, the aggregated responses of individual ants to local conditions generate the outcome for the whole system, without any centralized direction or control.
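The density-dependent search just described can be sketched as a correlated random walk (the turn angles, step counts, and averaging are made-up illustration, not measured ant behavior): sharp turns keep a crowded ant near its starting point, while a lone ant's straighter path carries it much farther.

```python
import math
import random

def search_path(n_steps, crowded, seed):
    """Toy sketch: in a crowd, turn sharply (near-random walk, thorough
    local search); when alone, walk straighter to cover more ground.
    Returns net displacement from the starting point."""
    rng = random.Random(seed)
    max_turn = math.pi if crowded else math.pi / 12  # max turn per step
    x = y = heading = 0.0
    for _ in range(n_steps):
        heading += rng.uniform(-max_turn, max_turn)
        x += math.cos(heading)
        y += math.sin(heading)
    return math.hypot(x, y)

def mean_displacement(crowded, trials=20, n_steps=600):
    """Average over several seeds so the contrast is not a fluke."""
    return sum(search_path(n_steps, crowded, s) for s in range(trials)) / trials

local = mean_displacement(crowded=True)    # stays near the start
ranging = mean_displacement(crowded=False) # drifts far afield
```

The only "control knob" is each ant's own turning angle in response to local crowding, yet the colony-level outcome flips between thorough local search and wide-area coverage.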
Addressing Security Breaches and Disasters
In the tropics, where hundreds of ant species are packed close together and competing for resources, colonies must deal with security problems. This has led to the evolution of security protocols that use local information for intrusion detection and for response.
One colony might use (“borrow” or “steal”, as humans would say) information from another, such as chemical trails or the density of ants, to find and use resources.
Rather than attempting to prevent incursions completely, however, ants create loose, stochastic identity systems in which one species regulates its behavior in response to the level of incursion from another.
There are obvious parallels with computer security. It’s becoming clear (consider recent events!) that we too will need to implement local evaluation and repair of intrusions, tolerating some level of imperfection. The ants have found ways to let their systems respond to each other’s incursions, without attempting to set up a central authority that regulates hacks.
Some of our networks seem to be moving toward using methods deployed by the ants.
Take the disaster recovery protocols of ants that forage in trees where branches can break, so the threat of rupture is high. A ring network, with signals or ants flowing in both directions, allows for rapid recovery here; after a break in the flow in one direction, the flow in the other direction can re-establish a link.
Similarly, early fiber-optic cable networks were often disrupted by farm machinery and other digging: one break could bring down the system because it would isolate every node downstream. Engineers soon discovered, as the ants already had, that ring networks are easier to repair.
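The ring-recovery logic is easy to verify in a few lines (a generic sketch, not any specific ant or cable protocol): with one broken link, flow in the other direction still reaches every node; only a second break partitions the ring.

```python
def reachable(n, broken, start=0):
    """Nodes 0..n-1 arranged in a ring; links join i and (i+1) % n.
    Returns the set of nodes reachable from `start` after the links
    listed in `broken` are cut."""
    links = {frozenset((i, (i + 1) % n)) for i in range(n)}
    links -= {frozenset(b) for b in broken}
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nbr in ((node - 1) % n, (node + 1) % n):
            if frozenset((node, nbr)) in links and nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return seen

# One break: flow re-routes the other way and the whole ring survives.
assert reachable(8, broken=[(2, 3)]) == set(range(8))
# Two breaks: nodes 3, 4 and 5 are cut off from node 0.
assert reachable(8, broken=[(2, 3), (5, 6)]) == {0, 1, 2, 6, 7}
```

A single redundant direction is all the robustness needed against the most common failure, which is why both the ants and the engineers settled on it.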
Our networks will continue to change and evolve. By examining and comparing the algorithms used by ants in the desert, in the tropical forest, and the invasive species that visit our kitchens, it’s already obvious that the ants have come up with new solutions that can teach us something about how we should engineer our systems.
Using simple interactions like the brief touch of antennae — not unlike our fleeting status updates in ephemeral social networks — colonies make networks that respond to a world that constantly changes, with resources that show up in patches and then disappear. These networks are easy to repair and can grow or shrink.
Ant colonies have been used throughout history as models of industry, obedience, and wisdom. Although the ants themselves can be indolent, inconsiderate of others, and downright stupid, we have much to learn from ant colony protocols. The ants have evolved ways of working together that we haven’t yet dreamed of.
Not only do the ants build amazing architectures, they have also been using algorithms and networks for millennia to achieve quite sustainable results and behaviors. As the article suggests, should we learn from the ants?
Thursday, July 04. 2013
The U.S. Energy Information Administration reviews big changes in energy use since the Declaration of Independence.
By Kevin Bullis
Energy independence: Since the colonies parted from Britain there have been big changes in energy use.
It’s easy to forget just how recently we started using fossil fuels in large amounts. In honor of the July 4th holiday, the U.S. Energy Information Administration has produced a chart showing how rapidly the country shifted from using wood almost exclusively as an energy source to using first coal, then petroleum and natural gas.
Here are a couple of notable things about the chart. The first is the obvious staying power of coal (see “The Enduring Technology of Coal”).
Coal wasn’t used in significant amounts until the mid-1800s, but then it increases quickly (and with it, overall energy consumption increases by about 5 times). When oil is introduced, it seems to displace coal, leading to a sharp drop in coal consumption. But coal use quickly recovers. A similar drop occurs when natural gas consumption starts to rise. But within a couple of decades coal use is growing again.
Near the end of the chart coal use drops off again as natural gas production surges–a result of fracking technology. What the chart doesn’t show is that the EIA expects coal consumption to go up again this year. The stuff is cheap, and we seem to keep finding ways to use it. President Obama recently praised the reduction in carbon dioxide emissions that the surge of natural gas production enabled (see “A Drop in U.S. CO2 Emissions” and “Obama Orders EPA to Regulate Power Plants in Wide-Ranging Climate Plan”). Given the resilience of coal, though, it’s hard to be optimistic that the decreased rate of emissions will persist—absent regulations that prevent it.
One other interesting bit. Renewables such as wind and solar power now produce more energy than was consumed in the mid-1800s. So if we want a society that runs completely on these renewables, all we have to do is reduce the population to what it was then, only use as much energy as they did, stop flying airplanes (big ones require oil), stop industrial processes that require energy in forms other than electricity, and only drive electric vehicles or ride horses. I may have left something out.
The good news is renewables are increasing fast. But if history is a guide, the introduction of a new energy source doesn’t cause the other sources of energy to decrease, at least not in the long run. Even wood consumption is close to what it was in the 1800s, even though wood is in many ways less convenient than fossil fuels. Introducing new sources of energy seems to allow overall energy consumption to increase.
Absent regulations or political crises that cause the cost of fossil fuels to rise, as technological advances make renewable energy cheaper we’ll use it more, but we’ll likely keep using more of the other sources of energy, too.
Indeed, the EIA predicts that in 2040, 75% of U.S. energy will still come from oil, coal, and natural gas.
Thursday, May 02. 2013
Smart grid technology has been implemented in many places, but Florida’s new deployment is the first full-scale system.
By Kevin Bullis on May 2, 2013
Smart power: Andrew Brown, an engineer at Florida Power & Light, monitors equipment in one of the utility’s smart grid diagnostic centers.
The first comprehensive, large-scale smart grid is now operating. The $800 million project, built in Florida, has made power outages shorter and less frequent, and helped some customers save money, according to the utility that operates it.
Smart grids should be far more resilient than conventional grids, which is important for surviving storms, and make it easier to install more intermittent sources of energy like solar power (see “China Tests a Small Smart Electric Grid” and “On the Smart Grid, a Watt Saved Is a Watt Earned”). The Recovery Act of 2009 gave a vital boost to the development of smart grid technology, and the Florida grid was built with $200 million from the U.S. Department of Energy made available through the Recovery Act.
Dozens of utilities are building smart grids—or at least installing some smart grid components—but no one had put together all of the pieces at a large scale. Florida Power & Light’s project incorporates a wide variety of devices for monitoring and controlling every aspect of the grid, not just, say, smart meters in people’s homes.
“What is different is the breadth of what FPL’s done,” says Eric Dresselhuys, executive vice president of global development at Silver Spring Networks, a company that’s setting up smart grids around the world, and installed the network infrastructure for Florida Power & Light (see “Headed into an IPO, Smart Grid Company Struggles for Profit”).
Many utilities are installing smart meters—Pacific Gas & Electric in California has installed twice as many as FPL, for example. But while these are important, the flexibility and resilience that the smart grid promises depends on networking those together with thousands of sensors at key points in the grid— substations, transformers, local distribution lines, and high voltage transmission lines. (A project in Houston is similar in scope, but involves half as many customers, and covers somewhat less of the grid.)
In FPL’s system, devices at all of these places are networked—data jumps from device to device until it reaches a router that sends it back to the utility—and that makes it possible to sense problems before they cause an outage, and to limit the extent and duration of outages that still occur (see “The Challenges of Big Data on the Smart Grid”). The project involved 4.5 million smart meters and over 10,000 other devices on the grid.
The project was completed just last week, so data about the impact of the whole system isn’t available yet. But parts of the smart grid have been operating for a year or more, and there are examples of improved operation. Customers can track their energy usage by the hour on a website that organizes data from smart meters. One customer identified a problem with his air conditioner this way, when he saw a jump in electricity consumption compared to the previous year in similar weather, says Brian Olnick, vice president of smart grid solutions at Florida Power & Light.
The meters have also cut the duration of power outages. Often power outages are caused by problems within a home, like a tripped circuit breaker. Instead of dispatching a crew to investigate, which could take hours, it is possible to resolve the issue remotely. That happened 42,000 times last year, reducing the duration of outages by about two hours in each case, Olnick says.
The utility also installed sensors that can continually monitor gases produced by transformers to “determine whether the transformer is healthy, is becoming sick, or is about to experience an outage,” says Mark Hura, global smart grid commercial leader at GE, which makes the sensor.
Ordinarily, utilities only check large transformers once every six months or less, he says. The process involves taking an oil sample and sending it to the lab. In one case this year, the new sensor system identified an ailing transformer in time to prevent a power outage that could have affected 45,000 people. Similar devices allowed the utility to identify 400 ailing neighborhood-level transformers before they failed.
Smart grid technology is having an impact elsewhere. After Hurricane Sandy, sensors helped utility workers in some areas restore power faster than in others. One problem smart grids address is nested power outages—when smaller problems are masked by an outage that hits a large area. In a conventional system, after utility workers fix the larger problem, it can take hours for them to realize that a downed line has cut off power to a small area. With the smart grid, utility workers can ping sensors at smart meters or power lines before they leave an area, identifying these smaller outages.
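The last step described above — pinging meters before crews leave a repaired area — reduces to a few lines of logic (meter names and the dictionary interface are invented for illustration; a real utility system is of course far richer):

```python
def nested_outages(ping_results, repaired_area):
    """After the large outage is fixed, ping every smart meter in the
    repaired area. Meters that still don't answer flag the smaller,
    'nested' outages that a conventional grid would only discover
    hours later, via customer calls.
    ping_results maps meter id -> whether the meter answered."""
    return sorted(m for m in repaired_area
                  if not ping_results.get(m, False))

# A feeder with five meters; two stay dark after the main repair.
pings = {"m1": True, "m2": False, "m3": True, "m4": False, "m5": True}
still_out = nested_outages(pings, ["m1", "m2", "m3", "m4", "m5"])
assert still_out == ["m2", "m4"]
```

The crew dispatches directly to the flagged meters instead of waiting for the smaller outages to be reported.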
And smart grid devices are helping utilities identify problems that could otherwise go undiagnosed for years. In Chicago, for example, new voltage monitors indicated that a neighborhood was getting the wrong voltage, a problem that could wear out appliances. The fix took a few minutes.
As more renewable energy is installed, the smart grid will make it easier for utilities to keep the lights on. Without local sensors, it’s difficult for them to know how much power is coming from solar panels—or how much backup they need to have available in case clouds roll in and that power drops.
But whether the nearly $1 billion investment in smart grid infrastructure will pay for itself remains to be seen. The DOE is preparing reports on the impact of the technology to be published this year and next. Smart grid technology is also raising questions about security, since the networks could offer hackers new targets (see “Hacking the Smart Grid”).
This is good news! As many countries are now looking to build smart grids, let's hope that the first outcomes of this implementation will be positive.
Tuesday, April 30. 2013
By David Talbot on April 16, 2013
Storing video and other files more intelligently reduces the demand on servers in a data center.
Worldwide, data centers consume huge and growing amounts of electricity.
New research suggests that data centers could significantly cut their electricity usage simply by storing fewer copies of files, especially videos.
For now the work is theoretical, but over the next year, researchers at Alcatel-Lucent’s Bell Labs and MIT plan to test the idea, with an eye to eventually commercializing the technology. It could be implemented as software within existing facilities. “This approach is a very promising way to improve the efficiency of data centers,” says Emina Soljanin, a researcher at Bell Labs who participated in the work. “It is not a panacea, but it is significant, and there is no particular reason that it couldn’t be commercialized fairly quickly.”
With the new technology, any individual data center could be expected to save 35 percent in capacity and electricity costs—about $2.8 million a year or $18 million over the lifetime of the center, says Muriel Médard, a professor at MIT’s Research Laboratory of Electronics, who led the work and recently conducted the cost analysis.
So-called storage area networks within data center servers rely on a tremendous amount of redundancy to make sure that downloading videos and other content is a smooth, unbroken experience for consumers. Portions of a given video are stored on different disk drives in a data center, with each sequential piece cued up and buffered on your computer shortly before it’s needed. In addition, copies of each portion are stored on different drives, to provide a backup in case any single drive is jammed up. A single data center often serves millions of video requests at the same time.
The new technology, called network coding, cuts way back on the redundancy without sacrificing the smooth experience. Algorithms transform the data that makes up a video into a series of mathematical functions that can, if needed, be solved not just for that piece of the video, but also for different parts. This provides a form of backup that doesn’t rely on keeping complete copies of the data. Software at the data center could simply encode the data as it is stored and decode it as consumers request it.
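The simplest possible instance of this idea can be shown with XOR parity (real systems, including the Bell Labs/MIT work described here, use random linear codes over larger fields, so treat this as a minimal sketch of the principle, not their algorithm): instead of storing a full spare copy of each chunk, store one combined "function" of the two and solve it for whichever chunk is lost.

```python
def xor(a, b):
    """Bytewise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two equal-length pieces of a video file.
A = b"chunk-A!"
B = b"chunk-B?"

# Full mirroring would store four chunks (A, B, and a copy of each).
# Coded storage keeps three: A, B, and the function A xor B.
parity = xor(A, B)

# Drive holding A dies: solve the stored function for A.
assert xor(parity, B) == A
# Drive holding B dies: the same parity block recovers B instead.
assert xor(parity, A) == B
```

One coded block backs up both chunks at once, which is exactly where the storage (and hence energy) saving comes from; larger codes extend the same trick so that any k of n stored pieces reconstruct the original data.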
Médard’s group previously proposed a similar technique for boosting wireless bandwidth (see “A Bandwidth Breakthrough”). That technology deals with a different problem: wireless networks waste a lot of bandwidth on back-and-forth traffic to recover dropped portions of a signal, called packets. If mathematical functions describing those packets are sent in place of the packets themselves, it becomes unnecessary to re-send a dropped packet; a mobile device can solve for the missing packet with minimal processing. That technology, which improves capacity up to tenfold, is currently being licensed to wireless carriers, she says.
Between the electricity needed to power computers and the air conditioning required to cool them, data centers worldwide consume so much energy that by 2020 they will cause more greenhouse-gas emissions than global air travel, according to the consulting firm McKinsey.
Smarter software to manage them has already proved to be a huge boon (see “A New Net”). Many companies are building data centers that use renewable energy and smarter energy management systems (see “The Little Secrets Behind Apple’s Green Data Centers”). And there are a number of ways to make chips and software operate more efficiently (see “Rethinking Energy Use in Data Centers”). But network coding could make a big contribution by cutting down on the extra disk drives—each needing energy and cooling—that cloud storage providers now rely on to ensure reliability.
This is not the first time that network coding has been proposed for data centers. But past work was geared toward recovering lost data. In this case, Médard says, “we have considered the use of coding to improve performance under normal operating conditions, with enhanced reliability a natural by-product.”
Still, a link in the context of our workshop at Tsinghua University, related to data storage at large.
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as archive, references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.
| rblg on Twitter