Monday, December 10. 2012
The Relevance of Algorithms
Via Culture Digitally (via Christian Babski) By Tarleton Gillespie -----
I’m really excited to share my new essay, “The Relevance of Algorithms,” with those of you who are interested in such things. It’s been a treat to get to think through the issues surrounding algorithms and their place in public culture and knowledge, with some of the participants in Culture Digitally (here’s the full litany: Braun, Gillespie, Striphas, Thomas, the third CD podcast, and Anderson‘s post just last week), as well as with panelists and attendees at the recent 4S and AoIR conferences, with colleagues at Microsoft Research, and with all of you who are gravitating towards these issues in your scholarship right now. The motivation of the essay was two-fold: first, in my research on online platforms and their efforts to manage what they deem to be “bad content,” I’m finding an emerging array of algorithmic techniques being deployed: for either locating and removing sex, violence, and other offenses, or (more troublingly) for quietly choreographing some users away from questionable materials while keeping them available for others. Second, I’ve been helping to shepherd along this anthology, and wanted my contribution to be in the spirit of its aims: to take one step back from my research to articulate an emerging issue of concern or theoretical insight that (I hope) will be of value to my colleagues in communication, sociology, science & technology studies, and information science. The anthology will ideally be out in Fall 2013. And we’re still finalizing the subtitle. So here’s the best citation I have.
Below is the introduction, to give you a taste.

Algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. Search engines help us navigate massive databases of information, or the entire web. Recommendation algorithms map our preferences against others, suggesting new or forgotten bits of culture for us to encounter. Algorithms manage our interactions on social networking sites, highlighting the news of one friend while excluding another’s. Algorithms designed to calculate what is “hot” or “trending” or “most discussed” skim the cream from the seemingly boundless chatter that’s on offer. Together, these algorithms not only help us find information, they provide a means to know what there is to know and how to know it, to participate in social and political discourse, and to familiarize ourselves with the publics in which we participate. They are now a key logic governing the flows of information on which we depend, with the “power to enable and assign meaningfulness, managing how information is perceived by users, the ‘distribution of the sensible’” (Langlois 2012). Algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations. The procedures name both a problem and the steps by which it should be solved. Instructions for navigation may be considered an algorithm, or the mathematical formulas required to predict the movement of a celestial body across the sky. “Algorithms do things, and their syntax embodies a command structure to enable this to happen” (Goffey 2008, 17). We might think of computers, then, fundamentally as algorithm machines — designed to store and read data, apply mathematical procedures to it in a controlled fashion, and offer new information as the output.
But as we have embraced computational tools as our primary media of expression, and have made not just mathematics but all information digital, we are subjecting human discourse and knowledge to these procedural logics that undergird all computation. And there are specific implications when we use algorithms to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions. These algorithms, which I’ll call public relevance algorithms, are — by the very same mathematical procedures — producing and certifying knowledge. The algorithmic assessment of information, then, represents a particular knowledge logic, one built on specific presumptions about what knowledge is and how one should identify its most relevant components. That we are now turning to algorithms to identify what we need to know is as momentous as having relied on credentialed experts, the scientific method, common sense, or the word of God.
What we need is an interrogation of algorithms as a key feature of our information ecosystem (Anderson 2011), and of the cultural forms emerging in their shadows (Striphas 2010), with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. I will highlight six dimensions of public relevance algorithms that have political valence:
Considering how fast these technologies and the uses to which they are put are changing, this list must be taken as provisional, not exhaustive. But as I see it, these are the most important lines of inquiry into understanding algorithms as emerging tools of public knowledge and discourse. It would also be seductively easy to get this wrong. In attempting to say something of substance about the way algorithms are shifting our public discourse, we must firmly resist putting the technology in the explanatory driver’s seat. While recent sociological study of the Internet has labored to undo the simplistic technological determinism that plagued earlier work, that determinism remains an alluring analytical stance. A sociological analysis must not conceive of algorithms as abstract, technical achievements, but must unpack the warm human and institutional choices that lie behind these cold mechanisms. I suspect that a more fruitful approach will turn as much to the sociology of knowledge as to the sociology of technology — to see how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known. This might help reveal that the seemingly solid algorithm is in fact a fragile accomplishment.
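The essay’s broad definition of an algorithm — an encoded procedure for transforming input data into a desired output, based on specified calculations — can be made concrete with a toy “trending” scorer. This sketch is purely illustrative and not from the essay or any real platform; the scoring formula (log of mentions, decayed by age) is invented to show how the choice of calculation itself decides what gets to count as “trending”:

```python
# Toy "public relevance algorithm": input data (mention counts and ages)
# is transformed into an output (a ranking) by a specified calculation.
# The formula is a made-up illustration, not any platform's actual method.
from math import log

def trending_score(mentions: int, age_hours: float) -> float:
    """Score an item by volume of mentions, discounted by its age."""
    return log(1 + mentions) / (1 + age_hours)

# Hypothetical input: topic -> (mentions, age in hours)
items = {"topic_a": (500, 2.0), "topic_b": (800, 24.0), "topic_c": (50, 0.5)}
ranked = sorted(items, key=lambda k: trending_score(*items[k]), reverse=True)
print(ranked)  # → ['topic_c', 'topic_a', 'topic_b']
```

Note that topic_b has the most mentions yet ranks last: the recency weighting built into the formula, not the raw data, determines the outcome — a small instance of the “knowledge logic” and political valence the essay describes.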
~ ~ ~ Here is the full article [PDF]. Please feel free to share it, or point people to this post.
Posted by Patrick Keller
in Culture & society, Science & technology
at
17:40
Defined tags for this entry: code, computing, culture & society, data, information, knowledge, science & technology, theory, thinkers, thinking
Friday, December 07. 2012
Apple Data Center Does Fuel Cell Industry a Huge Favor
----- Apple doubles the size of the fuel cell installation at its new data center, a potential new energy model for cloud computing.
One of the ways Apple’s new data center will save energy is by using a white roof that reflects heat. Credit: Apple.

Apple is doubling the size of its fuel cell installation at its new North Carolina data center, making it a proving ground for large-scale on-site energy at data centers. In papers filed with the state’s utilities commission last month, Apple indicated that it intends to expand capacity from five megawatts of fuel cells, which are now running, to a maximum of 10 megawatts. The filing was originally spotted by the Charlotte News Observer.

Apple says the much-watched project (Wired actually hired a pilot to take photos of it) will be one of the most environmentally benign data centers ever built, because it will use several energy-efficiency tricks and run on biogas-powered fuel cells and a giant 20-megawatt solar array. Beyond Apple’s eco-bragging rights, this data center (and one being built by eBay) should provide valuable insights to the rest of the cloud computing industry. Apple likely won’t give hard numbers on expenses but, if all works as planned, it will validate data center fuel cells for reliable power generation at this scale.

Stationary fuel cells are certainly well proven, but multi-megawatt installations are pretty rare. Data center customers for Bloom Energy, which is supplying Apple in North Carolina, typically have far less than a megawatt installed. Each Bloom Energy Server, which takes up about a full parking space, produces 200 kilowatts. By going to 10 megawatts of capacity, Apple can claim the largest fuel cell-powered data center, passing eBay, which earlier this year announced plans for six megawatts’ worth of fuel cells at a data center in Utah. (See: eBay Goes All-in With Fuel Cell-Powered Data Center.)

It also opens up new ways of doing business. Using fuel cells at this scale potentially changes how data center operators use grid power and traditional backup diesel generators.
With Apple’s combination of solar power and fuel cells, it appears the facility will be able to produce more than the 20 megawatts it needs at full steam. That means Apple could sell power back to the utility or even operate independently and use the grid as backup power—a completely new configuration. Bloom Energy’s top data center executive, Peter Gross, told Data Center Insider that data center servers could have two power cords—one from the grid and one from the fuel cells. In the event of a power failure, those fuel cells could keep the servers humming, rather than the backup diesel generators.

Apple hasn’t disclosed how much it’s paying for all this, but the utility commission filing indicates it plans to monetize its choice of biogas, rather than natural gas. The documents show that Apple is contracting with a separate company to procure biogas, or methane given off by landfills. Because it’s a renewable source, Apple can receive compensation for renewable energy credits.

Proving that fuel cells and solar work for a mission-critical workload at this scale is one thing. Whether it makes economic sense for companies other than cash-rich Apple and eBay is something different. Apple and eBay could save some money by installing fewer diesel generators. Investing in solar also gives companies a fixed electricity cost for years ahead, shielding them from spikes in utilities’ power prices. But some of the most valuable information from these projects will be how the numbers pencil out. That might help conservative data center designers look at these technologies, which are substantially cleaner than the grid, more seriously. Both operationally and financially, there’s a lot to learn down in Maiden. Let’s hope Apple is a bit more forthcoming about its data center than it is about what’s in the next iPhone.
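The article’s figures can be sanity-checked with a quick back-of-envelope calculation. This is only a sketch using the reported nameplate capacities; it ignores availability, load factors, and solar intermittency, which the article does not disclose:

```python
# Back-of-envelope check of the capacities reported in the article.
# Assumption: all figures are nameplate (peak) capacities.
BLOOM_SERVER_KW = 200        # per the article: one Bloom Energy Server ≈ 200 kW
FUEL_CELL_MW = 10            # Apple's expanded fuel cell capacity
SOLAR_MW = 20                # Apple's solar array
DEMAND_MW = 20               # reported "full steam" demand of the data center

servers = FUEL_CELL_MW * 1000 // BLOOM_SERVER_KW   # number of Bloom units
onsite_mw = FUEL_CELL_MW + SOLAR_MW                # total on-site generation
surplus_mw = onsite_mw - DEMAND_MW                 # headroom over demand

print(servers, onsite_mw, surplus_mw)  # → 50 30 10
```

Fifty Bloom units and a nominal 10 MW surplus over demand are consistent with the article’s claim that the facility could, on paper, sell power back to the utility or treat the grid as backup.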
Personal comment:
This looks like one of several (though still far too few) implementations of "the third industrial revolution" (J. Rifkin), definitely a book to read to foresee a path toward a new (economic) model of clean energy and society, in which the information-based Internet will (or might) combine with an energy-based Internet, and in which energy will start to become an (abundant) solution rather than a problem.
Posted by Patrick Keller
in Architecture, Science & technology, Sustainability, Territory
at
09:28
Defined tags for this entry: architecture, data, energy, information, science & technology, sustainability, territory
Ducted Wind Turbines
In the first part of the XIXth century, we saw our landscape gradually populated by water towers. They came along with the advance of railways and steam trains, as well as with the delivery of water under pressure to households, offices, factories, etc. They took their part in the implementation of the first industrial revolution.
Will we now see our landscape progressively transformed by some new types of energy constructions (i.e. above, a "duct turbine" designed to increase wind velocity, which we could also call a "wind tower"), and will these become the new landmarks of our (still to come) sustainable society? Could we combine this type of structure with some other program? With housing, or with data centers, with other new and "iconic structures" of our still early century? Should these types of structures also inhabit hurricanes' usual paths and collect huge amounts of energy?
More about the "ducted wind turbine" on MIT Technology Review.
Tuesday, December 04. 2012
The Coldscape
Via Cabinet ----- By Nicola Twilley
More than three-quarters of the food consumed in the United States today is processed, packaged, shipped, stored, and sold under artificial refrigeration. The shiny, humming stainless steel box in your kitchen is just the tip of the iceberg, so to speak—a tiny fragment of the vast global network of temperature-controlled storage and distribution warehouses cumulatively capable of hosting uncounted billions of cubic feet of chilled flesh, fish, or fruit. Add to that an equally vast and immeasurable volume of thermally controlled space in the form of shipping containers, wine cellars, floating fish factories, international seed banks, meat-aging lockers, and livestock semen storage, and it becomes clear that the evolving architecture of coldspace is as ubiquitous as it is varied, as essential as it is overlooked.
(...)
More about it and about a "perpetual winter" on Cabinet's website.
Posted by Patrick Keller
in Architecture, Culture & society
at
09:55
Defined tags for this entry: architecture, artificial reality, conditioning, culture & society, engineering, food, geography, globalization, goods, weather
Electromagnetic Test Town
Via BLDGBLOG ----- By Geoff Manaugh
[Image: An otherwise only conceptually related photo by Steve Rowell shows the LAPD's Edward M. Davis Emergency Vehicle Operations Center & Tactics/Firearms Training Facility in Granada Hills, CA; courtesy of the Center for Land Use Interpretation].
Posted by Patrick Keller
in Architecture, Culture & society, Science & technology
at
09:43
Defined tags for this entry: architecture, control, culture & society, science & technology, surveillance
fabric | rblg

This blog is the survey website of fabric | ch - studio for architecture, interaction and research. We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.