While we cannot display much of the project we are engaged in for now, these (below) are preliminary studies for what was first planned as a large media/data architecture installation, inspired by previous works (notably Atomized Functioning). It will likely become a more standard presentation of the project, driven by the curator's team, in the "Artificiale Canon" part of the exhibition.
Nonetheless, fabric | ch will take part in the Biennale Architettura 2025, also known as the 19th International Architecture Exhibition in Venice.
As we still lack a decent search engine on this blog and don't use a "tag cloud"... this post may help you navigate the updated content of | rblg (as of 09.2023), via all its tags!
FIND BELOW ALL THE TAGS THAT CAN BE USED TO NAVIGATE THE CONTENTS OF THE | RBLG BLOG:
(to be seen just below if you're navigating on the blog's html pages or here for rss readers)
--
Note that we had to hit the "pause" button on our reblogging activities a while ago, mainly because we ran out of time, but also because we received complaints from a major stock image company about some of the images displayed on | rblg, an activity we still considered "fair use" (we've never made any money from or advertised on this site).
Nevertheless, we continue to publish from time to time information on the activities of fabric | ch, or content directly related to its work (documentation).
Note: an interview about the implications of AI in art, and in the work of fabric | ch in particular, between Nathalie Bachand (writer & independent curator), Christophe Guignard and myself (both fabric | ch). The exchange took place in the context of a publication in the art magazine Espace; it was fruitful and gave us the opportunity to expand on recent projects, like the "Atomized" series of architectural works that will continue to evolve, as well as our monographic exhibition at the Kunsthalle Éphémère, entitled Environmental Devices (1997 - 2017).
Note: still catching up on past publications, these (Cloud of Cards and related) date from "pre-covid times", were issued in Print-on-Demand and relate to the design research on data and the cloud led jointly by ECAL / University of Art & Design, Lausanne and HEAD - Genève (with Prof. Nicolas Nova). It mainly concerns new propositions for data hosting infrastructure, envisioned as "personal", domestic (decentralized) and small-scale alternatives. Many "recipes" were published describing how to creatively host your data yourself.
It can also be accessed through my academia account, along with its accompanying publication by Nicolas Nova: Cloud of Practices.
-----
By Patrick Keller
--
The same research was also briefly presented in the Swiss journal Hemispheres, as well as in the international magazine Frame:
Note: just after archiving the MOMA exhibition on | rblg, here comes a small post by Eliza Pertigkiozoglou about the Architecture Machine Group at MIT, from roughly the same period. This groundbreaking architecture teaching unit and research experience later led to the MIT Media Lab (Beatriz Colomina discussed it in her research on design teaching and "Radical Pedagogies"; we already mentioned it on | rblg in the context of a book about the Black Mountain College).
The post details Urban 5, one of the first projects the group developed, which was supposed to help (anybody) develop an architectural project in an interactive way. This story is also very well explained and detailed by Orit Halpern in the recent book by the CCA: When is the Digital in Architecture?
URBAN 5’s overlay and the IBM 2250 model 1 cathode-ray tube used for URBAN 5 (source: openarchitectures.com)
Nicholas Negroponte (1943) founded the Architecture Machine Group (Arch Mac) at MIT in 1967, together with Leon Groisser; in 1985 it became the MIT Media Lab. Negroponte's vision was an architecture machine that would turn the design process into a dialogue, altering the traditional human-machine dynamics. His approach was significantly influenced by contemporary discussions of artificial intelligence, cybernetics, conversation theory, technologies for learning, sketch recognition and representation. The Arch Mac laboratory combined architecture, engineering and computing to develop architectural applications and artificially intelligent interfaces that questioned the design process and the role of its actors.
The Architecture Machine’s computer and interface installation (source:radical-pedagogies.com)
Urban 5, developed as an improved version of Urban 2, was one of the lab's first research projects. Interestingly, in his book "The Architecture Machine" Negroponte explains, evaluates and criticizes Urban 5, reflecting on the successes and insufficiencies of a program that aimed to serve as a "toy" for experimentation rather than a tool to handle real design problems. It was "a system that could monitor design procedures" and not a design tool in itself. As explained in the book, Urban 5's original goal was to "study the desirability and feasibility of conversing with a machine about environmental design project… using the computer as an objective mirror of the user's own design criteria and form decisions; reflecting formed from a larger information base than the user's personal experience".
Urban 5 communicated with the architect-user first by giving instructions, then by learning from the user, and eventually by entering a dialogue. Two languages were employed for that communication: a graphic language and the English language. The graphic language used the abstract representation of cubes (nouns); the English language was text appearing on the screen (verbs). The cubes could be added incrementally and had qualities, such as sunlight or visual and acoustical privacy, which could be assigned explicitly by the user or implicitly by the machine. When the user was first introduced to the software, the software provided instructions. The user could then explicitly assign criteria or generate forms graphically in different contexts. What Negroponte called context was defined by mode, which referred to different display modes allowing the designer different kinds of operations. For example, in the TOPO mode the architect could manipulate topography in plan, while in the DRAW mode he/she could manipulate the viewing mode and the physical elements. In the final stage of this human-machine relationship there was a dialogue between the designer and the computer: when there was an inconsistency between the assigned criteria and the generated form, the computer informed the architect, who could choose the next step: ignore, postpone, or alter the criterion or the form.
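To make that mechanism concrete, here is a minimal, hypothetical Python sketch of the kind of criterion-checking dialogue described above; the cube qualities, thresholds and messages are invented for illustration and are not Negroponte's actual code.

```python
# A toy, hypothetical sketch of Urban 5's criterion-checking dialogue.
# Qualities, thresholds and messages are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Cube:
    x: int
    y: int
    z: int
    qualities: dict = field(default_factory=dict)  # e.g. {"sunlight": 0.4}

def check_consistency(cubes, criteria):
    """Compare user-assigned criteria with the generated form and
    report any inconsistencies, as Urban 5's dialogue stage did."""
    issues = []
    for i, cube in enumerate(cubes):
        for quality, minimum in criteria.items():
            value = cube.qualities.get(quality, 0.0)
            if value < minimum:
                issues.append((i, quality, value, minimum))
    return issues

if __name__ == "__main__":
    design = [Cube(0, 0, 0, {"sunlight": 0.8, "acoustic_privacy": 0.3}),
              Cube(0, 0, 1, {"sunlight": 0.2, "acoustic_privacy": 0.9})]
    criteria = {"sunlight": 0.5, "acoustic_privacy": 0.5}  # assigned explicitly by the user
    for i, quality, value, minimum in check_consistency(design, criteria):
        # The machine's side of the dialogue: the designer may ignore,
        # postpone, or alter either the criterion or the form.
        print(f"Cube {i}: {quality} = {value} < required {minimum}. "
              f"Ignore, postpone, or alter criterion/form?")
```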
Source: The Architecture Machine, Negroponte
Negroponte's criticism gives an insight into Arch Mac's explorations, goals and self-reflection on the research project. To Negroponte, Urban 5's insufficiency could be summarized in four main points. First, it was based on assumptions about the design process that can be laid bare: architecture is additive (an accumulation of cubes), labels are symbols and design is non-deterministic. Second, it offered specific and predetermined design services: although different combinations could produce numerous results, they were still finite. Third, the designer always has to decide what the next step should be in the cross-reference between the contexts/modes, without any suggestion or feedback from the computer. The last point of his criticism was that Urban 5 interacts with only one designer, and that the interaction is strictly mediated through "a meager selection of communication artifacts", meaning the keyboard and the screen: the medium and the language itself.
Although Urban 5 was a simple program with limited options, the points it addresses are basically the constraints of current CAD programs. This is, to an extent, expected, given that the medium and the language frame the interaction between the human and the machine: "'The world view of culture is limited by the structure of the language which that culture uses' (Whorf, 1956). The world view of a machine is similarly marked by linguistic structure" (1). Nevertheless, it seems that Negroponte's and Arch Mac's explorations were ahead of their time and offered an insight into human-machine design interactions, suggesting a "true dialogue": "Urban 5 suggests an evolutionary system, an intelligent system — but, in itself, is none of them" (2).
References:
(1), (2): quotes of Negroponte from "The Architecture Machine" (see below)
- Negroponte, Nicholas, The Architecture Machine: Toward a More Human Environment, MIT Press, 1970
- Wright Steenson, Molly, Architectures of Information: Christopher Alexander, Cedric Price and Nicholas Negroponte & MIT's Architecture Machine Group, PhD thesis, Princeton, April 2014
Note: the title and the beginning of the article are quite promising, or teasing so to say... But unfortunately the piece is not freely accessible without a subscription to the New Scientist. Yet as it promises an interesting read, I archive it on | rblg for the record and for future reading.
In the meantime, here's also an interesting interview (2010) with physicist Vlatko Vedral for The Guardian, from the time when he published his book Decoding Reality, about information.
And an extract from the article on the New Scientist:
I’m building a machine that breaks the rules of reality
We thought only fools messed with the cast-iron laws of thermodynamics – but quantum trickery is rewriting the rulebook, says physicist Vlatko Vedral.
Martin Leon Barreto
By Vlatko Vedral
A FEW years ago, I had an idea that may sound a little crazy: I thought I could see a way to build an engine that works harder than the laws of physics allow.
You would be within your rights to baulk at this proposition. After all, the efficiency of engines is governed by thermodynamics, the most solid pillar of physics. This is one set of natural laws you don’t mess with.
Yet if I leave my office at the University of Oxford and stroll down the corridor, I can now see an engine that pays no heed to these laws. It is a machine of considerable power and intricacy, with green lasers and ions instead of oil and pistons. There is a long road ahead, but I believe contraptions like this one will shape the future of technology.
Better, more efficient computers would be just the start. The engine is also a harbinger of a new era in science. To build it, we have had to uncover a field called quantum thermodynamics, one set to retune our ideas about why life, the universe – everything, in fact – are the way they are.
Thermodynamics is the theory that describes the interplay between temperature, heat, energy and work. As such, it touches on pretty much everything, from your brain to your muscles, car engines to kitchen blenders, stars to quasars. It provides a base from which we can work out what sorts of things do and don’t happen in the universe. If you eat a burger, you must burn off the calories – or …
"We found evolution will punish you if you're selfish and mean," said lead author Christoph Adami, MSU professor of microbiology and molecular genetics. "For a short time and against a specific set of opponents, some selfish organisms may come out ahead. But selfishness isn't evolutionarily sustainable."
The paper "Evolutionary instability of Zero Determinant strategies demonstrates that winning isn't everything," is co-authored by Arend Hintze, molecular and microbiology research associate, and published in the Aug. 1, 2013 issue of Nature Communications.
Game theory is used in biology, economics, political science and other disciplines. Much of the last 30 years of research has focused on how cooperation came to be, since it's found in many forms of life, from single-cell organisms to people.
Researchers use the prisoner's dilemma game as a model to study cooperation. In it, two people have committed a crime and are arrested. Police offer each person a deal: snitch on your friend and go free while the friend spends six months in jail. If both prisoners snitch, they both get three months in jail. If they both stay silent, they both get one month in jail for a lesser offense. If the two prisoners get a chance to talk to each other, they can establish trust and are usually more likely to cooperate because then both of them only spend one month in jail. But if they're not allowed to communicate, the best strategy is to snitch because it guarantees the snitcher doesn't get the longer jail term.
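As a rough sketch of that logic, the small Python snippet below encodes the jail terms described above (lower is better) and checks that snitching is the best reply whatever the other prisoner does; the numbers follow the article, the code itself is only illustrative.

```python
# One-shot prisoner's dilemma from the article, in months of jail (lower is better).
# Payoffs follow the article's description; the code is only an illustrative sketch.
JAIL = {
    ("snitch", "snitch"): (3, 3),
    ("snitch", "silent"): (0, 6),   # snitcher goes free, the other gets six months
    ("silent", "snitch"): (6, 0),
    ("silent", "silent"): (1, 1),
}

def best_reply(opponent_action):
    """Return the action that minimizes my jail time against a fixed opponent."""
    return min(("snitch", "silent"), key=lambda a: JAIL[(a, opponent_action)][0])

for other in ("snitch", "silent"):
    print(f"If the other prisoner plays {other!r}, my best reply is {best_reply(other)!r}")
# Snitching is the best reply in both cases, which is why, without
# communication, defection dominates the one-shot game.
```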
The game allows scientists to study a basic question faced by individuals competing for limited resources: do I act selfishly or do I cooperate? Cooperating would do the most good for the most individuals, but it might be tempting to be selfish and freeload, letting others do the work and take the risks.
In May 2012, two leading physicists published a paper showing that their newly discovered strategy, called zero-determinant, gave selfish players a guaranteed way to beat cooperative players.
"The paper caused quite a stir," said Adami. "The main result appeared to be completely new, despite 30 years of intense research in this area."
Adami and Hintze had their doubts about whether following a zero determinant strategy (ZD) would essentially eliminate cooperation and create a world full of selfish beings. So they used high-powered computing to run hundreds of thousands of games and found ZD strategies can never be the product of evolution. While ZD strategies offer advantages when they're used against non-ZD opponents, they don't work well against other ZD opponents.
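For a sense of what such a simulation looks like, here is a minimal sketch (not the authors' actual code) of an iterated prisoner's dilemma between memory-one strategies, using the standard payoffs (T, R, P, S) = (5, 3, 1, 0); the cooperation probabilities used for the extortionate ZD player are one commonly cited example and should be treated as illustrative.

```python
import random

# Standard iterated prisoner's dilemma payoffs (T, R, P, S) = (5, 3, 1, 0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Memory-one strategies: probability of cooperating after each joint outcome
# (my previous move, opponent's previous move). The ZD vector below is one
# commonly cited extortionate example; treat the exact numbers as illustrative.
ZD_EXTORT = {("C", "C"): 11 / 13, ("C", "D"): 1 / 2, ("D", "C"): 7 / 26, ("D", "D"): 0.0}
ALWAYS_COOPERATE = {state: 1.0 for state in ZD_EXTORT}

def play(strategy_a, strategy_b, rounds=20000, seed=0):
    """Average per-round payoff of two memory-one strategies against each other."""
    rng = random.Random(seed)
    last_a, last_b = "C", "C"          # assume a cooperative first state
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = "C" if rng.random() < strategy_a[(last_a, last_b)] else "D"
        move_b = "C" if rng.random() < strategy_b[(last_b, last_a)] else "D"
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a, total_b = total_a + pay_a, total_b + pay_b
        last_a, last_b = move_a, move_b
    return round(total_a / rounds, 2), round(total_b / rounds, 2)

print("ZD vs cooperator:", play(ZD_EXTORT, ALWAYS_COOPERATE))
print("ZD vs ZD:        ", play(ZD_EXTORT, ZD_EXTORT))
# The extortioner out-earns an unconditional cooperator, but two ZD players
# facing each other both end up near the mutual-defection payoff, which is
# the instability the article describes.
```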
"In an evolutionary setting, with populations of strategies, you need extra information to distinguish each other," Adami explained.
So ZD strategies only worked if players knew who their opponents were and adapted their strategies accordingly. A ZD player would play one way against another ZD player and a different way against a cooperative player.
"The only way ZD strategists could survive would be if they could recognize their opponents," Hintze added. "And even if ZD strategists kept winning so that only ZD strategists were left, in the long run they would have to evolve away from being ZD and become more cooperative. So they wouldn't be ZD strategists anymore."
Both Adami and Hintze are members of the BEACON Center for the Study of Evolution in Action, a National Science Foundation Center that brings together biologists, computer scientists, engineers and researchers from other disciplines to study evolution as it happens.
The research also makes that case that communication and information are necessary for cooperation to take place.
"Standard game theory doesn't take communication into account because it's so complicated to do the math for the expected payoffs," Adami explained. "But just because the math doesn't exist and the general formula may never be solved, it doesn't mean we can't explore the idea using agent-based modeling. Communication is critical for cooperation; we think communication is the reason cooperation occurs. It's generally believed that there are five independent mechanisms that foster cooperation. But these mechanisms are really just ways to ensure that cooperators play mostly with other cooperators and avoid all others. Communication is a universal way to achieve that. We plan to test the idea directly in yeast cells."
Enhancing the flow of information through the brain could be crucial to making neuroprosthetics practical.
The abilities to learn, remember, evaluate, and decide are central to who we are and how we live. Damage to or dysfunction of the brain circuitry that supports these functions can be devastating, leading to Alzheimer’s, schizophrenia, PTSD, or many other disorders. Current treatments, which are drug-based or behavioral, have limited efficacy in treating these problems. There is a pressing need for something more effective.
One promising approach is to build an interactive device to help the brain learn, remember, evaluate, and decide. One might, for example, construct a system that would identify patterns of brain activity tied to particular experiences and then, when called upon, impose those patterns on the brain. Ted Berger, Sam Deadwyler, Robert Hampson, and colleagues have used this approach (see “Memory Implants”). They are able to identify and then impose, via electrical stimulation, specific patterns of brain activity that improve a rat’s performance in a memory task. They have also shown that in monkeys, stimulation can help the animal perform a task where it must remember a particular item.
Their ability to improve performance is impressive. However, there are fundamental limitations to an approach where the desired neural pattern must be known and then imposed. The animals used in their studies were trained to do a single task for weeks or months and the stimulation was customized to produce the right outcome for that task. This is only feasible for a few well-learned experiences in a predictable and constrained environment.
New and complex experiences engage large numbers of neurons scattered across multiple brain regions. These individual neurons are physically adjacent to other neurons that contribute to other memories, so selectively stimulating the right neurons is difficult if not impossible. And to make matters even more challenging, the set of neurons involved in storing a particular memory can evolve as that memory is processed in the brain. As a result, imposing the right patterns for all desired experiences, both past and future, requires technology far beyond what is possible today.
I believe the answer to be an alternative approach based on enhancing flows of information through the brain. The importance of information flow can be appreciated when we consider how the brain makes and uses memories. During learning, information from the outside world drives brain activity and changes in the connections between neurons. This occurs most prominently in the hippocampus, a brain structure critical for laying down memories for the events of daily life. Thus, during learning, external information must flow to the hippocampus if memories are to be stored.
Once information has been stored in the hippocampus, a different flow of information is required to create a long-lasting memory. During periods of rest and sleep, the hippocampus “reactivates” stored memories, driving activity throughout the rest of the brain. Current theories suggest that the hippocampus acts like a teacher, repeatedly sending out what it has learned to the rest of the brain to help engrain memories in more stable and distributed brain networks. This “consolidation” process depends on the flow of internal information from the hippocampus to the rest of the brain.
Finally, when a memory is retrieved a similar pattern of internally driven flow is required. For many memories, the hippocampus is required for memory retrieval, and once again hippocampal activity drives the reinstatement of the memory pattern throughout the brain. This process depends on the same hippocampal reactivation events that contribute to memory consolidation.
Different flows of information can be engaged at different intensities as well. Some memories stay with us and guide our choices for a lifetime, while others fade with time. We and others have shown that new and rewarded experiences drive both profound changes in brain activity and strong memory reactivation, while familiar and unrewarded experiences drive smaller changes and weaker reactivation. Further, we have recently shown that the intensity of memory reactivation in the hippocampus, measured as the number of neurons active together during each reactivation event, can predict whether the next decision an animal makes is going to be right or wrong. Our findings suggest that when the animal reactivates effectively, it does a better job of considering possible future options (based on past experiences) and then makes better choices.
These results point to an alternative approach to helping the brain learn, remember and decide more effectively. Instead of imposing a specific pattern for each experience, we could enhance the flow of information to the hippocampus during learning and the intensity of memory reactivation from the hippocampus during memory consolidation and retrieval. We are able to detect signatures of different flows of information associated with learning and remembering. We are also beginning to understand the circuits that control this flow, which include neuromodulatory regions that are often damaged in disease states. Importantly, these modulatory circuits are more localized and easier to manipulate than the distributed populations of neurons in the hippocampus and elsewhere that are activated for each specific experience.
Thus, an effective cognitive neuroprosthetic would detect what the brain is trying to do (learn, consolidate or retrieve) and then amplify activity in the relevant control circuits to enhance the essential flows of information. We know that even in diseases like Alzheimer’s, where there is substantial damage to the brain, patients have good days and bad days. On good days the brain smoothly transitions among distinct functions, each associated with a particular flow of information. On bad days these functions may become less distinct and the flows of information muddled. Our goal, then, would be to restore the flows of information underlying different mental functions.
A prosthetic device has the potential to adapt to the moment-by-moment changes in information flow necessary for different types of mental processing. By contrast, drugs that seek to treat cognitive dysfunction may effectively amplify one type of processing but cannot adapt to the dynamic requirements of mental function. Thus, constructing a device that makes the brain’s control circuits work more effectively offers a powerful approach to treating disease and maximizing mental capacity.
Loren M. Frank is a professor at the Center for Integrative Neuroscience and the Department of Physiology at the University of California, San Francisco.
Storing video and other files more intelligently reduces the demand on servers in a data center.
Worldwide, data centers consume huge and growing amounts of electricity.
New research suggests that data centers could significantly cut their electricity usage simply by storing fewer copies of files, especially videos.
For now the work is theoretical, but over the next year, researchers at Alcatel-Lucent’s Bell Labs and MIT plan to test the idea, with an eye to eventually commercializing the technology. It could be implemented as software within existing facilities. “This approach is a very promising way to improve the efficiency of data centers,” says Emina Soljanin, a researcher at Bell Labs who participated in the work. “It is not a panacea, but it is significant, and there is no particular reason that it couldn’t be commercialized fairly quickly.”
With the new technology, any individual data center could be expected to save 35 percent in capacity and electricity costs—about $2.8 million a year or $18 million over the lifetime of the center, says Muriel Médard, a professor at MIT’s Research Laboratory of Electronics, who led the work and recently conducted the cost analysis.
So-called storage area networks within data center servers rely on a tremendous amount of redundancy to make sure that downloading videos and other content is a smooth, unbroken experience for consumers. Portions of a given video are stored on different disk drives in a data center, with each sequential piece cued up and buffered on your computer shortly before it’s needed. In addition, copies of each portion are stored on different drives, to provide a backup in case any single drive is jammed up. A single data center often serves millions of video requests at the same time.
The new technology, called network coding, cuts way back on the redundancy without sacrificing the smooth experience. Algorithms transform the data that makes up a video into a series of mathematical functions that can, if needed, be solved not just for that piece of the video, but also for different parts. This provides a form of backup that doesn’t rely on keeping complete copies of the data. Software at the data center could simply encode the data as it is stored and decode it as consumers request it.
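A toy sketch of the principle, under simplifying assumptions (fixed Vandermonde-style coefficients over a small prime field rather than the random or Reed-Solomon-style codes a real system would use), might look like this: three original chunks are turned into five coded chunks, and any three of those are enough to rebuild the data.

```python
P = 257  # a small prime field; byte values 0..255 fit below P

def encode(chunks, n_coded):
    """Build n_coded linear combinations of the original chunks (mod P).
    Coefficients come from a Vandermonde matrix, so any k coded chunks
    are decodable. Each coded chunk carries its coefficient vector."""
    k, length = len(chunks), len(chunks[0])
    coded = []
    for j in range(1, n_coded + 1):
        coeffs = [pow(j, i, P) for i in range(k)]
        payload = [sum(c * chunk[pos] for c, chunk in zip(coeffs, chunks)) % P
                   for pos in range(length)]
        coded.append((coeffs, payload))
    return coded

def decode(coded_subset, k):
    """Gaussian elimination mod P over any k coded chunks recovers the originals."""
    rows = [list(coeffs) + list(payload) for coeffs, payload in coded_subset[:k]]
    for col in range(k):
        pivot = next(r for r in range(col, k) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], -1, P)
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows]

if __name__ == "__main__":
    chunks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]   # 3 original chunks
    coded = encode(chunks, n_coded=5)           # 5 coded chunks instead of full copies
    survivors = [coded[0], coded[2], coded[4]]  # any 3 of the 5 are enough
    assert decode(survivors, k=3) == chunks
    print("original chunks recovered from 3 of 5 coded chunks")
```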
Médard’s group previously proposed a similar technique for boosting wireless bandwidth (see “A Bandwidth Breakthrough”). That technology deals with a different problem: wireless networks waste a lot of bandwidth on back-and-forth traffic to recover dropped portions of a signal, called packets. If mathematical functions describing those packets are sent in place of the packets themselves, it becomes unnecessary to re-send a dropped packet; a mobile device can solve for the missing packet with minimal processing. That technology, which improves capacity up to tenfold, is currently being licensed to wireless carriers, she says.
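The simplest flavour of the same idea on the transmission side (again only a sketch, not the licensed technology) is a single XOR combination of two packets, which lets the receiver rebuild whichever one was dropped without a retransmission:

```python
# Minimal sketch: one extra coded packet (A XOR B) lets the receiver rebuild
# whichever of A or B was dropped, without asking the sender to retransmit.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"hello, this is packet A "
packet_b = b"and this one is packet B"
coded = xor_bytes(packet_a, packet_b)   # sent alongside the two data packets

# Suppose packet_b is lost in transit; the receiver solves for it locally
# instead of requesting a retransmission.
recovered_b = xor_bytes(packet_a, coded)
assert recovered_b == packet_b
```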
Between the electricity needed to power computers and the air conditioning required to cool them, data centers worldwide consume so much energy that by 2020 they will cause more greenhouse-gas emissions than global air travel, according to the consulting firm McKinsey.
Smarter software to manage them has already proved to be a huge boon (see “A New Net”). Many companies are building data centers that use renewable energy and smarter energy management systems (see “The Little Secrets Behind Apple’s Green Data Centers”). And there are a number of ways to make chips and software operate more efficiently (see “Rethinking Energy Use in Data Centers”). But network coding could make a big contribution by cutting down on the extra disk drives—each needing energy and cooling—that cloud storage providers now rely on to ensure reliability.
This is not the first time that network coding has been proposed for data centers. But past work was geared toward recovering lost data. In this case, Médard says, “we have considered the use of coding to improve performance under normal operating conditions, with enhanced reliability a natural by-product.”
Personal comment:
Still a link in the context of our workshop at Tsinghua University, related to data storage at large.
The link between energy, algorithms and data storage is made obvious here. To be read in parallel with the previous repost from Kazys Varnelis, Into the Cloud (with zombies).
In the same vein, another piece of code could cut flight delays and thereby save a midsized airline approximately $1.2 million in annual crew costs and $5 million in annual fuel costs...
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.