As we still lack a decent search engine on this blog and don't use a "tag cloud" ... this post may help you navigate the updated content of | rblg (as of 09.2023), via all its tags!
FIND BELOW ALL THE TAGS THAT CAN BE USED TO NAVIGATE IN THE CONTENTS OF | RBLG BLOG:
(to be seen just below if you're navigating on the blog's html pages or here for rss readers)
--
Note that we had to hit the "pause" button on our reblogging activities a while ago (mainly because we ran out of time, but also because we received complaints from a major stock image company about some images displayed on | rblg - a use we still considered "fair use", as we've never made any money or run advertising on this site).
Nevertheless, we continue to publish from time to time information on the activities of fabric | ch, or content directly related to its work (documentation).
The exhibition Beneath the Skin, Between the Machines has just opened at HOW Art Museum (Hao Art Gallery), and fabric | ch was pleased to be invited to create a large installation for the show, one also intended to be used during a symposium that will be an integral part of the exhibition (panels and talks taking place within the installation itself). The exhibition runs from January 15 to April 24, 2022 in Shanghai.
Along with a selection of Chinese and international artists, curator Liaoliao Fu asked us to develop a proposal based on a former architectural device, Public Platform of Future-Past, which was itself inspired by an older installation of ours... Heterochrony.
This new work, entitled Platform of Future-Past, deals with the temporal oddity that can be produced and induced by the recording, accumulation and storage of monitoring data, which contributes to leaving partial traces of "reality", functioning as spectres of the past.
We are proud to present this work alongside artists such as Hito Steyerl, Geumhyung Jeong, Lu Yang, Jon Rafman, Forensic Architecture, Lynn Hershman Leeson and Harun Farocki.
...
Last but not least, and somehow a "sign of the times", this is the first exhibition we are participating in whose main financial backers are a blockchain and crypto-finance company and an NFT platform, both based in China.
More information about the symposium will be published.
"Man is only man at the surface. Remove the skin, dissect, and immediately you come to machinery.” When Paul Valéry wrote this down, he might not foresee that human beings – a biological organism – would indeed be incorporated into machinery at such a profound level in a highly informationized and computerized time and space. In a sense, it is just as what Marx predicted: a conscious connection of machine[1]. Today, machine is no longer confined to any material form; instead, it presents itself in the forms of data, coding and algorithm – virtually everything that is “operable”, “calculable” and “thinkable”. Ever since the idea of cyborg emerges, the man-machine relation has always been intertwined with our imagination, vision and fear of the past, present and future.
In a sense, the machine represents a projection of human beings. We transfer ideas of slavery and freedom to other beings, namely machines that could replace humans as technical entities or tools. Opposite (and, in a sense, similar) to the "embodiment" of the machine, organic beings such as humans are hurrying towards "disembodiment". Everything pertinent to our body and behavior can be captured and calculated as data. In the meantime, the social system that human beings have created never stops absorbing new technologies. During the process of trial and error, the difference and fortuity accompanying the "new" are taken in and internalized by the system. "Every accident, every impulse, every error is productive (of the social system),"[2] and hence is predictable and calculable. Within such a system, differences tend to be obfuscated and erased; meanwhile, due to the highly specialized complexities embedded in different disciplines and fields, genuine interdisciplinary communication is becoming increasingly difficult, if not impossible.
As a result, technologies today are highly centralized, homogenized, sophisticated and commonized. They penetrate deeply beneath our skin, yet beyond our knowing, sensing and thinking. On the one hand, the exhibition probes into the reconfiguration of man by technologies through what's "beneath the skin"; on the other, it encourages people to rethink the position and situation we are in under this context through what's "between the machines". As an art institution located in Shanghai Zhangjiang Hi-Tech Industrial Development Zone, one of the most important hi-tech parks in China, HOW Art Museum intends to carve out an open rather than enclosed field through the exhibition, inviting the public to immerse themselves and ponder questions such as "How do people touch machines?", "What do the machines think of us?" and "Where do we position art and its practice in the face of the overwhelming presence of technology and the intricate technological reality?"
Departing from these issues, the exhibition presents a selection of recent works by Revital Cohen & Tuur Van Balen, Simon Denny, Harun Farocki, Nicolás Lamas, Lynn Hershman Leeson, Lu Yang, Lam Pok Yin, David OReilly, Pakui Hardware, Jon Rafman, Hito Steyerl, Shi Zheng and Geumhyung Jeong. In the meantime, it sets up a "panel installation", specially created by fabric | ch for this exhibition, which tries to offer a space and occasion for decentralized observation of and participation in the above discussions. Conversations and actions are to be activated as well as captured, observed and archived at the same time.
[1] Karl Marx, "Fragment on Machines", Foundations of a Critique of Political Economy.
Duration: January 15-April 24, 2022
Artists: Revital Cohen & Tuur Van Balen, Simon Denny, fabric | ch, Harun Farocki, Geumhyung Jeong, Nicolás Lamas, Lynn Hershman Leeson, Lu Yang, Lam Pok Yin, David OReilly, Pakui Hardware, Jon Rafman, Hito Steyerl, Shi Zheng
Curator: Fu Liaoliao
Organizer: HOW Art Museum, Shanghai
Lead Sponsor: APENFT Foundation
Swiss participation is supported by Pro Helvetia Shanghai, Swiss Arts Council.
(Swiss speakers and performers appearing in the educational events will be updated soon.)
-----
Work by fabric | ch
HOW Art Museum has invited the Lausanne-based artist group fabric | ch to set up a "panel installation" based on their former project "Public Platform of Future-Past", adapted to the museum space, fostering insightful communication among practitioners from different fields and the audience.
“Platform of Future-Past” is a temporary environmental device that consists of a twenty-meter-long walkway, or rather an observation deck, almost archaeological: a platform that overlooks an exhibition space and, paradoxically, directly links its entrance to its exit. It thus offers the possibility of crossing this space without really entering it and of becoming its observer, as from an archaeological observation deck. The platform opens up contrasting atmospheres and offers affordances or potential uses on the ground.
The peculiarity of the work consists thus in the fact that it generates a dual perception and a potential temporal disruption, which leads to the title of the work, Platform of Future-Past: if the present time of the exhibition space and its visitors is, in fact, the “archeology” to be observed from the platform, and hence a potential “past,” then the present time of the walkway could be understood as a possible “future” viewed from the ground…
“Platform of Future-Past” is equipped in three zones with environmental monitoring devices. The sensors record as much data as possible over time, generated by the continuously changing conditions, presences and uses in the exhibition space. The data is then stored on Platform of Future-Past's servers and replayed in a loop on its computers. It is a "recorded moment", "frozen" on the data servers, that could potentially replay itself forever or is waiting for someone to reactivate it. A "data center" on the deck, with its set of interfaces and visualization screens, lets the visitors-observers follow the ongoing process of recording.
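For readers wondering what such a "recorded moment" amounts to in practice, here is a minimal sketch of the record-then-replay logic, in Python. It is an illustration only, not the installation's actual software: the `read_sensors()` function, the sampling interval and the JSON-lines log file standing in for the servers are all assumptions.

```python
# Sketch: record timestamped environmental readings, then replay the frozen log in a loop.
import json, time, itertools, random

LOG = "recorded_moment.jsonl"  # hypothetical storage file standing in for the data servers

def read_sensors():
    # Placeholder for the real environmental sensors (light, sound, presence, etc.)
    return {"lux": random.uniform(50, 800), "db": random.uniform(30, 90)}

def record(duration_s=10, interval_s=1):
    """Accumulate readings during the exhibition's 'present'."""
    with open(LOG, "a") as f:
        for _ in range(int(duration_s / interval_s)):
            sample = {"t": time.time(), **read_sensors()}
            f.write(json.dumps(sample) + "\n")
            time.sleep(interval_s)

def replay():
    """Replay the recorded moment endlessly, as a spectre of the past."""
    with open(LOG) as f:
        frames = [json.loads(line) for line in f]
    for frame in itertools.cycle(frames):  # loops forever by design
        print(frame)                       # in the installation: sent to screens / voices
        time.sleep(1)

if __name__ == "__main__":
    record()
    replay()
```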
The work could be seen as an architectural proposal built on the idea of massive data production from our environment. Every second, our world produces massive amounts of data, stored “forever” in remote data centers, like old gas bubbles trapped in millennial ice.
As such, the project attempts to introduce doubt about its true nature: would it be possible, in fact, that what is observed from the platform is already a present recorded from the past? A phantom situation? A present regenerated from the data recorded during a scientific experiment that was left abandoned? Or perhaps replayed by the machine itself? Could it, in fact, already have been running on a loop for years?
Platform of Future-Past. Scaffolding, projection screens, sensors, data storage, data flows, plywood panels, textile partitions.
-----
Platform of Future-Past (2022)
-----
Beneath the Skin, Between the Machines (exhibition, 01.22 - 04.22)
-----
Platform of Future-Past was realized with the support of Pro Helvetia.
Note: a proto-smart-architecture project by Cedric Price dating back to the 1970s, which sounds much more interesting than almost all contemporary smart architecture/smart city proposals.
The latter are in most cases locked into highly functional approaches driven by "paths of least resistance and friction", supported if not financed by data-hungry corporations. That's not a desirable future, from my point of view.
"(...). If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation (...)"
Cedric Price’s proposal for the Gilman Corporation was a series of relocatable structures on a permanent grid of foundation pads on a site in Florida.
Cedric Price asked John and Julia Frazer to work as computer consultants for this project. They produced a computer program to organize the layout of the site in response to changing requirements, and in addition suggested that a single-chip microprocessor should be embedded in every component of the building, to make it the controlling processor.
This would result in an “intelligent” building which controlled its own organisation in response to use. If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation, learning how to improve its own organisation on the basis of this experience.
The Brief
Generator (1976-79) sought to create conditions for shifting, changing personal interactions in a reconfigurable and responsive architectural project.
It followed this open-ended brief:
"A building which will not contradict, but enhance, the feeling of being in the middle of nowhere; has to be accessible to the public as well as to private guests; has to create a feeling of seclusion conducive to creative impulses, yet…accommodate audiences; has to respect the wildness of the environment while accommodating a grand piano; has to respect the continuity of the history of the place while being innovative."
The proposal consisted of an orthogonal grid of foundation bases, tracks and linear drains, in which a mobile crane could place a kit of parts comprised of cubical module enclosures and infill components (i.e. timber frames to be filled with modular components ranging from movable cladding wall panels to furniture, services and fittings), screening posts, decks and circulation components (i.e. walkways on the ground level and suspended at roof level) in multiple arrangements.
When Cedric Price approached John and Julia Frazer he wrote:
"The whole intention of the project is to create an architecture sufficiently responsive to the making of a change of mind constructively pleasurable."
Generator Project
They proposed four programs that would use input from sensors attached to Generator’s components: the first three provided a “perpetual architect” drawing program that held the data and rules for Generator’s design; an inventory program that offered feedback on utilisation; an interface for “interactive interrogation” that let users model and prototype Generator’s layout before committing the design.
The powerful and curious boredom program served to provoke Generator’s users. “In the event of the site not being re-organized or changed for some time the computer starts generating unsolicited plans and improvements,” the Frazers wrote. These plans would then be handed off to Factor, the mobile crane operator, who would move the cubes and other elements of Generator. “In a sense the building can be described as being literally ‘intelligent’,” wrote John Frazer—Generator “should have a mind of its own.” It would not only challenge its users, facilitators, architect and programmer—it would challenge itself.
The Frazers’ research and techniques
The first proposal, associated with a level of ‘interactive’ relationship between ‘architect/machine’, would assist in drawing and with the production of additional information, somewhat implicit in the other parallel developments/ proposals.
The second proposal, related to the level of ‘interactive/semiautomatic’ relationship of ‘client–user/machine’, was ‘a perpetual architect for carrying out instructions from the Polorizer’ and for providing, for instance, operative drawings to the crane operator/driver; and the third proposal consisted of a ‘[. . .] scheduling and inventory package for the Factor [. . .] it could act as a perpetual functional critic or commentator.’
The fourth proposal, relating to the third level of relationship, enabled the permanent actions of the users, while the fifth proposal consisted of a ‘morphogenetic program which takes suggested activities and arranges the elements on the site to meet the requirements in accordance with a set of rules.’
Finally, the last proposal was [. . .] an extension [. . .] to generate unsolicited plans, improvements and modifications in response to users’ comments, records of activities, or even by building in a boredom concept so that the site starts to make proposals about rearrangements of itself if no changes are made. The program could be heuristic and improve its own strategies for site organisation on the basis of experience and feedback of user response.
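The "boredom" behaviour described above can be read as a very simple rule. Below is a minimal sketch of that logic in Python, under stated assumptions: the threshold of inactivity, the layout data structure and the rearrangement helper are all invented for illustration, and this is in no way the Frazers' original program, which ran on late-1970s hardware.

```python
# Sketch of the Generator "boredom" rule: if the site layout has not changed for some
# time, generate an unsolicited rearrangement proposal and score it for evaluation.
import random, copy

BOREDOM_THRESHOLD_DAYS = 14  # arbitrary illustrative value

def propose_rearrangement(layout):
    """Move one cube to a free foundation pad (stand-in for the morphogenetic program)."""
    proposal = copy.deepcopy(layout)
    cube = random.choice(list(proposal["cubes"]))
    proposal["cubes"][cube] = random.choice(proposal["free_pads"])
    return proposal

def step(layout, days_since_change, evaluate):
    if days_since_change < BOREDOM_THRESHOLD_DAYS:
        return None  # the building is not "bored" yet
    proposal = propose_rearrangement(layout)
    score = evaluate(proposal)  # feedback on user response, used to improve strategies
    return proposal, score

# Example with toy data and a dummy evaluation function:
layout = {"cubes": {"A": "pad-01", "B": "pad-02"}, "free_pads": ["pad-03", "pad-04"]}
print(step(layout, days_since_change=21, evaluate=lambda p: random.random()))
```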
Self Builder Kit and the Cal Build Kit, Working Models
In a certain way, the idea of a computational aid in the Generator project also acknowledged and intended to promote some degree of unpredictability. Generator, even if unbuilt, acquired a notable position as the first intelligent building project. Cedric Price and the Frazers' collaboration constituted an outstanding exchange between architecture and computational systems. The Generator experience explored the impact of the new techno-cultural order of the Information Society in terms of participatory design and responsive building. At an early date, it took responsiveness further; and postulates like those behind the Generator, where the influence of new computational technologies reaches the level of experience and an aesthetics of interactivity, seem interesting and productive.
Resources
John Frazer, An Evolutionary Architecture, Architectural Association Publications, London 1995. http://www.aaschool.ac.uk/publications/ea/exhibition.html
Frazer to C. Price, letter mentioning "Second thoughts but using the same classification system as before", 11 January 1979. Generator document folio DR1995:0280:65 5/5, Cedric Price Archives (Montreal: Canadian Centre for Architecture).
I like to believe that, on our side, we tried to address this question of public space - mediated and somehow "franchised" by technology - through many of our past works at fabric | ch. With our limited means, we even tried to articulate or bring scaled answers to these questions...
A collection of essays by prominent creators collected by MIT explores the uncertain nature of common space in the contemporary world. And the answer to the question in the title is yes!
Gediminas Urbonas, Ann Lui and Lucas Freeman are the editors of a book that presents a wide range of intellectual reflections and artistic experimentations centred around the concept of public space. The title of the volume, Public Space? Lost and Found, immediately places the reader in a doubtful state: nothing should be taken for granted or as certain, given that we are asking ourselves if, in fact, public space still exists.
This question was originally the basis for a symposium and an exhibition hosted by MIT in 2014, as part of the work of ACT, the acronym for the Art, Culture and Technology programme. Contained within the incredibly well-oiled scientific and technological machine that is MIT, ACT is a strange creature, a hybrid where sometimes extremely different practices cross paths, producing exciting results: exhibitions; critical analyses, which often examine the foundations and the tendencies of the university itself, underpinned by an interest in the political role of research; actual inventions, developed in collaboration with other labs and university courses, that attract students who have a desire to exchange ideas with people from different paths and want the chance to take part in initiatives that operate free from educational preconceptions.
The book is one of the many avenues of communication pursued by ACT, currently directed by Gediminas Urbonas (a Lithuanian visual artist who has taught there since 2009) who succeeded the curator Ute Meta Bauer. The collection explores how the idea of public space is at the heart of what interests artists and designers and how, consequently, the conception, the creation and the use of collective spaces are a response to current-day transformations. These include the spread of digital technologies, climate change, the enforcement of austerity policies due to the reduction in available resources, and the emergence of political arguments that favour separation between people. The concluding conversation Reflexivity and Resistance in Communicative Capitalism between Urbonas and Jodi Dean, an American political scientist, summarises many of the book’s ideas: public space becomes the tool for resisting the growing privatisation of our lives.
The book, which features stupendous graphics by Node (a design studio based in Berlin and Oslo), is divided into four sections: paradoxes, ecologies, jurisdictions and signals.
The contents alternate essays (like Angela Vettese's analysis of the role of national pavilions at the Biennale di Venezia or Beatriz Colomina's reflections about the impact of social media on issues of privacy) with the presentation of architectural projects and artistic interventions designed by architects like Andrés Jaque, Teddy Cruz and Marjetica Potrč or by historic MIT professors like the multimedia artist Antoni Muntadas. The republication of Art and Ecological Consciousness, a 1972 book by György Kepes, the multi-disciplinary genius who was the director of the Center for Advanced Visual Studies at MIT, proves that the institution has long been interested in these topics.
This collection of contributions supported by captivating iconography signals a basic optimism: the documented actions and projects and the consciousness that motivates the thinking of many creators proves there is a collective mobilisation, often starting from the bottom, that seeks out and creates the conditions for communal life. Even if it is never explicitly written, the answer to the question in the title is a resounding yes.
Public Space? Lost and Found Gediminas Urbonas, Ann Lui and Lucas Freeman
SA + P Press, MIT School of Architecture and Planning
Cambridge MA, 2017
300 pages, $40 mit.edu
Overview
“Public space” is a potent and contentious topic among artists, architects, and cultural producers. Public Space? Lost and Found considers the role of aesthetic practices within the construction, identification, and critique of shared territories, and how artists or architects—the “antennae of the race”—can heighten our awareness of rapidly changing formulations of public space in the age of digital media, vast ecological crises, and civic uprisings.
Public Space? Lost and Found combines significant recent projects in art and architecture with writings by historians and theorists. Contributors investigate strategies for responding to underrepresented communities and areas of conflict through the work of Marjetica Potrč in Johannesburg and Teddy Cruz on the Mexico-U.S. border, among others. They explore our collective stakes in ecological catastrophe through artistic research such as atelier d’architecture autogérée’s hubs for community action and recycling in Colombes, France, and Brian Holmes’s theoretical investigation of new forms of aesthetic perception in the age of the Anthropocene. Inspired by artist and MIT professor Antoni Muntadas’ early coining of the term “media landscape,” contributors also look ahead, casting a critical eye on the fraught impact of digital media and the internet on public space.
This book is the first in a new series of volumes produced by the MIT School of Architecture and Planning’s Program in Art, Culture and Technology.
Contributors: atelier d'architecture autogérée, Dennis Adams, Bik Van Der Pol, Adrian Blackwell, Ina Blom, Christoph Brunner with Gerald Raunig, Néstor García Canclini, Colby Chamberlain, Beatriz Colomina, Teddy Cruz with Fonna Forman, Jodi Dean, Juan Herreros, Brian Holmes, Andrés Jaque, Caroline Jones, Coryn Kempster with Julia Jamrozik, György Kepes, Rikke Luther, Matthew Mazzotta, Metahaven, Timothy Morton, Antoni Muntadas, Otto Piene, Marjetica Potrč, Nader Tehrani, Troy Therrien, Gediminas and Nomeda Urbonas, Angela Vettese, Mariel Villeré, Mark Wigley, Krzysztof Wodiczko
With section openings from Ana María León, T. J. Demos, Doris Sommer, and Catherine D'Ignazio
Note: More than a year ago, I posted about this move by Alphabet-Google toward becoming city designers... I tried to point out the problems related to a company whose business is to collect data becoming the main investor in public space and common goods (the city is still part of the commons, isn't it?). But of course, this is, again, about big business ("to make the world a better place" ... indeed) and slick ideas.
It is highly problematic that a company starts investing in public space "for free". We all know what this means now, don't we? It is neither needed nor desired.
So where are the "starchitects" now? What do they say? Not much... And where are all the "regular" architects? Almost invisible, tricked into the wrong stakes, with -- I'm sorry... -- only very few of them able to even identify the problem.
This is not about building a great building for a big brand or taking a conceptual position, not even about "die Gestalt" anymore. It is about everyday life for 66% of the Earth's population by 2050 (UN study). It is, in this precise case, about information technologies and mainly information strategies and businesses that materialize into structures of life.
fabric | rblg legend: this hand-drawn image contains all the marketing clichés (green, blue, clean air, bikes, a local market, public transportation, an autonomous car in a happy village atmosphere... It couldn't be further from what it will actually be).
An 800-acre strip of Toronto's waterfront may show us how cities of the future could be built. Alphabet’s urban innovation team, Sidewalk Labs, has announced a plan to inject urban design and new technologies into the city's quayside to boost "sustainability, affordability, mobility, and economic opportunity."
Huh?
Picture streets filled with robo-taxis, autonomous trash collection, modular buildings, and clean power generation. The only snag may be the humans: as we’ve said in the past, people can do dumb things with smart cities. Perhaps Toronto will be different.
As a matter of fact, there is a "devices" tag on this blog for this precise reason: to give references for these kinds of architectures that trigger modifications in the environment.
The world's first commercial plant for capturing carbon dioxide directly from the air opened yesterday, refueling a debate about whether the technology can truly play a significant role in removing greenhouse gases already in the atmosphere. The Climeworks AG facility near Zurich becomes the first ever to capture CO2 at industrial scale from air and sell it directly to a buyer.
Developers say the plant will capture about 900 tons of CO2 annually — or the approximate level released from 200 cars — and pipe the gas to help grow vegetables.
While the amount of CO2 is a small fraction of what firms and climate advocates hope to trap at large fossil fuel plants, Climeworks says its venture is a first step in their goal to capture 1 percent of the world's global CO2 emissions with similar technology. To do so, there would need to be about 250,000 similar plants, the company says.
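A rough back-of-the-envelope check of that scaling claim, assuming on the order of 36 billion tonnes of global CO2 emissions per year (a figure not given in the article, so treat it as an outside assumption):

```python
# Back-of-the-envelope check of the "250,000 plants for 1%" claim.
global_emissions_t = 36e9                  # assumed annual global CO2 emissions, tonnes
target_t = 0.01 * global_emissions_t       # 1% of global emissions -> ~360 million tonnes

plants_at_demo_size = target_t / 900       # ~400,000 plants of the 900 t/yr demo size
implied_capacity_t = target_t / 250_000    # ~1,440 t/yr per plant for the company's figure

print(round(plants_at_demo_size), round(implied_capacity_t))
```

Under that assumption, the company's 250,000 figure implies future plants somewhat larger than the 900-tonne demonstration unit.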
"Highly scalable negative emission technologies are crucial if we are to stay below the 2-degree target [for global temperature rise] of the international community," said Christoph Gebald, co-founder and managing director of Climeworks. The plant sits on top of a waste heat recovery facility that powers the process. Fans push air through a filter system that collects CO2. When the filter is saturated, CO2 is separated at temperatures above 100 degrees Celsius.
The gas is then sent through an underground pipeline to a greenhouse operated by Gebrüder Meier Primanatura AG to help grow vegetables, like tomatoes and cucumbers.
Gebald and Climeworks co-founder Jan Wurzbacher said the CO2 could have a variety of other uses, such as carbonating beverages. They established Climeworks in 2009 after working on air capture during postgraduate studies in Zurich.
The new plant is intended to run as a three-year demonstration project, they said. In the next year, the company said it plans to launch additional commercial ventures, including some that would bury gas underground to achieve negative emissions.
"With the energy and economic data from the plant, we can make reliable calculations for other, larger projects," said Wurzbacher.
Note: see the interesting critical comments below, by Howard Herzog (MIT), concerning the real sustainability impact.
'Sideshow'
There are many critics of air capture technology who say it would be much cheaper to perfect carbon capture directly at fossil fuel plants and keep CO2 out of the air in the first place. Among the skeptics are Massachusetts Institute of Technology senior research engineer Howard Herzog, who called it a "sideshow" during a Washington event earlier this year. He estimated that total system costs for air capture could be as much as $1,000 per ton of CO2, or about 10 times the cost of carbon removal at a fossil fuel plant.
"At that price, it is ridiculous to think about right now. We have so many other ways to do it that are so much cheaper," Herzog said. He did not comment specifically on Climeworks but noted that the cost for air capture is high partly because CO2 is diffuse in the air, while it is more concentrated in the stream from a fossil fuel plant. Climeworks did not immediately release detailed information on its costs but said in a statement that the Swiss Federal Office of Energy would assist in financing. The European Union also provided funding.
In 2015, the National Academies of Sciences, Engineering and Medicine released a report saying climate intervention technologies like air capture were not a substitute for reducing emissions. Last year, two European scientists wrote in the journal Science that air capture and other "negative emissions" technologies are an "unjust gamble," distracting the world from viable climate solutions (Greenwire, Oct. 14, 2016).
Engineers have been toying with the technology for years, and many say it is a needed option to keep temperatures to controllable levels. It's just a matter of lowering costs, supporters say. More than a decade ago, entrepreneur Richard Branson launched the Virgin Earth Challenge and offered $25 million to the builder of a viable air capture design.
Climeworks was a finalist in that competition, as were companies like Carbon Engineering, which is backed by Microsoft Corp. co-founder Bill Gates and is testing air capture at a pilot plant in British Columbia.
-----
...
And let's also mention while we are here the similar device ("smog removal" for China cities) made by Studio Roosegaarde, Smog Free Project.
Note: let's "start" this new (delusional?) year with this short video about the ways "they" see things, and us. They? The "machines" of course, the bots, the algorithms...
An interesting reassembled trailer that was posted by Matthew Plummer-Fernandez on his Tumblr #algopop that documents the "appearance of algorithms in popular culture". Matthew was with us back in 2014, to collaborate on a research project at ECAL that will soon end btw and worked around this idea of bots in design.
Will this technological future become "delusional" as well, if we don't care enough? As essayist Eric Sadin points it in his recent book, "La silicolonisation du monde" (in French only at this time)?
Possibly... It is with no doubt up to each of us (to act), so as regarding our everyday life in common with our fellow human beings!
Everything but the detected objects is removed from the trailer of The Wolf of Wall Street. The software is powered by YOLO object detection, which has been used for similar experiments.
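For the curious, here is a minimal sketch of how such a trailer could be reassembled with today's tooling. It is not Plummer-Fernandez's original pipeline (which used an earlier YOLO release); it assumes the ultralytics and OpenCV packages and a hypothetical input file, and simply blacks out everything outside the detected bounding boxes, frame by frame:

```python
# Sketch: keep only detected objects in a video, blacking out everything else.
# Assumes `pip install ultralytics opencv-python`; not the original experiment's code.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # small pretrained COCO detector
cap = cv2.VideoCapture("trailer.mp4")           # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("objects_only.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]       # run detection on this frame
    mask = np.zeros_like(frame)                   # start from a black frame
    for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy().astype(int):
        mask[y1:y2, x1:x2] = frame[y1:y2, x1:x2]  # copy back only the detected regions
    out.write(mask)

cap.release()
out.release()
```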
Note: published a little while ago, this article from Time magazine ("The Next Revolution in Photography Is Coming") makes a fascinating point about the changing nature of photography, even though it mostly talks about photojournalism.
An interesting analysis by Stephen Mayes that shows how far photography is becoming data capture (sensing)... as much as -- or even more than -- it is visual capture. We should certainly discuss this question further with the scientists who are writing the algorithms of photography. As stated in the paper, a camera is slowly becoming primarily a "data-collecting device", and the image reconstructed from these data (by algorithms, then) a last "grip on the belief in the image as an objective record".
This resonates with our scholarly understanding of photography as the medium that was once believed, or fantasized, to be able to "capture reality" as it is. Early cinema later carried the same kind of beliefs. And we could think again about that fantastic novel by Adolfo Bioy Casares (The Invention of Morel, 1940), which extrapolated around these myths (of being able to fully record and register the "present", then replay it, entirely).
Today, this belief in our ability to "fully record the real" or dig into its recorded past (some big data projects) tends to be transferred onto data capture (and I obviously publish this post on purpose just after the presentation of an architecture project by fabric | ch that largely used and played around this idea of recording the present, which itself followed an installation around the same idea, through data).
So the connection that is made in this paper between photography and data capture is full of epistemological interests!
In the future, there will be no such thing as a "straight photograph"
It’s time to stop talking about photography. It’s not that photography is dead as many have claimed, but it’s gone.
Just as there’s a time to stop talking about girls and boys and to talk instead about women and men so it is with photography; something has changed so radically that we need to talk about it differently, think of it differently and use it differently. Failure to recognize the huge changes underway is to risk isolating ourselves in an historical backwater of communication, using an interesting but quaint visual language removed from the cultural mainstream.
The moment of photography’s “puberty” was around the time when the technology moved from analog to digital although it wasn’t until the arrival of the Internet-enabled smartphone that we really noticed a different behavior. That’s when adolescence truly set in. It was surprising but it all seemed somewhat natural and although we experienced a few tantrums along the way with arguments about promiscuity, manipulation and some inexplicable new behaviors, the photographic community largely accommodated the changes with some adjustments in workflow.
But these visible changes were merely the advance indicators of deeper transformations and it was only a matter of time before people’s imagination reached beyond the constraints of two dimensions to explore previously unimagined possibilities. And so it is that we find ourselves in a world where the digital image is almost infinitely flexible, a vessel for immeasurable volumes of information, operating in multiple dimensions and integrated into apps and technologies with purposes yet to be imagined.
Digital capture quietly but definitively severed the optical connection with reality, that physical relationship between the object photographed and the image that differentiated lens-made imagery and defined our understanding of photography for 160 years. The digital sensor replaced the optical record of light with a computational process that substitutes a calculated reconstruction using only one third of the available photons. That’s right, two thirds of the digital image is interpolated by the processor in the conversion from RAW to JPG or TIF. It’s reality but not as we know it.
For obvious commercial reasons camera manufacturers are careful to reconstruct the digital image in a form that mimics the familiar old photograph and consumers barely noticed a difference in the resulting image, but there are very few limitations on how the RAW data could be handled and reality could be reconstructed in any number of ways. For as long as there’s an approximate consensus on what reality should look like we retain a fingernail grip on the belief in the image as an objective record. But forces beyond photography and traditional publishing are already onto this new data resource, and culture will move with it whether photographers choose to follow or not.
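A small illustration of that one-third ratio, assuming the common Bayer RGGB filter layout (the article doesn't name the mosaic, but the arithmetic is the same for any sensor that measures a single colour value per pixel):

```python
# Why roughly two thirds of the channel values in a colour image are interpolated:
# a Bayer sensor measures only one of R, G, B at each pixel; demosaicing fills in the rest.
import numpy as np

h, w = 4, 4
bayer_channel = np.empty((h, w), dtype="U1")
bayer_channel[0::2, 0::2] = "R"   # RGGB pattern (one common layout)
bayer_channel[0::2, 1::2] = "G"
bayer_channel[1::2, 0::2] = "G"
bayer_channel[1::2, 1::2] = "B"

measured = h * w          # one value measured per pixel
needed = h * w * 3        # three channel values per pixel in the final image
print(bayer_channel)
print(f"measured fraction: {measured / needed:.2f}")   # 0.33 -> the other ~2/3 is interpolated
```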
As David Campbell has pointed out in his report on image integrity for the World Press Photo, this requires a profound reassessment of words like “manipulation” that assume the existence of a virginal image file that hasn’t already been touched by computational process. Veteran digital commentator Kevin Connor says, “The definition of computational photography is still evolving, but I like to think of it as a shift from using a camera as a picture-making device to using it as a data-collecting device.”
The differences contained in the structure and processing of a digital file are not the end of the story of photography’s transition from innocent childhood to knowing adulthood. There is so much more to grasp that very few people have yet grappled with the inevitable but as yet unimaginable impact on the photographic image. Taylor Davidson has described the camera of the future as an app, a software rather than a device that compiles data from multiple sensors. The smartphone’s microphone, gyroscope, accelerometer, thermometer and other sensors all contribute data as needed by whatever app calls on it and combines it with the visual data. And still that’s not the limit on what is already bundled with our digital imagery.
Our instruments are connected to satellites that contribute GPS data while connecting us to the Internet that links our data to all the publicly available information of Wikipedia, Google and countless other resources that know where we are, who was there before us and the associated economic, social and political activity. Layer on top of that the integration of LIDAR data (currently only in some specialist apps) then apply facial and object recognition software and consider the implication of emerging technologies such as virtual reality, semantic reality and artificial intelligence and one begins to realize the mind-boggling potential of computational imagery.
Things will go even further with the development of curved sensors that will allow completely different ways to interpret light, but that for the moment remains an idea rather than a reality. Everything else is already happening and will become increasingly evident as new technologies roll out, ushering us into a very different visual culture with expectations far beyond simple documentation.
Computational photography draws on all these resources and allows the visual image to create a picture of reality that is infinitely richer than a simple visual record, and with this comes the opportunity to incorporate deeper levels of knowledge. It won’t be long before photographers are making images of what they know, rather than only what they see. Mark Levoy, formerly of Stanford and now of Google puts it this way, “Except in photojournalism, there will be no such thing as a ‘straight photograph’; everything will be an amalgam, an interpretation, an enhancement or a variation – either by the photographer as auteur or by the camera itself.”
As we tumble forwards into these unknown territories there’s a curious throwback to a moment in art history when 100 years ago the Cubists revolutionized ways of seeing using a very similar (albeit analog) approach to what they saw. Picasso, Braque and others deconstructed the world and reassembled it not in terms of what they saw, but rather in terms of what they knew using multiple perspectives to depict a deeper understanding.
While the photographic world wrestles with even such basic tools as Photoshop there is no doubt that we’re moving into a space more aligned with Cubism than Modernism. It will not be long before our audiences demand more sophisticated imagery that is dynamic and responsive to change, connected to reality by more than a static two-dimensional rectangle of crude visual data isolated in space and time. We’ll look back at the black-and-white photograph that was the voice of truth for nearly a century, as a simplistic and incomplete source of information about what was happening in the world.
Some will consider this a threat, seeing only the danger of distortion and undetectable fakery and it’s certainly true that we’ll need to develop new measures by which to read imagery. We’re already highly skilled in distinguishing probable and improbable information and we know how to read written journalism (which is driven entirely by the writer’s imaginative ability to interpret reality in symbolic form) and we don’t confuse advertising imagery with documentary, nor the photo illustration on a magazine’s cover with the reportage inside. Fraud will always be a risk but with over a century of experience we’ve learned that we can’t rely on the mechanical process to protect us. New conventions will emerge and all the artistry that’s been developed since the invention of photography will find richer and deeper opportunities to express information, ideas and emotions with no greater risk to truth than we currently experience. The enriched opportunities for storytelling will allow greater complexity that’s closer to reality than the thinned-down simplification of 20th Century journalism and will open unprecedented connection between the subject and the viewer.
The twist is that new forces will be driving the process. The clue is in what already occurred with the smartphone. The revolutionary change in photography’s cultural presence wasn’t led by photographers, nor publishers or camera manufacturers but by telephone engineers, and this process will repeat as business grasps the opportunities offered by new technology to use visual imagery in extraordinary new ways, throwing us into new and wild territory. It’s happening already and we’ll see the impact again and again as new apps, products and services hit the market.
We owe it to the medium that we’ve nurtured into adolescence to stand by it and support it in adulthood even though it might seem unrecognizable in its new form. We know the alternative: it will be out the door and hanging with the wrong crowd while we sit forlornly in the empty nest wondering what we did wrong. The first step is to stop talking about the child it once was and to put away the sentimental memories of photography as we knew it for all these years.
It’s very far from dead but it’s definitely left the building.
Note: in the continuity of my previous post/documentation concerning the project Platform of Future-Past (fabric | ch's recent winning competition proposal), I publish additional images (several) and explanations about the second phase of the Platform project, for which we were mandated by Canton de Vaud (SiPAL).
The first part of this article gives complementary explanations about the project, but I also take the opportunity to post related works and researches we've done in parallel about particular implications of the platform proposal. This will hopefully bring a clearer understanding of the way we try to combine experimentations-exhibitions, the creation of "tools" and the design of larger proposals in our open process of work.
Notably, these related works concerned the approach to data, the breaking of the environment into computable elements and the inevitable questions raised by their uses as part of a public architecture project.
The information pavilion was potentially a slow, analog and digital "shape/experience shifter", as it was planned to be built in several succeeding steps over the years and possibly "reconfigure" to sense and look at its transforming surroundings.
The pavilion therefore retained an unfinished flavour as part of its DNA, inspired by this old kind of meshed construction (bamboo scaffolding), almost sketched. This principle of construction was used to help it "shift" if/when necessary.
In a general sense, the pavilion answered the conventional public program of an observation deck over a construction site. It also served the purpose of documenting the ongoing building process that often comes with it. By doing so, we turned the "monitoring dimension" (production of data) of such a program into a base element of our proposal. That's where a former experimental installation helped us: Heterochrony.
As can be noticed, the word "Public" was added to the title of the project between the two phases, to become Public Platform of Future-Past (PPoFP) ... an addition we believe was important. This is because it was envisioned that the PPoFP would monitor and use environmental data concerning the direct surroundings of the information pavilion (but NO DATA about uses/users). Data that we declared, in this case, public, while the treatment of the monitored data would also become part of the project, "architectural" (more on this below).
For these monitored data to remain public - as for the space of the pavilion itself, which would be part of the public domain and physically extend it - we had to ensure that they wouldn't be used by a third-party private service. We needed to keep an eye on the algorithms that would treat the spatial data. Or better, write them according to our design goals (more on this below).
That's where architecture meets code and data (again), obviously...
The Public Platform of Future-Past is a structure (an information and sightseeing pavilion), a Platform that overlooks an existing Public site while basically taking it as it is, in a similar way to an archeological platform over an excavation site.
The asphalt ground floor remains virtually untouched, with traces of former uses kept as they are, some quite old (a train platform linked to an early twentieth-century locomotive hall), some more recent (painted parking spaces). The surrounding environment will move and change considerably over the years as new constructions go up. The pavilion will monitor and document these changes. Hence the last part of its name: "Future-Past".
By nonetheless touching the site at a few points, the pavilion slightly reorganizes the area and creates spaces for a small new outdoor café and a bike parking area. This enhanced ground-floor program can work by itself, separated from the upper floors.
Several areas are linked to monitoring activities (input devices) and/or displays (in red, top), which concern points of interest and views from the platform or elsewhere. These areas consist of localized devices on the platform itself (5 locations), satellite devices directly implanted in the three construction sites, or even output devices placed in distant cities of the larger political area concerned by the new constructions (three museums, two new large public squares, a new railway station and a new metro). Inspired by the prior similar installation in a public park during a festival -- Heterochrony (bottom image) -- these raw data can be of different natures: visual, audio, values from sensors (%, °C, ppm, dB, lm, mb), ...
Input and output devices remain low-cost and simple in their expression: several input devices / sensors are placed outside the pavilion, within the structural elements, pointing toward areas of interest (construction sites or more specific parts of them). Directly related to these sensors and the sightseeing spots, but on the inside, are output devices with their recognizable blue screens. These are mainly voice interfaces: voice outputs driven by one bot according to architectural "scores" or algorithmic rules (middle image). Once the rules are designed, the "architectural system" runs on its own. That's why we've also named this system of automated bots "Ar.I.": it could stand for "Architectural Intelligence", as it is entirely part of the architectural project.
The coding of "Ar.I." and the use of data could easily become something more experimental, transformative and performative over the life of PPoFP.
Observers (users) and their natural "curiosity" play a central role: preliminary observations and monitorings are indeed the ones they produce in an analog way (eyes and ears), at each of the 5 points of interest and through their wanderings. Extending this natural interest, a simple cord hangs in front of each "output device"; pulling it triggers a set of new measures by all the related sensors on the outside. This new set of data enters the database and becomes readable by "Ar.I."
The whole part of the project regarding interaction and data treatment has been the subject of a dedicated short study (a document about this study can be accessed here -- in French only). Its main design implication is that "Ar.I." takes part in the process of "filtering" which happens between the "outside" and the "inside", by contributing to the creation of a variable but specific "inside atmosphere" ("artificial artificial", as the outside has been artificial as well since the Anthropocene, hasn't it?). By doing so, the "Ar.I." bot fully plays its part in the architecture's main program: triggering the perception of an inside, proposing patterns of occupation.
"Ar.I." computes spatial elements and mixes times. It can organize configurations for the pavilion (data, displays, recorded sounds, lightings, clocks). It can set it to a past, a present, but also to an estimated future disposition. "Ar.I." is mainly a set of open rules and a vocal interface, with the exception of the common access and conference space, which is equipped with visual displays as well. At some times "Ar.I." simply spells out data, while at others, more intriguingly, it starts to give "spatial advice" about the environmental data configuration.
In parallel to Public Platform of Future-Past, and in the frame of various research or experimental projects, scientists and designers at fabric | ch have been working to set up their own platform for declaring and retrieving data (more about this project, Datadroppers, here). A simple platform, but one adequate to our needs, on which we can develop as desired and where we know what is happening to the data. To further guarantee the nature of the project, a "data commune" was created out of it, and we plan to release the code on GitHub.
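As an illustration of what "declaring and retrieving data" on such a commons could look like, here is a minimal sketch with placeholder HTTP endpoints. The base URL, routes and field names below are invented for the example and are not Datadroppers' documented API (see the project link above for the real thing):

```python
# Sketch of declaring and retrieving tagged data on a simple, open data commons.
# Endpoints and field names are hypothetical, not Datadroppers' actual API.
import requests

BASE = "https://datadroppers.example"  # placeholder domain; the real platform is linked above

def declare(tags, values):
    """Push a tagged measurement to the commons."""
    return requests.post(f"{BASE}/drop", json={"tags": tags, "values": values}, timeout=10)

def retrieve(tags):
    """Get back every drop sharing the same tags."""
    return requests.get(f"{BASE}/find", params={"tags": ",".join(tags)}, timeout=10).json()

# Example: a studio sensor declares a reading, anyone can retrieve it later.
declare(["studio-station", "temperature"], {"celsius": 22.4})
print(retrieve(["studio-station", "temperature"]))
```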
In this context, we are also turning our own office into a test tube for various monitoring systems, so that we can assess the reliability and handling of different setups. It is also the occasion to further "hack" some basic domestic equipment and turn it into sensors, and to try new functions, with the help of our 3D printer in this case (middle image). Again, this experimental activity has been turned into a side project, Studio Station (ongoing, with Pierre-Xavier Puissant), while keeping the general background goal of "concept-proofing" the different elements of the main project.
A common room (conference room) in the pavilion hosts and displays the various data: 5 small screen devices, 5 voice interfaces for the 5 areas of interest, and a semi-transparent data screen. Inspired again by what was experimented with and realized back in 2012 during Heterochrony (top image).
----- ----- -----
PPoFP, several images. Day, night configurations & few comments
Public Platform of Future-Past, axonometric views day/night.
An elevated walkway that overlooks the almost archaeological site (past-present-future). The circulations and views define and articulate the architecture and the five main "points of interest". These main points concentrate spatial events, infrastructures and monitoring technologies. Layer by layer, the surroundings are filtered by various means and become enclosed spaces.
Walks, views over transforming sites, ...
Data treatment, bots, voice and minimal visual outputs.
Night views, circulations, points of view.
Night views, ground.
Random yet controllable lights at night. Underlined areas of interest, points of "spatial density".
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, researches, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as archive, references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.